Cluster closed - reinitialize connection?

We currently have a Couchbase 6.5 server in one Docker container and a Node.js/Express API server in another.

They both start via docker-compose and come up fine.

However, the Couchbase container takes a little while to finish starting, while the Node.js/Express container starts quickly and immediately tries to initialize and export a cluster object that is reused in various routes.

import * as couchbase from 'couchbase'
// `config` is our application configuration object (loaded elsewhere in this module)

let conn = `${config.database.database}://${config.database.host}`

// This log can be removed once we all see what connects to what
console.log(`conn: ${conn}`)

export const cluster = new couchbase.Cluster(conn, {
  username: config.database.username,
  password: config.database.password
});

// Re-exported so the routes can import it from this module
export { couchbase }

In the route module, we import the cluster object as follows and use it to initialize the bucket and collection objects for fetching/mutating data as appropriate:

import * as express from 'express'
import { cluster, couchbase } from '../couchdb'

const router: express.Router = express.Router()
const path = '/users';

// Open a specific Couchbase bucket, `user` in this case.
const bucket = cluster.bucket('user')

// And select the default collection
const coll = bucket.defaultCollection()

// ... inside some API handler:
if (req.params.id) {
  const result = await coll.get(req.params.id)
  const doc = result.value
  return res.status(200).send(doc)
}

The problem is that we get an exception saying the cluster connection is closed. Putting a 45-second delay on the API container startup gets around this, but that's fairly arbitrary and not ideal for production deployment. Is there some way to manage the cluster connection better on the API side? Ideally, I would like some way to test whether the connection underlying the cluster is open and, if not, re-open it. Any other elegant ideas would also be welcome. Thanks!

Hey @fredwang00,

The Node.js SDK should keep attempting to establish a connection to the cluster, but this functionality is not quite available in our underlying C library just yet. One workaround would be to wrap your connection object and retry connecting internally whenever there is a failure. I understand this is suboptimal, and we hope to have a fix available in the near future.
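
As a rough sketch of that workaround (the getCluster name, retry count, delay, and the ./config import path are placeholders for your own setup; the connection string is built the same way as in the question above):

import * as couchbase from 'couchbase'
import { config } from './config' // placeholder: wherever your configuration lives

let cluster: couchbase.Cluster | undefined

// Lazily create the cluster, verify it with a ping, and retry a few times
// before giving up. On failure the cached instance is dropped so the next
// attempt reconnects.
export async function getCluster(retries = 10, delayMs = 3000): Promise<couchbase.Cluster> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      if (!cluster) {
        cluster = new couchbase.Cluster(`${config.database.database}://${config.database.host}`, {
          username: config.database.username,
          password: config.database.password
        })
      }
      await cluster.ping() // throws while the connection is not yet usable
      return cluster
    } catch (err) {
      cluster = undefined
      if (attempt === retries) throw err
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
  throw new Error('unreachable')
}

Routes would then call await getCluster() (and open buckets/collections from the result) instead of importing a cluster that was created at module load time.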

Cheers, Brett

Hi @brett19 and @fredwang00,
I'm testing Couchbase from Node and I have the same issue.
I would like to know whether it has been resolved.

Thanks!

Is this related to https://issues.couchbase.com/browse/JSCBC-706?

It seems to be flagged for fix in 3.1.1 … is there a date for this?


Is this issue fixed in 3.1.1?

It has worked for me without Docker since 3.1, but in my Docker setup it still doesn't work.

Hey @EtzBetz, @geraldapeoples ,

We are aware of this issue and are working towards a long-term solution. You can track progress through the following ticket, which covers the improvement to connection behaviour:
https://issues.couchbase.com/browse/JSCBC-706

Note that in the majority of cases the connection failure occurs due to a misconfiguration somewhere, and you can start your application with debugging enabled to see the underlying reason for the failed connection with something akin to: DEBUG=* node index.

Cheers, Brett

@brett19 Hi Brett, thanks for the update … I will follow the issue via the ticket, and I will also try it out on our dev environment once more to see if I can find any misconfigurations.

Regards, Gerald

I use @brett19's solution. Wrapping the connection object works very well and it helps me centralize logic, like the reconnect:

module.exports.get = function (...args) {
    return collection.get(...args).catch(errorHandler);
};

function errorHandler(err) {
    // isNetworkError() and reconnect() are placeholders for your own
    // error check and reconnection logic
    if (isNetworkError(err)) reconnect();
    return Promise.reject(err);
}

You can also use cluster.ping() to test the connection.
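
For example, a small helper along these lines (a sketch only, reusing the cluster export from the question above) could back a health-check route or a pre-flight check before registering routes:

import { cluster } from '../couchdb'

// Returns true if the cluster responds to a ping, false otherwise
export async function isClusterReachable(): Promise<boolean> {
  try {
    await cluster.ping()
    return true
  } catch {
    return false
  }
}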