Could not dispatch request, cancelling instead of retrying

We are seeing a lot of "Could not dispatch request, cancelling instead of retrying" exceptions in our logs after one node in the cluster went down (even for just a few seconds of network problems).

The real issue is that even when the node is back up and healthy (and able to answer new clients), the currently running application keeps failing, and only a restart of the application resolves the issue.

We looked around in the SDK and did not find anything that can "refresh" the existing connection.
We've considered creating a new connection (a new Cluster object and Buckets) in the application once too many of these exceptions have been thrown - but that really does not seem like the right approach.

Are we missing something?
Shouldn't the connector try to re-open the connection to that specific node?
Is there any best practice for handling this issue?
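As a stop-gap, one option we could imagine is wrapping the failing operations in a generic retry with exponential backoff until the node has recovered. Below is a minimal, Couchbase-agnostic sketch of that idea - the class name, parameters, and the exception message in the comment are all hypothetical, and this is not something from the SDK itself:

```java
import java.util.function.Supplier;

// Hypothetical sketch: retry a transient operation with exponential backoff.
// This does not touch the Couchbase client; it just re-runs the supplied
// operation (e.g. a get/upsert that threw a cancellation exception).
public class RetryWithBackoff {

    // Runs `op` up to `maxAttempts` times, doubling the sleep between
    // attempts. Rethrows the last failure if every attempt fails.
    public static <T> T retry(Supplier<T> op, int maxAttempts, long initialDelayMs) {
        long delay = initialDelayMs;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(delay);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw new RuntimeException("interrupted during backoff", ie);
                    }
                    delay *= 2; // exponential backoff
                }
            }
        }
        throw last;
    }
}
```

But blanket retries like this only mask the symptom; what we really want to understand is why the client does not recover on its own once the node is back.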


Hi @idanna,

Which version of the client are you using, and which connector? The client automatically reconfigures itself when a node goes down; no special handling is required on the application side.

Hi @subhashni, I am also experiencing the same issue.
Client version: 2.2.5
Server version: 4.6.1-3652 Enterprise Edition (build-3652)