I am using the Spark connector for Couchbase and have a question about connection retries. Let's say the data nodes are momentarily unavailable: does the connector or Couchbase retry internally? If yes, how many times does it retry, and how can we control the number of retries?
@rajib761 You cannot control the number of retries on the sockets. The underlying SDK will retry the socket until it either comes online again or the SDK receives a cluster configuration telling it that the node has been removed from the cluster (failover or rebalance out), at which point it will stop trying to connect.
The part you can influence is how your requests behave, using the properties I think you are already aware of from the emails we've traded.
What is the specific issue you are seeing and what are you trying to accomplish with those properties?
Thanks Michael for your response. I was trying to see if I can change the timeout and circuit-breaker settings while creating the connection through the Spark connector. Through the SDK I can do it with the environment object, but I did not see a way to do this in the Spark connector.
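One possible workaround, assuming the connector sits on the Java SDK 3.x (which can pick up `ClusterEnvironment` settings from JVM system properties prefixed with `com.couchbase.env.`), is to pass those properties to the driver and executor JVMs through Spark's `extraJavaOptions`. This is a sketch, not a confirmed connector feature; the property name and value below are illustrative:

```shell
# Sketch, assuming Java SDK 3.x under the connector, which reads
# environment settings from system properties prefixed with
# "com.couchbase.env.". Adjust property names/values to your SDK version.
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dcom.couchbase.env.timeout.kvTimeout=10s" \
  --conf "spark.executor.extraJavaOptions=-Dcom.couchbase.env.timeout.kvTimeout=10s" \
  my-spark-job.jar
```

Setting the properties on both the driver and the executors matters, since the connector opens SDK connections on the executors as well.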