The 1.1.2 release is the second bugfix release in the 1.1 series. Failover and rebalance resiliency have been improved, and a bug involving persistence constraints and replicas has been fixed. Spymemcached has been upgraded to 2.8.11, which no longer tries to reconnect to nodes that are not part of the cluster anymore.
Spymemcached has been upgraded to 2.8.11, which fixes an important bug that could occur when commands like "stats" were used together with memcached buckets: the node list was not updated correctly. This may also affect broadcast operations. In addition, when using authenticated buckets, the code now only tries to reconnect when the node is still part of the cluster (by checking against the node map). This was not a correctness issue on its own, but it gives the IO thread more time to handle IO instead of attempting unnecessary reconnects.
When the streaming connection dies and no node is available for reconnection, the client previously retried as fast as it could to reestablish the connection, which could cause unnecessary CPU usage. The fix implements a backoff algorithm that increases the delay between retries from 1 second up to a maximum of 10 seconds. There is no point at which retrying stops, so once one of the nodes in the list is back online, the client will be able to reconnect. INFO-level log messages indicate the reconnect attempts.
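The backoff behavior described above can be sketched roughly as follows. This is an illustrative model, not the client's actual internals: the class name, the doubling growth strategy, and the method names are assumptions; only the 1-second floor and 10-second cap come from the release note.

```java
// Hypothetical sketch of the reconnect backoff: the delay grows
// (here by doubling, an assumed strategy) from 1 second up to a
// 10-second cap, and there is no limit on the number of attempts.
public class ReconnectBackoff {
    private static final long INITIAL_DELAY_MS = 1_000L;
    private static final long MAX_DELAY_MS = 10_000L;

    private long currentDelayMs = INITIAL_DELAY_MS;

    /** Delay to sleep before the next reconnect attempt. */
    public long nextDelayMs() {
        long delay = currentDelayMs;
        // Grow the delay for the following attempt, capped at 10s.
        currentDelayMs = Math.min(currentDelayMs * 2, MAX_DELAY_MS);
        return delay;
    }

    /** Called once a streaming connection is reestablished. */
    public void reset() {
        currentDelayMs = INITIAL_DELAY_MS;
    }
}
```

Because the delay is capped rather than the attempt count, a node that comes back hours later is still picked up, at most 10 seconds after it reappears.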
When writing view operations to nodes, an operation was previously cancelled as soon as one node had no connection available. Now, when a node cannot provide a connection within a reasonable time (a maximum of 5 seconds), a different node is tried. After a maximum of 6 retries the operation is considered cancelled, which amounts to at most 6 * 5 = 30 seconds.
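A minimal sketch of this retry loop, under stated assumptions: the class and method names are invented for illustration, the round-robin order over the node list is assumed, and the connection probe is abstracted behind a predicate rather than the client's real connection pool.

```java
import java.util.List;
import java.util.function.Function;

// Illustrative model of the view-operation node selection: try up to
// 6 nodes, each bounded by a (notional) 5-second connection timeout,
// and give up after all 6 attempts fail.
public class ViewNodeRetry {
    static final int MAX_ATTEMPTS = 6;
    static final long PER_NODE_TIMEOUT_MS = 5_000L; // bound per attempt

    /**
     * Returns the first node that can supply a connection, or null if
     * the operation should be cancelled after 6 failed attempts
     * (worst case 6 * 5s = 30 seconds elapsed).
     */
    public static String selectNode(List<String> nodes,
                                    Function<String, Boolean> hasConnection) {
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            // Assumed round-robin over the configured node list.
            String node = nodes.get(attempt % nodes.size());
            if (hasConnection.apply(node)) {
                return node;
            }
        }
        return null; // caller cancels the view operation
    }
}
```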
In general, this slightly improves performance, and it also helps when nodes have problems fulfilling requests and when shutting down.
When Couchbase Server closes the chunked HTTP connection properly (at the end of a rebalance), a different Netty event can be generated that was not previously caught. This led to a case where the streaming connection was never fully reestablished, leaving a potentially "deaf" client.
When running against a 1-node cluster with replicas enabled (which is the default) and using persistence constraints with ReplicateTo.One or higher, an IndexOutOfBoundsException was raised. The client now properly checks the replica count when using "observe". Note that this does not affect 1-node buckets where replication has been disabled completely.
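The fix amounts to validating the requested constraint against the bucket's actual replica count before walking the replica list. The sketch below is a hypothetical guard, not the client's real code; the class name, method, and exception choice are assumptions.

```java
// Illustrative guard: fail fast with a descriptive error when a
// ReplicateTo constraint asks for more replicas than the bucket has,
// instead of hitting an IndexOutOfBoundsException later.
public class ObserveCheck {
    public static void checkReplicaCount(int requestedReplicas,
                                         int availableReplicas) {
        if (requestedReplicas > availableReplicas) {
            throw new IllegalArgumentException(
                "Requested replication to " + requestedReplicas
                + " node(s), but the bucket only has "
                + availableReplicas + " replica(s) configured.");
        }
    }
}
```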