h1. Situational Behaviors

To properly write a program against a system, especially a complex system such as a distributed database, it is necessary to understand how that system behaves in situations that are outside normal operation.  This document describes those situations and what application code should expect to handle in each of them.
{info}This document is still under development and review. Comments and questions are greatly appreciated.{info}

h2. Maintenance Operations


h3. When a Node is Added

During a node addition, all data operations and query operations will continue to operate normally.  Application code should not be able to discern (other than effects on resource utilization) that anything is happening.

h3. When a Node is Removed

When a node is removed, all data operations and query operations will continue to operate normally.  Again, application code should not be able to discern (other than effects on resource utilization) that anything is happening.


{note}There is a known issue with this behavior when using server releases prior to 2.0. With 1.6, 1.7 and 1.8, an application will see errors as the rebalance completes. These will not be retried by the client. This condition is also, unfortunately, not distinguishable from a node failure.

The application code may retry the operation shortly after the rebalance completes and it will most likely be handled normally.

Application developers are advised to consider what is appropriate for their application in this circumstance.

The issue tracking this behavior is [MB-5406|http://www.couchbase.com/issues/browse/MB-5406].
{note}
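
For illustration, a minimal sketch of such a retry using the 1.x-era Java SDK might look like the following; the host name, bucket name, key, and one-second delay are all illustrative, not recommendations:

{code:java}
import com.couchbase.client.CouchbaseClient;
import net.spy.memcached.internal.OperationFuture;
import java.net.URI;
import java.util.Arrays;

public class RebalanceRetry {
    public static void main(String[] args) throws Exception {
        // Illustrative host and bucket; use your own cluster details.
        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://node1.example.com:8091/pools")),
                "default", "");

        OperationFuture<Boolean> result = client.set("foo", 0, "document-body");
        if (!result.get()) {
            // The rebalance may just be completing; wait briefly and retry once.
            Thread.sleep(1000);
            result = client.set("foo", 0, "document-body");
            if (!result.get()) {
                // Still failing: handle as a real error for this application.
            }
        }
        client.shutdown();
    }
}
{code}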


h3. When a Node is Added and Removed

In this scenario, data migrates from the node being removed to the node being added.  This is frequently referred to as a _swap rebalance_.

All data operations and query operations will continue to operate normally.  Again, application code should not be able to discern (other than effects on resource utilization) that anything is happening.


{note}Application code may see the same errors described in the section above about removing a node, due to the same underlying issue. See the note in that section for recommendations.{note}

h2. Node Failures


h3. When a Node Fails

The exact behavior depends on whether the operation is a data operation, such as an update or modification, or a query.

If the requested operation is a data operation, such as retrieving a document with a get() or storing a document with a set(), success or failure depends on which node is responsible for that document.  In Couchbase Server, this is determined by a hashing algorithm driven by the key/id of the given document.  For example, "foo" could be mapped to the hostname _node1.example.com_ and "bar" could be mapped to _node2.example.com_.  In the event the server for _node1_ fails, any attempt to modify the key/id "foo" will eventually fail.  It will not necessarily fail immediately, as the client library will continue to try to reconnect to the node.  As long as the requested operation is still valid and has not reached the timeout threshold, the client will continue to try to service the request.
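
As a minimal sketch of what this looks like to application code, using the 1.x-era Java SDK (host and bucket names are illustrative), a get() against a key whose node is down eventually surfaces a timeout:

{code:java}
import com.couchbase.client.CouchbaseClient;
import net.spy.memcached.OperationTimeoutException;
import java.net.URI;
import java.util.Arrays;

public class GetDuringNodeFailure {
    public static void main(String[] args) throws Exception {
        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://node1.example.com:8091/pools")),
                "default", "");

        try {
            // "foo" hashes to one specific node; if that node is down the
            // client retries internally until its timeout threshold passes.
            Object doc = client.get("foo");
            System.out.println("Got: " + doc);
        } catch (OperationTimeoutException e) {
            // The node responsible for "foo" did not respond in time.
            // Keys mapped to healthy nodes (e.g. "bar") may still succeed.
        }
        client.shutdown();
    }
}
{code}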

If the requested operation is a query, the request will be routed to an available node.  The JSON response from the cluster will contain an errors\[\] array, and the client may indicate an error condition.  By modifying the on_error parameter of the query, the client may request different behavior from the server.  Note that this means a client *may get query results which are incomplete*, and it is the responsibility of the application to handle these error conditions.  See the Couchbase Server documentation for more information.
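For illustration, a response carrying partial results might look like the following; the row, node address, and reason text are examples only:

{code}
{
  "total_rows": 2,
  "rows": [
    {"id": "bar", "key": "bar", "value": null}
  ],
  "errors": [
    {"from": "node1.example.com:8092", "reason": "timeout"}
  ]
}
{code}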

h3. When All Nodes Fail

When all nodes fail, no operations will be available.  If the client had previously started and been working with the cluster, it will continue to try to connect to that cluster until the cluster becomes available or the client is terminated.

If a newly instantiated client tries to connect to the cluster, it will fail to initialize.

h2. Node Failover


h3. When Failover is Triggered Without Full Failure


h3. When Autofailover is Triggered Because a Node is Down


h3. When Failover is Triggered with Full Failure


h3. When a Node is Added Back and Rebalanced In


h2. Notable Circumstances

h3. When the Bootstrap Node for a Client Fails

If the bootstrap node, which is always the first URI in the list of URIs supplied to the client, happens to fail, the client will walk the list of URIs until it finds a node which can service its requests.
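
For example, with the 1.x-era Java SDK an application might supply several URIs at construction time so bootstrap can proceed even when the first node is down (host names are illustrative):

{code:java}
import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.Arrays;
import java.util.List;

public class BootstrapList {
    public static void main(String[] args) throws Exception {
        // List more than one node; if node1 is down, the client walks the
        // list and bootstraps from node2 or node3 instead.
        List<URI> nodes = Arrays.asList(
                URI.create("http://node1.example.com:8091/pools"),
                URI.create("http://node2.example.com:8091/pools"),
                URI.create("http://node3.example.com:8091/pools"));
        CouchbaseClient client = new CouchbaseClient(nodes, "default", "");
        client.shutdown();
    }
}
{code}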

This is currently implemented two different ways.

In some clients (Java), persistent failure of operations over a period of time is treated as a reason to distrust the current operating configuration.  This will make the client attempt to re-bootstrap.

In other clients (.NET), the client will make a small request of all nodes of the cluster on a regular basis to verify connectivity.  If any node is seen as not available, the client will attempt to re-bootstrap.

h3. Starting a Client When the Cluster is Unreachable

The client will fail to initialize.  The application should handle this condition appropriately.
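
A minimal sketch of handling this with the 1.x-era Java SDK might look like the following; the exact exception thrown on a failed bootstrap can vary by SDK version, so both checked and unchecked failures are caught here:

{code:java}
import com.couchbase.client.CouchbaseClient;
import java.io.IOException;
import java.net.URI;
import java.util.Arrays;

public class StartupCheck {
    public static void main(String[] args) {
        CouchbaseClient client = null;
        try {
            client = new CouchbaseClient(
                    Arrays.asList(URI.create("http://node1.example.com:8091/pools")),
                    "default", "");
        } catch (IOException e) {
            // Cluster unreachable at startup: log, alert, retry later, or
            // fall back, as appropriate for the application.
        } catch (RuntimeException e) {
            // Some SDK versions signal bootstrap failure with an unchecked
            // exception; handle it the same way.
        }
        if (client != null) {
            client.shutdown();
        }
    }
}
{code}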

h2. What is Happening When My Application Sees...

h3. A TMPFAIL Response

A TMPFAIL (which may also appear as 0x86 or 134 in particular circumstances) is returned by Couchbase Server to indicate to the client application that the request could not be serviced at this time, but that retrying the request later should succeed. Application code should do whatever is appropriate in this context, for example a regular backoff/retry until some deadline.
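
A minimal sketch of such a backoff/retry loop with the 1.x-era Java SDK might look like the following; the five-second deadline, initial backoff, and cap are illustrative, and a production application would also distinguish a temporary failure from other failed statuses:

{code:java}
import com.couchbase.client.CouchbaseClient;
import net.spy.memcached.internal.OperationFuture;
import java.net.URI;
import java.util.Arrays;

public class TempFailRetry {
    public static void main(String[] args) throws Exception {
        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://node1.example.com:8091/pools")),
                "default", "");

        long deadline = System.currentTimeMillis() + 5000; // overall budget
        long backoff = 50;                                 // start at 50 ms
        boolean stored = false;
        while (!stored && System.currentTimeMillis() < deadline) {
            OperationFuture<Boolean> f = client.set("foo", 0, "document-body");
            stored = f.get();
            if (!stored) {
                // Possibly a temporary failure: back off and try again.
                Thread.sleep(backoff);
                backoff = Math.min(backoff * 2, 1000); // cap the backoff
            }
        }
        if (!stored) {
            // Deadline exceeded: surface an error to the caller.
        }
        client.shutdown();
    }
}
{code}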

This can arise in a few different situations.

First, this response may be received when the memory quota for a given bucket has been exhausted. This is frequently a result of heavier load than was expected or planned for.

Second, it may be in response to a flush command issued either through the console or by the application. During the flush, all operations will be answered with a TMPFAIL. After the flush completes there should be no more TMPFAILs returned, but as of this writing the 2.0.0 server does return TMPFAIL for longer than expected owing to MB-6232/MB-7160.

Third, it may be in response to a bucket being in the warmup/pending state. This can happen if, for instance, the Couchbase service is restarted on the OS.

In general, whenever appropriate an application should check for and provide error handling for a TMPFAIL response. It is also highly recommended that application developers load/stress test their software to ensure this situation is handled appropriately when the workload cannot be predicted. This is one of the primary ways unexpectedly high request "pressure" is surfaced, and it is a difference between Couchbase and other systems: Couchbase is always fast by design, and rather than simply slowing down it gives the application, which has more context about what is appropriate in a particular situation, the ability to react well on behalf of the user.