By default Couchbase Server uses the URI
http://localhost:8091/pools for connections
with clients. A Couchbase client will also automatically adjust
the port it uses to communicate with the Couchbase
Server. Therefore you do not need to change your connection code
to accommodate cluster rebalance, or node addition or removal.
To connect and perform data operations, you need at least one bucket established, such as the default bucket you can create in the Couchbase Administrative Console.
The following shows the basic steps for creating a connection. Other languages and frameworks follow a similar pattern, with slight variations due to the language and its compiler or interpreter:
Include, import, link, or require Couchbase Client libraries into your program files. In the example that follows, we require 'couchbase'.
Provide connection information for the Couchbase cluster.
Typically this includes a URI, a bucket name, a password, and
optional parameters, and can be provided as a list or a string.
To avoid a failed connection, you should provide and try at
least two URIs for two different nodes. In the following
example, we provide connection information as
"http://<host>:<port>/pools". In this
case no password is required.
Create an instance of a Couchbase client object. In the
example that follows, we create a new client instance with the
statement client = Couchbase.connect.
Perform any database operations for your application, such as reads, writes, and deletes.
If needed, destroy the client, and therefore disconnect.
require 'rubygems'
require 'couchbase'

client = Couchbase.connect "http://<host>:8091/pools"
client.quiet = false

begin
  client.set "hello", "Hello World!", :ttl => 10
  spoon = client.get "hello"
  puts spoon
rescue Couchbase::Error::NotFound => e
  puts "There is no record."
end
In this example, we set and retrieve data in a Ruby begin/rescue/end block. The code attempts to set the value "Hello World!" for the key "hello" with an expiration of 10 seconds, then gets the value for the "hello" key and outputs it. If the Couchbase client receives a Couchbase::Error::NotFound error, it outputs "There is no record."
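The bootstrap URI passed to Couchbase.connect above follows the pattern "http://<host>:<port>/pools". As a minimal sketch, a helper that assembles it (the helper name and its default port argument are illustrative, not part of the Couchbase API):

```ruby
# Illustrative helper (not part of the Couchbase API): build a
# bootstrap URI of the form http://<host>:<port>/pools.
def bootstrap_uri(host, port = 8091)
  "http://#{host}:#{port}/pools"
end

puts bootstrap_uri("couchbase1.example.com")
# http://couchbase1.example.com:8091/pools
```

Port 8091 is the default administration port shown throughout this section; override the second argument if your cluster uses a different one.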
(Optional) Depending on the language you are using, you may be responsible for explicitly destroying the Couchbase client object, and thereby closing the connection. It is a best practice to reuse the same client instance across multiple processes and threads, rather than constantly create and destroy clients. This provides better application performance and reduces processing time.
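The reuse pattern can be sketched as a memoized, process-wide client. This is a hedged sketch: the module name is illustrative, and a placeholder object stands in for the real Couchbase.connect call so the memoization itself is visible:

```ruby
# Sketch of client reuse: connect once per process and hand out the
# same instance, instead of connecting on every operation.
module AppClient
  def self.connect_count
    @connect_count ||= 0
  end

  def self.client
    @client ||= begin
      @connect_count = connect_count + 1
      # In a real application this line would be:
      #   Couchbase.connect("http://<host>:8091/pools")
      Object.new
    end
  end
end

AppClient.client
AppClient.client
puts AppClient.connect_count  # the connect body ran only once
```

Every caller of AppClient.client shares one connection, so the expensive connect happens once no matter how many operations follow.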
The next example, in Java, demonstrates how it is safest to provide at least two possible node URIs when creating the initial connection. That way, if one node is down when your application attempts to connect, the client automatically reattempts the connection with the second node URI:
// Set up at least two URIs in case one server fails
List<URI> servers = new ArrayList<URI>();
servers.add(new URI("http://<host>:8091/pools"));
servers.add(new URI("http://<host>:8091/pools"));

// Create a client talking to the default bucket
CouchbaseClient cbc = new CouchbaseClient(servers, "default", "");

System.err.println(cbc.get("thisname") + " is off developing with Couchbase!");
A similar approach should be followed in any language when attempting to connect to a Couchbase cluster. That is, you should set up an array of two or more possible nodes and then attempt to connect to at least one node in the array before performing other operations. The following demonstrates creating a connection with more than one possible URI in Ruby:
Couchbase.connect(:node_list => ['<host>:8091', '<host>:8091', 'example.net'])
After your initial connection with Couchbase Server, you will not need to reattempt server connection using an explicit list of node URLs. After this initial connection, your Couchbase client will receive cluster information with all nodes available for connection. After rebalance and failover, if a Couchbase client object still exists, it will receive updated cluster information with updated node URLs.
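Because an existing client refreshes its cluster map after rebalance or failover, a common approach is to briefly retry a failed operation rather than recreate the client. A minimal, generic retry sketch (with_retries is illustrative, not part of the Couchbase API; a real application would rescue the client's transient connection errors rather than StandardError):

```ruby
# Generic retry sketch: reattempt a block a few times, as you might
# wrap a get/set while the client refreshes its cluster map.
def with_retries(attempts = 3)
  tries = 0
  begin
    tries += 1
    yield
  rescue StandardError
    retry if tries < attempts
    raise
  end
end

calls = 0
result = with_retries(3) do
  calls += 1
  raise "transient node error" if calls < 3
  "ok"
end
puts result  # ok
```

Here the block fails twice and succeeds on the third attempt, which mirrors an operation succeeding once the client has picked up the updated node list.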