The following sections provide information on best practices for different aspects of Couchbase Server 2.0.
Port Exhaustion on Windows located in Microsoft Windows Installation
The TCP/IP port allocation on Windows by default includes a restricted number of ports available for client communication. For more information on this issue, including information on how to adjust the configuration and increase the available ports, see MSDN: Avoiding TCP/IP Port Exhaustion.
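As an illustration, on Windows Server 2008 and later you can inspect and widen the dynamic (ephemeral) port range with the netsh utility; the start and num values below are examples only and should be adjusted to suit your environment:

    rem Display the current dynamic port range for TCP
    netsh int ipv4 show dynamicport tcp

    rem Widen the range (example values; adjust for your environment)
    netsh int ipv4 set dynamicport tcp start=10000 num=55535

On older versions of Windows, the equivalent adjustment is made through the MaxUserPort registry value, as described in the MSDN article.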
Default Bucket Should Only Be Used for Testing located in Initial Server Setup
The default bucket should only be used for testing; it should not be used to store live application data. Instead, create a bucket specifically for your application.
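For example, a dedicated bucket can be created with the couchbase-cli tool; the host, bucket name, RAM quota, and credentials shown here are placeholders for your own values:

    # Create a Couchbase bucket with a 512 MB RAM quota and one replica
    couchbase-cli bucket-create -c 192.168.0.1:8091 \
      -u Administrator -p password \
      --bucket=myapp --bucket-type=couchbase \
      --bucket-ramsize=512 --bucket-replica=1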
Back up your data before performing an upgrade located in Upgrading to Couchbase Server 2.0.x
Before you perform an upgrade, whether online or offline, you should back up your data; see Section 5.6, “Backup and Restore”.
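As an illustration, a full cluster backup can be taken with the cbbackup tool supplied with Couchbase Server 2.0; the host, credentials, and backup directory below are placeholders:

    # Back up all buckets on the cluster to a local directory
    cbbackup http://HOST:8091 /backups/2.0-upgrade -u Administrator -p password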
Use Online Upgrades for Couchbase Server 1.8.1 to Couchbase Server 2.0 located in Upgrade Notes 1.8.1 to 2.0
We recommend the online upgrade method when moving from 1.8.1 to 2.0. The process is quicker and can take place while your cluster and application are up and running. When you upgrade from Couchbase Server 1.8.1 to Couchbase Server 2.0, the data files are converted from the SQLite format used in 1.8.1 and earlier to the new Couchstore format. This conversion increases the upgrade time and requires additional disk space to support the migration. Be aware that if you perform a scripted online upgrade from 1.8.1 to 2.0, you should allow a 10-second delay between adding a 2.0 node to the cluster and starting the rebalance; if you request the rebalance too soon after adding a 2.0 node, the rebalance may fail.
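A minimal sketch of the scripted sequence, assuming the couchbase-cli tool and placeholder host names and credentials, might look like this:

    # Add the new 2.0 node to the existing cluster
    couchbase-cli server-add -c EXISTING_HOST:8091 \
      --server-add=NEW_NODE:8091 -u Administrator -p password

    # Wait at least 10 seconds before requesting the rebalance
    sleep 10

    # Start the rebalance
    couchbase-cli rebalance -c EXISTING_HOST:8091 -u Administrator -p password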
Ensure Capacity for Node Removal located in Removing a Node from a Cluster
Before you remove a node from the cluster, ensure that the remaining nodes have the capacity to handle your workload. For more information on the considerations, see Choosing when to shrink your cluster. For the best results, use a swap rebalance to swap out the node you want to remove and swap in a replacement node. For more information on swap rebalance, see Section 5.7.3, “Swap Rebalance”.
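Adding one node and removing another in the same rebalance request causes the rebalance to be performed as a swap rebalance. A sketch using couchbase-cli, with placeholder host names and credentials:

    # Swap in a replacement node while removing the old one
    couchbase-cli rebalance -c CLUSTER_HOST:8091 \
      --server-add=NEW_NODE:8091 \
      --server-remove=OLD_NODE:8091 \
      -u Administrator -p password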
Failed Over Nodes located in Swap Rebalance
With swap rebalance functionality, after a node has failed over, you should either clean up and re-add the failed-over node, or add a new node, and then perform a rebalance as normal. The rebalance will be handled as a swap rebalance, which minimizes data movement without affecting the overall capacity of the cluster.
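For example, a failed-over node that has been cleaned up can be re-added and the cluster rebalanced with couchbase-cli; the host names and credentials below are placeholders:

    # Re-add the previously failed-over node
    couchbase-cli server-readd -c CLUSTER_HOST:8091 \
      --server-add=FAILED_NODE:8091 -u Administrator -p password

    # Rebalance to reintegrate the node into the cluster
    couchbase-cli rebalance -c CLUSTER_HOST:8091 -u Administrator -p password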
Monitoring a Rebalance located in Monitoring During Rebalance
You should monitor the system during and immediately after a rebalance operation until you are confident that replication has completed successfully.
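For example, the progress of a running rebalance can be checked from the command line with couchbase-cli; the host and credentials are placeholders:

    # Report whether a rebalance is running and its progress
    couchbase-cli rebalance-status -c CLUSTER_HOST:8091 -u Administrator -p password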
Enable Parallel Compaction located in Enabling Auto-Compaction
You should run data and view index compaction in parallel if your disk subsystem has sufficient throughput to support the additional I/O load.
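Parallel compaction can be enabled cluster-wide through the REST API; this is a sketch assuming the /controller/setAutoCompaction endpoint, with placeholder host, credentials, and fragmentation threshold:

    # Enable parallel database and view compaction,
    # triggering at 30% database fragmentation
    curl -u Administrator:password -X POST \
      http://CLUSTER_HOST:8091/controller/setAutoCompaction \
      -d parallelDBAndViewCompaction=true \
      -d 'databaseFragmentationThreshold[percentage]=30'

The same setting can also be enabled through the auto-compaction settings in the web console.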