Upgrade Disk Size


How can I increase the disk size in the cluster? Does it work if I stop Couchbase, resize the disk quota, and start Couchbase again on each node? Or do I have to go the full way: remove the node, stop it, resize the quota, start it, re-join, rebalance?

Thanks :slight_smile:


If your disk/filesystem supports dynamic partition resizing, then you can simply resize the filesystem Couchbase is using for the data and/or index directory, and Couchbase will happily start using the additional space.
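For example, with LVM and ext4 the resize can be done fully online; this is a sketch assuming the data directory sits on its own logical volume (the volume group, volume name, and mount point below are made-up placeholders):

```shell
# Grow the logical volume by 50 GB (requires free space in the volume group).
sudo lvextend -L +50G /dev/vg0/cb-data

# Grow the ext4 filesystem online to fill the enlarged volume.
sudo resize2fs /dev/vg0/cb-data

# Couchbase starts using the extra space automatically; verify with:
df -h /opt/couchbase/var
```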

If you don’t have such a disk configuration, then you need to treat it as another maintenance operation:

A) If you are happy to take downtime, you can: (1) disable automatic failover if enabled, (2) shut down Couchbase on the affected node, (3) perform the disk/filesystem resize, (4) start Couchbase on the node, (5) re-enable automatic failover (if applicable).
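A rough sketch of those steps using couchbase-cli on a standard Linux install (host names, credentials, and the failover timeout are placeholder assumptions):

```shell
# (1) Disable automatic failover cluster-wide.
couchbase-cli setting-autofailover -c cluster-node:8091 \
  -u Administrator -p password --enable-auto-failover=0

# (2) Shut down Couchbase on the affected node.
sudo /etc/init.d/couchbase-server stop

# (3) Perform the disk / filesystem resize here, then...

# (4) Start Couchbase on the node again.
sudo /etc/init.d/couchbase-server start

# (5) Re-enable automatic failover (timeout is in seconds).
couchbase-cli setting-autofailover -c cluster-node:8091 \
  -u Administrator -p password --enable-auto-failover=1 --auto-failover-timeout=30
```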

B) If you cannot permit downtime, then yes, you would need to: (1) rebalance the node out of the cluster and remove it, (2) perform the disk/filesystem resize, (3) add the node back and rebalance it in.
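The no-downtime path can be sketched with couchbase-cli as well (again, host names and credentials are placeholders):

```shell
# (1) Rebalance the node out of the cluster, removing it.
couchbase-cli rebalance -c cluster-node:8091 -u Administrator -p password \
  --server-remove=node-to-resize:8091

# (2) Perform the disk / filesystem resize on the removed node.

# (3) Add the node back and rebalance it in.
couchbase-cli server-add -c cluster-node:8091 -u Administrator -p password \
  --server-add=node-to-resize:8091 \
  --server-add-username=Administrator --server-add-password=password
couchbase-cli rebalance -c cluster-node:8091 -u Administrator -p password
```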

Thanks, worked perfectly with A) :smile:

We currently have a Couchbase 2.2 cluster (Community Edition) in our old datacenter and a Couchbase 3.0 cluster (Enterprise Edition) in the new datacenter. We are working on migrating everything to the new datacenter, so we have XDCR from CB 2.2 to CB 3.0, which normally works perfectly. Now, after restarting nodes in the new cluster, XDCR produces lots of errors like these:

[xdcr:error,2014-11-26T14:53:02.314,ns_1@xxx:<0.7709.948>:xdc_vbucket_rep_xmem_worker:handle_call:381]out of 1 docs, succ to send 0 docs, fail to send others (by error type, enoent: 0, not-my-vb: 1, einval: 0, timeout: 0 other errors: 0
[xdcr:error,2014-11-26T14:53:11.022,ns_1@xxx:<0.7709.948>:xdc_vbucket_rep_xmem_worker:handle_call:217]Error! memcached error when fetching metadata for key: <<"infinity_1">>, just send the doc (msg: "unexpected response from remote memcached (vb: 815, status code: 7, error: not_my_vbucket)")

and not all items were replicated. I then deleted the replications and created them again, and now it's working. It looks like the old cluster was not updated with the topology changes in the new cluster while the nodes were restarting.
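For reference, the delete-and-recreate workaround can also be scripted with the 2.x couchbase-cli; a sketch where the replicator id, remote cluster name, and bucket names are placeholders (check your own values with `xdcr-replicate --list`):

```shell
# Delete the stuck replication (the replicator id comes from --list).
couchbase-cli xdcr-replicate -c old-cluster:8091 -u Administrator -p password \
  --delete --xdcr-replicator=<replicator-id>

# Recreate it against the already-configured remote cluster reference.
couchbase-cli xdcr-replicate -c old-cluster:8091 -u Administrator -p password \
  --create --xdcr-cluster-name=new-dc \
  --xdcr-from-bucket=default --xdcr-to-bucket=default
```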

Is this an issue between CB 2.2 and CB 3.0? Then it will be ok for us, because we need this only while migrating to the new datacenter.

Thanks. Pascal

That isn’t expected; you should be able to restart nodes even with mixed versions across XDCR. Is this bidirectional, unidirectional from 2.2 to 3.0, or unidirectional from 3.0 to 2.2?

Hi Cihan,

It is unidirectional, from 2.2 to 3.0.