First of all, we recommend a bare minimum of 4 GB of RAM on each Couchbase node for any kind of reliable testing. For production, I would suggest at least 8 GB of RAM per node.
The “RAM quota” setting only applies to the amount of memory that memcached can use to store all of the buckets defined on the cluster. It doesn’t account for memory used by the cluster manager (which handles XDCR, views, rebalance, failover, and management operations), so it’s essential that you leave headroom (at the very least ~20% of total RAM on the system) for the cluster manager and the OS.
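As a rough illustration of that sizing rule (the ~20% headroom figure is the only number taken from this thread; the helper function itself is hypothetical):

```python
def suggested_ram_quota_mb(total_ram_mb, headroom_fraction=0.20):
    """Leave at least `headroom_fraction` of total RAM for the
    cluster manager (beam.smp) and the OS; the remainder is the
    most you should hand to the bucket RAM quota (memcached)."""
    return int(total_ram_mb * (1 - headroom_fraction))

# A 60 GB node: quota at most ~48 GB, keeping ~12 GB of headroom.
print(suggested_ram_quota_mb(60 * 1024))  # 49152
```

On a 4 GB test node the same rule caps the quota at roughly 3.2 GB, which is why anything smaller than 4 GB leaves very little room to work with.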
At a high level, you can think of memcached as the storage engine that caches and persists data for you.
beam.smp refers to the Erlang VM, which is responsible for managing rebalance, failover, heartbeats, XDCR, views, etc.
The “RAM quota” setting provides that control knob.
If this is your production environment, then you really need more RAM on your servers; otherwise the OOM killer might kick in and kill Couchbase-specific processes.
@asingh, thanks a lot for the very comprehensive explanation. It helps us a lot.
Our servers have 3.75 GB of memory each (AWS m3.large instances). The good thing is that the three machines of this size form our staging and development clusters; for production we have more, and bigger, machines.
One more question: on the production cluster (machines with 60 GB of memory), I’ve reduced the cluster RAM quota (to make sure there is enough memory for beam.smp and the OS), but the memory usage of the memcached process did not go down. Is there any way to trigger some kind of “garbage collection” or “memory reclaim” operation for the memcached process, or do I have to restart Couchbase to cause it?
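For context, the quota change itself can be scripted against the REST API; as I understand it, the `/pools/default` endpoint accepts a `memoryQuota` value in MB, though that is worth verifying against your server version. A minimal sketch that only builds the request (nothing is sent, and host/credentials are placeholders):

```python
import urllib.parse
import urllib.request

def build_quota_request(host, quota_mb):
    """Build (but do not send) the POST that lowers the cluster-wide
    RAM quota. Endpoint and parameter name reflect my understanding
    of the Couchbase REST API; verify for your version."""
    url = "http://%s:8091/pools/default" % host
    body = urllib.parse.urlencode({"memoryQuota": quota_mb}).encode()
    return urllib.request.Request(url, data=body, method="POST")

# Example: cap the quota at 48 GB on a 60 GB node.
req = build_quota_request("10.0.0.1", 48 * 1024)
print(req.get_method(), req.full_url, req.data)
```

To actually apply it you would add an Authorization header for the admin user and pass the request to `urllib.request.urlopen`.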
If restarting is the only way, the preferred approach would be to add nodes (we’re in the cloud, so that’s not a problem) and proceed with a swap rebalance. We have four nodes; is it OK to add four other nodes, perform a swap rebalance, restart Couchbase, and then swap-rebalance back to the old nodes? (We prefer to keep the current nodes’ old IP addresses.)
I have a similar issue here: our cluster has 9 nodes, each with 256 GB of memory. We set the Couchbase quota to 242 GB; however, the management UI shows that on two of the nodes memory usage exceeds the quota: data usage is only 16 GB, while “other usage” is 233 GB.
Logging into one of these two servers, I can see that memcached is the process taking a huge amount of memory: