Unbalanced Resident Memory Percent

Good day everyone. I'd like to raise the following issue: I have a Couchbase cluster with two nodes (Debian OS on each node). I recently restored a bucket from an old cluster to the new one, but in the VBUCKET RESOURCES section I found unbalanced memory usage between the two nodes. One placed all of its items in memory (100% resident) while the other kept only a part of them in memory (less than 40%). (See image below.)

I expected that the resident percentages across the two nodes would sum to 100%, but that is not the case. So, my two questions are:

  1. Is it a sign of an issue inside this bucket?
  2. If so, what can I do to rebalance the memory usage between these nodes for caching items?


I think you’re misunderstanding how active and replica data is managed.

Your bucket is split into 1024 shards, or vBuckets, and if you have 1 replica you’ll also have 1024 replica vBuckets.

All active vBuckets will be equally distributed across the available nodes (so 512 each with 2 nodes). Same for the replica vBuckets.
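The key-to-vBucket mapping described above can be sketched roughly as follows. This is an illustrative approximation of the CRC32-based hashing scheme Couchbase client SDKs use, not the client library itself; the node assignment shown is a simplification (the real vBucket map is assigned and published by the cluster manager):

```python
import zlib

NUM_VBUCKETS = 1024

def vbucket_id(key: bytes) -> int:
    """Map a document key to one of 1024 vBuckets.

    Sketch of the CRC32-based hashing Couchbase clients use;
    treat the exact bit arithmetic as an assumption.
    """
    return (zlib.crc32(key) >> 16) & (NUM_VBUCKETS - 1)

# With 2 nodes, active vBuckets are split 512/512; for illustration
# assume vBuckets 0-511 are active on node A and 512-1023 on node B.
vb = vbucket_id(b"user::1234")
node = "A" if vb < NUM_VBUCKETS // 2 else "B"
print(f"key maps to vBucket {vb} on node {node}")
```

The same key always hashes to the same vBucket, which is what lets the cluster move shards between nodes without rehashing every document.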

When it comes to keeping recent data in RAM, if there isn’t sufficient RAM to keep all data resident, priority is given to active vBuckets. This is why you see 100% of active data resident in RAM, but only 40% of replica data.
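A toy model of that priority (not Couchbase internals, just an illustration of the arithmetic): with a fixed RAM budget, active data is kept resident first and replica data gets whatever budget is left.

```python
def resident_percents(active_mb: float, replica_mb: float, ram_mb: float):
    """Return (active %, replica %) resident under a simple
    'active first' policy. Illustrative only."""
    in_ram_active = min(active_mb, ram_mb)
    in_ram_replica = min(replica_mb, ram_mb - in_ram_active)
    return (100 * in_ram_active / active_mb,
            100 * in_ram_replica / replica_mb)

# 100 MB of active data + 100 MB of replica data, but only 140 MB of quota:
print(resident_percents(100, 100, 140))  # (100.0, 40.0)
```

With a quota large enough for both (e.g. 300 MB here), the same function returns 100% for both, which matches the behaviour described for the other buckets in the thread.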

Hi @drigby. I appreciate your answer. So, in my case, even though both nodes have enough RAM (15GB each, to be exact), Couchbase always gives priority to active vBuckets over replicas. Am I right?

Also, my doubts arose when I compared the same stats with other buckets I have in the cluster. Some displayed 100% for both active and replica, while others showed 50% and 11% respectively.


Yes, although if there’s sufficient RAM it’ll keep replica data 100% resident also.

Are you sure you’ve set the Bucket Quota to a sufficient size for your data? You don’t show how much memory your data currently takes, but given you only have 72 items, that would suggest at most 1.4GB of RAM used (the maximum document size is 20MB).
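The worst-case figure above comes from simple arithmetic (item count times the 20MB document size limit), using the numbers from this thread:

```python
MAX_DOC_SIZE_MB = 20   # Couchbase maximum document size
item_count = 72        # item count reported in this thread

worst_case_mb = item_count * MAX_DOC_SIZE_MB
print(f"worst case: {worst_case_mb} MB")  # 1440 MB, i.e. ~1.4 GB
```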

Hi @drigby. Thanks for your response. In fact my bucket quota is 256MB (128MB on each node), and it currently uses 58.7MB of memory for data and 25MB for views.
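For what it's worth, a quick check with those figures suggests the data itself sits well inside the quota (a back-of-envelope sketch; whether view memory counts against the bucket quota is a separate question):

```python
quota_mb = 256.0   # bucket quota across the cluster (128 MB per node)
data_mb = 58.7     # reported data memory usage

usage_pct = 100 * data_mb / quota_mb
print(f"{usage_pct:.1f}% of quota used")  # ~22.9%
```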