Couchbase Memory Utilization


We would like to understand how much memory Couchbase Server uses.
For example: a RHEL 7 VM has 28 GB of RAM, the Data service quota of Couchbase Server is 8 GB, and there are 7 buckets whose combined memory quota is around 6 GB. The data within these buckets is not more than 1 GB; indeed, it has never gone above 1 GB. What, then, should the memory utilization of Couchbase be?

We noticed 8+ GB of memory used by the Couchbase OS user, so we are curious to understand the memory usage.

@pratmeht Every service on a node will have its own memory requirements. Data is just one of those. Index service can consume even more memory than Data if there is a large enough number of indexes. You might find the following Sizing Guideline document section helpful:

Hi Kevin,

My question is not about sizing; it is about memory utilization. For the example I mentioned above, how much memory consumption should we see on the Linux host?


@pratmeht The sizing documentation I posted the link to is mostly about memory consumption. It has detailed information about how to calculate it. For example, Table 7 gives the number of bytes of overhead required for different pieces of index metadata, and Table 8 gives formulas for calculating the total memory consumption of indexes in each of the three supported index storage mechanisms (Plasma, ForestDB, Memdb).
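As an illustration of the kind of calculation the sizing guide describes, here is a small sketch of the Data-service RAM estimate. The structure follows the formula in the sizing documentation (metadata overhead per document, replica copies, working-set fraction, headroom, and high-water mark); the default constants below (56 bytes of metadata per document, 30% headroom, 0.85 high-water mark) are commonly cited values that you should verify against the guide for your version.

```python
def data_service_ram_bytes(
    documents_num,            # number of documents in the bucket
    id_size,                  # average key (document ID) size in bytes
    value_size,               # average document value size in bytes
    replicas=1,               # number of replica copies
    working_set_pct=1.0,      # fraction of the dataset kept resident in RAM
    metadata_per_document=56, # bytes of per-document metadata (per sizing guide)
    headroom=0.30,            # extra allowance for fragmentation/overhead
    high_water_mark=0.85,     # default high-water-mark fraction
):
    """Rough Data-service RAM estimate, following the shape of the
    formula in the Couchbase sizing guide. Not authoritative."""
    copies = 1 + replicas
    total_metadata = documents_num * (metadata_per_document + id_size) * copies
    total_dataset = documents_num * value_size * copies
    working_set = total_dataset * working_set_pct
    return (total_metadata + working_set) * (1 + headroom) / high_water_mark

# Example: 1M documents, 44-byte keys, 1 KB values, 1 replica,
# 20% of the data resident in memory.
estimate = data_service_ram_bytes(
    1_000_000, id_size=44, value_size=1024, replicas=1, working_set_pct=0.2
)
print(f"{estimate / 2**30:.2f} GiB")
```

Note that even with a small working set, the metadata term and the replica multiplier can push the estimate well above the raw data size, which is part of why observed usage exceeds the stored data volume.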

Hi @Kevin.Cherkauer , thanks for the quick response again. I appreciate it :slight_smile: . I have gone through the CB sizing doc fully. I see that for the Data service, the minimum RAM consumption will be 2.5 GB. Now, looking at the example I shared above: 8 GB of RAM was used by the OS user “couchbase”, while the combined data in all 7 buckets was less than 1 GB. So we wonder how Couchbase comes to use memory roughly equivalent to what is allocated to it, rather than to what it needs to store the data. Are there scenarios that cause this? I hope I have explained my question better this time.
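One way to narrow this down is to compare each bucket's configured quota against its actual memory used, as reported by the cluster REST API. The sketch below assumes the `/pools/default/buckets` endpoint and the `quota.ram` / `basicStats.memUsed` fields as I recall them; verify the field names against your cluster's response before relying on this. The host and credentials in the commented-out call are placeholders.

```python
import json
from urllib.request import Request, urlopen  # stdlib only

def bucket_memory_summary(buckets):
    """Given the parsed JSON list returned by GET /pools/default/buckets,
    return (total quota bytes, total memUsed bytes) across all buckets."""
    total_quota = sum(b["quota"]["ram"] for b in buckets)
    total_used = sum(b["basicStats"]["memUsed"] for b in buckets)
    return total_quota, total_used

# Hypothetical call against a live node (host/credentials are placeholders):
# req = Request("http://localhost:8091/pools/default/buckets")
# req.add_header("Authorization", "Basic <base64 of user:password>")
# buckets = json.load(urlopen(req))
# quota, used = bucket_memory_summary(buckets)
# print(f"quota={quota / 2**30:.1f} GiB, used={used / 2**30:.1f} GiB")
```

If `memUsed` across buckets is far below the resident size of the `couchbase` OS user, the difference is coming from other services on the node (Index, Query, cluster manager) and allocator overhead, rather than from the bucket data itself.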

Pratik Mehta

@pratmeht Is Data the only service on the node? Are you using replicas, and if so, have you accounted for them?

Note I am not on the Data team – perhaps someone from that area will have more insights.