Memory leak when using Couchbase documents

There seems to be a memory leak in my Couchbase server when I use documents.

Details -
Couchbase version - Community Edition 5.0.1 build 5003
Client SDK - node 2.3.3
Max no. of keys at a time - 400k
Memory used - 2.9 GB and rising
User Data - 2.7 GB and rising
Document size - less than 1 KB
Ops - 1000 (500 gets and 500 sets)
Each key is created with an expiry of 10 mins
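
For reference, the access pattern boils down to something like this (a minimal sketch with the 2.x Node SDK; the cluster address, bucket name, keys, and values are placeholders, not the real code):

```js
// Minimal sketch of the workload with the 2.x Node SDK. Connection string,
// bucket name, and key/value shapes are placeholders.
var couchbase = require('couchbase');

var cluster = new couchbase.Cluster('couchbase://localhost');
var bucket = cluster.openBucket('myBucket');

// Sets: each document is a small JSON object written with a 10-minute expiry.
bucket.upsert('user:123', { name: 'x', count: 1 }, { expiry: 600 }, function (err) {
  if (err) throw err;

  // Gets: reads against the same key space.
  bucket.get('user:123', function (err, result) {
    if (err) throw err;
    console.log(result.value);
  });
});
```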

The number of items displayed in the Couchbase console seems right, since it stays fairly constant.

The Memory Used and User Data in RAM keep increasing and never get cleared.

Over the last 3 months, Memory Used got cleared twice, both times at 1.86 GB, but that cycle has not repeated since and memory is still rising.

I have another bucket which is not document-based and I don't see any memory leak there. Only the document-based bucket has this leak.

Is it an issue with the Node.js library or with the Couchbase server configuration? How can I proceed to fix this issue?

Is this related to tombstone purging? The interval set to clear metadata is 1 day, which should not be an issue unless there is a bug or another config parameter needs to be tuned as well.

What’s your bucket quota set to? Couchbase will attempt to use as much memory as you assign to it for caching hot documents; are you seeing memory_used increase past the quota?

The bucket quota was initially set to 5 GB. The high watermark was set to 2.5 GB, and once memory usage crossed it, Couchbase started evicting keys. I had to raise the high watermark to get it working again. During eviction the number of keys dropped, but the actual Memory Used did not drop and is still rising.

@drigby any idea on why this could be happening?

So from what you’re saying everything’s working as expected - memory usage is below the bucket quota.

Couchbase only evicts items from memory once memory usage reaches the high watermark.
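
If you want to watch this outside the admin UI, one option is to poll the bucket stats REST endpoint and compare mem_used against your quota; a rough sketch (host, port, credentials, and bucket name are placeholders, and the exact stat keys can vary between server versions):

```js
// Sketch: poll mem_used from the bucket stats REST API.
// Host, port, credentials, and bucket name below are placeholders.
var http = require('http');

http.get({
  host: 'localhost',
  port: 8091,
  path: '/pools/default/buckets/myBucket/stats',
  auth: 'Administrator:password'
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var samples = JSON.parse(body).op.samples;
    var memUsed = samples.mem_used;
    // Latest sample, in bytes; compare it against the bucket quota
    // and the high watermark.
    console.log('mem_used:', memUsed[memUsed.length - 1]);
  });
});
```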

@drigby

It doesn't look to be working that way. I reduced the high watermark to 2.5 GB and Couchbase evicted keys. I know this for certain because the number of items dropped to 200. But Memory Used and User Data did not get cleared even then.

I had to raise the high watermark because eviction did not result in memory being freed, which in turn stopped new keys from getting inserted.

Some stats collected from the bucket @drigby -

mem_used:                                              3909554744
vb_active_curr_items:                                  410729
vb_active_seqlist_deleted_count:                       39962327
ep_ephemeral_metadata_purge_age:                       86400
ep_ephemeral_metadata_purge_chunk_duration:            20
ep_ephemeral_metadata_purge_interval:                  60
vb_active_ht_tombstone_purged_count:                   88594372
vb_active_seqlist_purged_count:                        50915874
vb_pending_ht_tombstone_purged_count:                  0
vb_pending_seqlist_purged_count:                       0
vb_replica_ht_tombstone_purged_count:                  0
vb_replica_seqlist_purged_count:                       0
vb_replica_curr_items:                                 0
vb_replica_seqlist_deleted_count:                      0

Ok, so you’re using an Ephemeral bucket (which you didn’t mention previously!).

So what’s happening here is expected - you have deleted 88,594,372 documents since the bucket was created (vb_active_ht_tombstone_purged_count).

In Ephemeral buckets the record of deletion (the “tombstone”) must be kept in memory for the duration of the Metadata Purge Interval - see https://developer.couchbase.com/documentation/server/5.1/architecture/core-data-access-buckets.html
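
As a rough sanity check (these are order-of-magnitude assumptions, not measured values: the reported 500 sets are taken to be per second, every expired key leaves a tombstone, and a tombstone costs on the order of 100 bytes including the key):

```js
// Back-of-envelope: tombstone memory retained over one purge window.
// Assumptions (not measured): ~500 deletes/sec (expiries tracking the set
// rate), a purge age of 86400 s (ep_ephemeral_metadata_purge_age above),
// and ~100 bytes per tombstone including the key.
var deletesPerSec = 500;
var purgeAgeSec = 86400;
var bytesPerTombstone = 100;

var tombstonesRetained = deletesPerSec * purgeAgeSec;          // 43,200,000
var gbRetained = tombstonesRetained * bytesPerTombstone / 1e9; // ~4.3 GB

console.log(tombstonesRetained, 'tombstones ~', gbRetained, 'GB');
```

That is the same order of magnitude as your mem_used, and vb_active_seqlist_deleted_count (~40 million) is consistent with roughly one day's worth of deletions being retained.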

Consider reducing the Metadata Purge Interval for this bucket - if you aren't using XDCR you can pretty much set it to the minimum value; if you are using XDCR you need to consider the maximum duration of disconnection your application expects (and set the purge interval to that).
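
If you'd rather script the change than use the UI, the bucket edit REST endpoint accepts a purgeInterval parameter (in days; 0.04 is the documented minimum, roughly one hour). A sketch, with host, credentials, and bucket name as placeholders; depending on server version you may need to pass other bucket settings (e.g. ramQuotaMB) in the same request:

```js
// Sketch: set the metadata purge interval via the admin REST API.
// purgeInterval is in days; 0.04 is the minimum (~1 hour).
var http = require('http');
var querystring = require('querystring');

var body = querystring.stringify({ purgeInterval: 0.04 });

var req = http.request({
  host: 'localhost',
  port: 8091,
  method: 'POST',
  path: '/pools/default/buckets/myBucket',
  auth: 'Administrator:password',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': Buffer.byteLength(body)
  }
}, function (res) {
  console.log('status:', res.statusCode); // expect 200 on success
});

req.write(body);
req.end();
```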

The metadata purge interval was set to 1 by default, which means it should get cleared in a day. I had tried setting it to 0.04, which is the lowest possible value, but that didn't help either.

Also, the other bucket I mentioned, which has no such leak, has the exact same configuration. The only difference is that the values in that bucket are integers, whereas the values in this bucket, which has the leak, are JSON objects (Couchbase documents).

This should help - you'll still have to wait for that interval to pass (IIRC 1 hour) before you see any effect. Note that when you took the stats it was still set at the default of 1 day (ep_ephemeral_metadata_purge_age=86400 seconds). Try setting it to the minimum of 1 hour, wait an hour, and then see what your stats look like.

Note: this isn't a leak - it's memory which is still being used to hold the tombstones. If you want to free up that memory, you need to ensure the tombstones are aged away. :slight_smile:

That still hasn't helped. One more difference between the 2 buckets is that this particular bucket with the high memory usage is set to Default priority and the other bucket is set to High priority. Could that also be a cause?

Anything to go forward here or any more data points which could help to find the issue here? @drigby
