Thanks for the questions @kingbbode.
The answers to both questions are related.
Couchbase actively manages memory and works to do so efficiently within the quota you give it. Because of everything Couchbase does with an item, even when you’re just updating it, there may be multiple copies in memory for a short while. These are cleaned up as additional background processing completes. It’s not managed by Erlang, no. The quota you give to a bucket is managed by the KV engine.
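To make the “multiple copies for a short while” point concrete, here is a hedged sketch in Python. While some background work (persistence, replication, etc.) still holds a reference to the old value, updating the key doesn’t free that old copy; it is reclaimed only once the last reference is dropped. Python reference counting stands in for the C++ shared pointers mentioned later; the names (`store`, `pending_flush`) are illustrative, not Couchbase internals.

```python
# Toy illustration: an updated item briefly exists twice in memory
# while "background processing" still references the old version.
store = {}
store["doc"] = old = bytearray(b"v1")
pending_flush = [store["doc"]]     # background work still references v1

store["doc"] = bytearray(b"v2")    # update: v2 is now live in the store...
# ...but v1 is still alive, because pending_flush references it.
assert pending_flush[0] is old

pending_flush.clear()              # background processing completes
# Now nothing references v1, and its memory can be reclaimed.
```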
What you see in the Web UI for a bucket represents the memory used by the KV engine. That will be managed to stay under the quota you specify, but it may not correspond exactly to the number of items currently in the bucket.
It’s also relevant to mention how memory allocators work in general (including the one we use in Couchbase): they can return memory to the OS, but they will frequently hold on to allocated memory for a while, because it’s more efficient to reuse memory the process already has than to return it to the OS and then ask the OS for it again. That means, for instance, if you were to update the same item a large number of times with get/set/touch, you’d see memory usage grow to a certain level that is under the quota, but then it would stabilize there.
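The “grow, then stabilize” pattern falls out of that reuse behavior. Below is a deliberately simplified free-list allocator in Python, just as an analogy (this is not Couchbase’s or jemalloc’s code): each update allocates a new copy and frees the old one, but freed buffers are held for reuse rather than returned to the “OS”, so the number of real allocations stops growing almost immediately.

```python
# Toy free-list allocator: freed buffers are kept for reuse, so
# repeatedly updating one item does NOT keep asking the OS for memory.
class FreeListAllocator:
    def __init__(self):
        self.free = []           # buffers held for reuse
        self.os_allocations = 0  # times we had to ask the "OS" for memory

    def alloc(self, size):
        if self.free:
            return self.free.pop()  # reuse a previously freed buffer
        self.os_allocations += 1    # otherwise fall back to the OS
        return bytearray(size)

    def free_buf(self, buf):
        self.free.append(buf)       # hold it rather than release it

alloc = FreeListAllocator()
buf = alloc.alloc(1024)
# "Update the same item" many times: allocate the new copy, free the old.
for _ in range(10_000):
    new = alloc.alloc(1024)
    alloc.free_buf(buf)
    buf = new

print(alloc.os_allocations)  # prints 2: memory is reused, not re-requested
```

Usage grows to a small steady level (here, two buffers) and then stays flat, which is the stabilization described above.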
As you add more items, Couchbase will work to keep as much in memory as you have given it under the quota, but will occasionally give free space back to the OS.
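The quota contract itself can be sketched as a byte-budgeted store that evicts when it would go over. This is only an illustration of “use up to the quota, never above it”; the KV engine’s real ejection policy is considerably more sophisticated than the simple LRU used here, and the class/parameter names are made up for the example.

```python
# Toy quota-bounded store: keep items resident up to a byte quota,
# evicting least-recently-inserted values once the quota is exceeded.
from collections import OrderedDict

class QuotaStore:
    def __init__(self, quota_bytes):
        self.quota = quota_bytes
        self.used = 0
        self.items = OrderedDict()  # key -> value, oldest first

    def set(self, key, value):
        if key in self.items:
            self.used -= len(self.items.pop(key))
        self.items[key] = value
        self.used += len(value)
        while self.used > self.quota:            # stay under the quota
            _, evicted = self.items.popitem(last=False)
            self.used -= len(evicted)

store = QuotaStore(quota_bytes=100)
for i in range(10):
    store.set(f"doc{i}", b"x" * 30)  # 300 bytes of values vs. a 100-byte quota
print(store.used)                    # stays at or under the 100-byte quota
```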
From a user perspective, as long as the right items are in memory in steady state and usage stays under the quota, there is no reason to worry about memory moving up and down as you update a single item.
Just to add a bit more technical detail…
Some of the above is accomplished by working with shared pointers in C++, and some of it is done by carefully working with the open source memory allocator, jemalloc. Couchbase leverages some of jemalloc’s functionality for tracking that quota and freeing space back to the OS when needed. As the team finds better ways to manage this and balance performance with memory usage, the implementation may change, but the contract is fundamentally the same: use as much space as needed to be efficient under the quota, but don’t go above the quota.