Persistence becoming an issue: Can it be disabled?

Hi,
I've recently upgraded to Couchbase 2.0, and compared to 1.7 and 1.8, it seems like persistence is becoming an issue.
My cluster has servers with 48G RAM and "only" 146G disks. Couchbase stores up to 50-100 million items, mostly very small keys, which represent about 5-10k GETs per second and 1.5-3k PUTs per second.
The default auto-compaction settings put the main bucket into an almost constant compaction loop, and raising the 30% fragmentation threshold to 60% or more "fixes" this, but I then see the disk usage get dangerously high at times.
Last night, the disks of all nodes in the cluster became full, and hell broke loose...
What I saw shortly before it happened was that we had hit a very high number of items (close to 100 million), and that compaction was taking much longer to complete. I'm guessing that it wasn't able to complete fast enough, and the disks filled up before it was done.
The RAM quota per node for the bucket is 40G, and it was far from being all used. On disk, though, 120G of Couchbase files existed, effectively filling it all up.
So my questions are:
* Can I prevent data from being written to disk at all? I really don't need it, since persistence is not a requirement in my case; Couchbase is only used as a caching layer (a clustered memcached, basically). My understanding from what I've seen here and there about 2.0 is unfortunately "no"...
* Can I easily change something so that the disk never gets full? Something like having the disk usage managed more like the RAM quota?
Any tips are welcome!
Matthias

Hi Matthias,
I'm wondering what the use case is. A few questions to help us understand it better:
- Is it pure key-value? (no indexing, no XDCR)
- Do you heavily use TTL/expiration?
- If yes, how often do you expire items?
- Do you re-use keys? That is, if a key expires, will it ever be reused, or will only new keys be generated?
- Dipti

I'll be more than happy to provide any relevant details!
* It's definitely pure key/value. We use Couchbase as if it was a memcached cluster (through moxi, from web servers). No indexing, no XDCR.
* We heavily use TTLs, setting many at less than 1 minute (see the small example after this list).
* We don't explicitly re-use most keys, as they're typically custom created to have session-like behaviour in our application. They're quite small, short-lived, and rarely reused.
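To give an idea of what these writes look like, here is roughly how one of our short-lived keys gets stored through moxi, expressed in the raw memcached text protocol (the key name, value and the default moxi port 11211 are just illustrative):
# Store a 5-byte value under a session-style key with a 30-second TTL,
# going through moxi's memcached-compatible port (11211 by default);
# "quit" just closes the connection afterwards.
printf 'set session:abc123 0 30 5\r\nhello\r\nquit\r\n' | nc localhost 11211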
Also, regarding the problematic cluster setup itself, it was 4 nodes with 2 replicas and index replicas enabled. Right now I've switched to a much simpler 2-node setup with only 1 replica and index replicas disabled. It seems to be behaving better, but with auto-compaction set to kick in at 50% fragmentation, it still seems to be running most of the time.
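In case it helps anyone, I believe that same fragmentation threshold can be changed over the REST API as well as through the web UI; the call should look roughly like the following, but the endpoint and parameter names are from memory, so please verify them against the documentation before relying on it:
# Raise the cluster-wide database fragmentation threshold for auto-compaction to 50%
# (<username>/<password> are the cluster admin credentials).
curl -u <username>:<password> -X POST http://localhost:8091/controller/setAutoCompaction \
  -d 'databaseFragmentationThreshold[percentage]=50' \
  -d parallelDBAndViewCompaction=false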
The current disk usage I see is as follows:
* Each node is around "12.5GB / 20GB" for "Data/Disk Usage".
* The single bucket reports "24.8GB / 37.5GB" for "Data/Disk Usage" and "16.2GB / 78.1GB" for "RAM/Quota Usage".
The real Couchbase-related disk usage is this:
cache1 ~ # du -sh /opt/couchbase
22G /opt/couchbase
cache2 ~ # du -sh /opt/couchbase
19G /opt/couchbase
(which is nearly all /opt/couchbase/var/lib/couchbase/data)
So my understanding is that even though the items only use 16GB of RAM in total, the disk usage is over 40GB for the same set of data, even right after compaction has finished.
Matthias

Hi,
We use Couchbase for PHP sessions, on a three-node cluster with one replica. Item expiration works as expected, but disk usage is always growing. Automatic compaction frees some disk space for a while, but overall usage only increases. Tonight I upgraded all nodes from 2.0 to 2.0.1, but there is no difference.
Before the upgrade I tried removing one node from the cluster, deleting all files from its data directory, and rejoining it to the cluster. It didn't help.
Is there a way to free some disk space? As a workaround we could even flush all data from the bucket at night.
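For what it's worth, I believe the nightly flush could be scripted against the REST API along these lines, assuming flush is enabled on the bucket (the credentials and bucket name are placeholders, and the endpoint is from memory, so please double-check it for your version):
# Flush ALL items from the bucket - this destroys every document in it.
curl -u <username>:<password> -X POST \
  http://localhost:8091/pools/default/buckets/<bucket_name>/controller/doFlush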

Yes, there is a way, and last I checked it wasn't documented anywhere. You need to perform a different kind of compaction, which seems to be "unsafe" at least when it comes to XDCR, from what I understood, though I have had no problems with it. It's not available from the web interface; you need to run the following on any single cluster node:
curl -u <username>:<password> -X POST http://localhost:8091/pools/default/buckets/<bucket_name>/controller/unsafePurgeBucket
Of course, replace <username>, <password> and <bucket_name> with your own values.
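For example, with the default admin account and a bucket named "default" (both just placeholders for illustration), the call would look like this:
curl -u Administrator:password -X POST http://localhost:8091/pools/default/buckets/default/controller/unsafePurgeBucket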
HTH,
Matthias

Can you explain what exactly this does?
curl -u <username>:<password> -X POST http://localhost:8091/pools/default/buckets/<bucket_name>/controller/unsafePurgeBucket
Why is it unsafe? What could happen?
I am in the same situation: my disk usage grows a lot, and normal compaction helps a bit, but the trend is still upward...

Hello,
This REST call purges some data that is not deleted by the standard processes.
This data is metadata that we keep in the database to optimize XDCR; the command simply purges it.
Regards

1 Answer

Thanks Matthias,
You don't need index replicas if you haven't defined any indexes.
Also, there is currently a bug (specific to TTL), which is why you are seeing growth in disk space. In 2.0 we have append-only storage and the persistence engine was completely rewritten. A fix is coming soon to purge expired items from disk.
Also, it's not a problem if compaction is running all the time (as long as you have the I/O and CPU capacity).
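If you want to check that the I/O is keeping up, one option is to watch the disk write queue with cbstats on each node. Something along these lines should work, though the exact stat names are from memory, so treat it as a rough sketch (11210 is the default data port; add -b <bucket> -p <password> for a non-default bucket):
# Refresh the disk write queue and item counts every 5 seconds.
watch -n 5 "/opt/couchbase/bin/cbstats localhost:11210 all | grep -E 'ep_queue_size|ep_flusher_todo|curr_items'"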

Thanks for these clarifications!
Right now, things should be stable, and the frequent compaction is keeping the disk usage at an acceptable level without impacting Couchbase's overall performance, so I'll wait for the next bugfix release.
But I'm quite sure that if I had to rebalance now, I would be in trouble again, so fingers crossed! (that, and I have another 2-node cluster on standby)
Matthias