Persistence becoming an issue: can it be disabled?
I've recently upgraded to Couchbase 2.0, and compared to 1.7 and 1.8, it seems like persistence is becoming an issue.
My cluster has servers with 48G RAM and "only" 146G disks. Couchbase stores up to 50-100 million items, mostly very small keys, which represent about 5-10k GETs per second and 1.5-3k PUTs per second.
The default auto-compaction settings put the main bucket into an almost constant compaction loop, and raising the fragmentation threshold from 30% to 60% or more "fixes" this, but I then see the disk usage get dangerously high at times.
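For reference, a cluster-wide threshold change like the one described can be made through Couchbase's `/controller/setAutoCompaction` REST endpoint. This is only a sketch: the host, credentials, and the 60% value are placeholders, so check the REST API documentation for your exact version before running it.

```shell
# Raise the database fragmentation threshold that triggers auto-compaction
# from the default 30% to 60% (cluster-wide default setting).
# Administrator:password and localhost:8091 are placeholders.
curl -u Administrator:password -X POST \
  http://localhost:8091/controller/setAutoCompaction \
  -d 'databaseFragmentationThreshold[percentage]=60' \
  -d 'parallelDBAndViewCompaction=false'
```

Per-bucket compaction settings can also be overridden when editing a bucket, which may be preferable to changing the cluster-wide default.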
Last night, the disks of all nodes in the cluster became full, and hell broke loose...
What I saw shortly before it happened was that we had hit a very high number of items (close to 100 million), and that compaction was taking much longer to complete. I'm guessing that it wasn't able to complete fast enough, and the disks got full before it was done.
The RAM quota per node for the bucket is 40G, and it was far from being all used. On disk, though, 120G of Couchbase files existed, effectively filling it all up.
So my questions are:
* Can I disable data from being on disk? I really don't need it, since persistence is not a requirement in my case, as Couchbase is only used as a caching layer (a clustered Memcache, basically). My understanding from what I've seen here and there about 2.0 is unfortunately "no"...
* Can I easily change something so that the disk never gets full? Something like having it be managed more like the RAM?
Any tips are welcome!
You don't need index replicas if you have not defined any indexes.
Also, there is currently a bug (specific to TTL) which is why you are seeing growth in disk space. In 2.0 we have append-only storage and the persistence engine was completely rewritten; a fix is coming soon to purge expired items from disk.
Also, it's not a problem if compaction is running all the time (as long as you have the I/O capacity and CPU).