I’ve never been able to get cbbackup to run without destabilizing my cluster. (See previous posts referenced at end of post) It seems the problem is lack of RAM. I plan to change my app to actively archive a significant portion of its data, in hopes this will free up RAM and allow cbbackup to run successfully. I’ve explained my situation and assumptions below, but would like to hear any insights into couchbase or suggestions for better solutions. Maybe my app is a unique use case (seems unlikely) or I’m doing something wrong.
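For the archiving side, one low-effort option I’m considering is letting Couchbase expire old documents itself by setting a TTL, instead of deleting them from the app. The sketch below only shows the expiry arithmetic (the helper name `absolute_expiry` and the 2-day window are my own assumptions, not anything from the SDK); the computed value would be passed as the expiry/TTL argument on a set or touch call, whose exact parameter name depends on the SDK version.

```python
import time

# Couchbase/memcached interpret TTL values larger than 30 days as an
# absolute Unix timestamp rather than a relative offset, so emitting an
# absolute timestamp avoids that ambiguity entirely.
SECONDS_PER_DAY = 24 * 60 * 60

def absolute_expiry(created_at, days_to_keep):
    """Absolute Unix-timestamp expiry `days_to_keep` days after created_at."""
    return int(created_at + days_to_keep * SECONDS_PER_DAY)

# Example: a document created now, kept for 2 days (my app's access window)
expiry = absolute_expiry(time.time(), 2)
# `expiry` would then be supplied as the ttl/expiry argument on the
# SDK's set/upsert/touch call (hypothetical; check your SDK's docs).
```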
I started with a cluster RAM quota of 10GB. While this ran high on memory, all reads and writes seemed to work well, and Temp OOM stayed at zero. When cbbackup failed, I upped the quota to 14GB; the cluster still ran high on memory and cbbackup failed in the same way. I then jumped to 24GB. The cluster still used nearly all available RAM, and cbbackup still failed.
My application has a large number of small documents, most of which are historical and no longer accessed. There are approximately 200K new documents a day, which stop being accessed after about two days. Active docs resident is about 1%, yet the cache miss ratio is also only about 1% (presumably because the non-resident historical docs are rarely requested). Of note, the cluster is four EC2 m1.large instances running only couchbase (and only for this app), with all data in one couchbase bucket. There is also a very small memcached bucket (2GB).
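For anyone wanting to check the same numbers on their own cluster, this is roughly how I pull the resident-ratio stat with cbstats (host, bucket name, and install path are placeholders for my setup; flags may differ slightly between Couchbase versions):

```shell
# Query ep-engine stats on one node and pick out the resident ratio.
# 11210 is the data (memcached) port; substitute your bucket name.
/opt/couchbase/bin/cbstats localhost:11210 all -b mybucket \
  | grep -E 'vb_active_perc_mem_resident|ep_num_non_resident'
```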
From my experimentation, it appears that couchbase must be holding on to too much of the historical data. It also seems that cbbackup must use a lot of memory to load data from disk and send it to the (separate) backup machine. This makes some sense, though I’m not sure how much memory I need to leave free for cbbackup to succeed without killing regular couchbase operations.
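For reference, this is the general shape of the cbbackup invocation I’ve been attempting, run from the separate backup machine (host, credentials, and paths are placeholders). I’ve also seen `-t` (worker threads) and `-x` extras like `batch_max_size` suggested for throttling transfer tools, which might reduce memory pressure, though I can’t confirm that from my own runs:

```shell
# Full backup of one bucket, throttled to a single worker thread and
# smaller batches (the -x extras are an assumption worth verifying
# against `cbbackup --help` for your version).
cbbackup http://cb-node1:8091 /backups/myapp \
  -u Administrator -p password \
  -b mybucket \
  -t 1 \
  -x batch_max_size=100
```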
I’m curious whether anyone can confirm my suspicions about what is happening, or set me straight. Aside from backups, couchbase has worked fantastically well for my app; I just need to get backups working and things will be good. Any and all help is much appreciated.