I’ve been running some load tests using Couchbase Community 3.0.1 to see what benefits we can glean from the new full eviction and tunable memory introduced in CB3. Using the Java Client v2.0.3, I’m seeing a consistent performance drop of around 70 percent when switching from value eviction to full eviction. We are running a cluster of 3 nodes (Amazon EC2 m3.xlarge) with a cluster quota of 13 GB per node (39 GB cluster-wide). During my tests everything else remains the same; all I do is change the bucket from “value” to “full” eviction and re-run the tests.
With a test set of 10M documents and no indexes (yet), from a single client I’m consistently seeing ~30k write ops/sec using value eviction, while with full eviction I’m consistently seeing ~8k write ops/sec, all other things being equal. Is there some other tuning that I need to be aware of? Is my expectation wrong that value eviction and full eviction throughput should be about the same? I was expecting a small performance drop from the perceived extra bookkeeping, but I wasn’t expecting this much of an impact. I’m having a hard time finding documentation for this feature, so I admittedly could be doing something wrong.
What kind of operations are you using, and what percentage of objects are ejected?
Keep in mind that if you use something like insert() or replace(), we always need to check on disk, so you might want to replace that with an upsert() if it makes sense for your context (no existence check is needed then, since the value is always overwritten).
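To illustrate why this matters under full eviction: insert() must verify that the key does not already exist, and when metadata has been evicted from RAM that verification forces a disk fetch, whereas upsert() writes unconditionally. Here is a minimal toy sketch of that logic; all class and method names are invented for illustration, and this is in no way the Couchbase implementation:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of why insert() costs more than upsert() under full eviction.
// Names are invented for illustration only; this is not Couchbase code.
class EvictionSketch {
    // Keys whose metadata is resident in RAM.
    // Under value eviction this holds every key; under full eviction it may not.
    static Set<String> residentMetadata = new HashSet<>();
    // Authoritative on-disk key set.
    static Set<String> diskKeys = new HashSet<>();
    static int diskReads = 0;

    static boolean exists(String key) {
        if (residentMetadata.contains(key)) {
            return true; // metadata in RAM, no disk access needed
        }
        // Metadata not resident: we must fetch from disk to be sure.
        diskReads++;
        return diskKeys.contains(key);
    }

    // insert() must fail if the key exists, so it always checks first.
    static boolean insert(String key) {
        if (exists(key)) {
            return false;
        }
        diskKeys.add(key);
        return true;
    }

    // upsert() overwrites unconditionally: no existence check, no disk read.
    static void upsert(String key) {
        diskKeys.add(key);
    }

    public static void main(String[] args) {
        // Simulate full eviction: no metadata resident in RAM.
        for (int i = 0; i < 1000; i++) {
            insert("key-" + i);
        }
        int insertReads = diskReads;
        diskReads = 0;
        for (int i = 0; i < 1000; i++) {
            upsert("key-" + i);
        }
        System.out.println("insert() disk reads: " + insertReads);
        System.out.println("upsert() disk reads: " + diskReads);
    }
}
```

With an empty resident-metadata set, every insert() pays one simulated disk read while upsert() pays none, which mirrors the throughput gap described above.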
My fault, the insert vs upsert difference accounted for the gap. I planned on using upsert when migrating to the new API, but I overlooked it in this first test. The extra read from disk and its resulting impact make perfect sense.
BTW, I’m seeing 0 ejections/sec, which is what I was expecting, as we have enough RAM to fit all those documents into memory. The next phase will push more; I just wanted to address this variable before proceeding.
Let’s assume the following situation.
The bucket is set to full eviction mode and holds >1 TB of data, with documents created using the upsert operation.
Does this mean that this bucket can work correctly with only 100 MB of RAM, if there are no direct document reads, only reads through views (since views are stored on disk)? In other words, when I use upsert, does Couchbase Server write the document directly to disk, or does it first load the metadata into memory and then store the document to disk?