Multiple indexes performance

Hi, I’m experiencing weird behavior regarding index performance. On production we are using Community Edition (6.5.1), but I was able to reproduce the same issue on Enterprise Edition (6.5.1) in a local environment with 3 dockerized Couchbase nodes.

For performance testing I use Gatling, calling a local microservice which uses Java SDK v3.
There are three cases:

  1. All indexes having a single instance (neither “nodes” nor “num_replica” specified during index creation) - I get my “base” performance
  2. All indexes created with 3 replicas using the “num_replica” param from Enterprise Edition - I get my “base” performance
  3. All indexes created with 3 replicas, but this time not with the “num_replica” parameter but as indexes with different names - basically the suggested way of creating replicas in Community Edition - performance drops by ~30%
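For reference, this is a sketch of how the case 3 “replicas” might be created in CE. The index suffixes and node addresses here are illustrative placeholders, not taken from the actual setup:

```sql
-- CE has no "num_replica", so equivalent indexes are created under
-- different names, each pinned to a different node via the "nodes" option.
CREATE INDEX `indexA_r1` ON `bucket`(split(META().id, "::")[1])
  WHERE META().id LIKE "key::%"
  WITH {"nodes": ["node1:8091"]};

CREATE INDEX `indexA_r2` ON `bucket`(split(META().id, "::")[1])
  WHERE META().id LIKE "key::%"
  WITH {"nodes": ["node2:8091"]};

CREATE INDEX `indexA_r3` ON `bucket`(split(META().id, "::")[1])
  WHERE META().id LIKE "key::%"
  WITH {"nodes": ["node3:8091"]};
```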

What’s bothering me is why case 3 shows a performance drop. Aren’t replicas created this way supposed to be load balanced?
I don’t expect a performance improvement, but how is it possible that it drops? Does the index service have trouble finding the index to use among too many options?
I assume it might be really specific to my documents etc., but could you give me some tips on where to look for answers?

My bucket contains ~2 million documents with ids like “key::AAA::BBB”, and in different queries I’d like to query based on the AAA or BBB part.
Therefore the indexes I have are like:

CREATE INDEX `indexA` ON `bucket`(split(META().id, "::")[1]) WHERE META().id LIKE "key::%";
CREATE INDEX `indexB` ON `bucket`(split(META().id, "::")[1],valueA) WHERE META().id LIKE "key::%";
CREATE INDEX `indexC` ON `bucket`(split(META().id, "::")[1],valueB) WHERE META().id LIKE "key::%";
CREATE INDEX `indexD` ON `bucket`(split(META().id, "::")[2]) WHERE META().id LIKE "key::%";
CREATE INDEX `indexE` ON `bucket`(split(META().id, "::")[2],valueB,valueC) WHERE META().id LIKE "key::%";
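A query matching e.g. `indexA` would look something like this (the selected fields and the literal “AAA” are placeholders for illustration):

```sql
SELECT META(b).id, b.valueA
FROM `bucket` b
WHERE split(META(b).id, "::")[1] = "AAA"
  AND META(b).id LIKE "key::%";
```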

Option 3: You have 3 times as many indexes, and the planner has to go through all of them and decide which index to choose. But that shouldn’t contribute that much unless each query only takes milliseconds. Is the Java SDK using prepared statements (adhoc=false)?

You can use prepared statements, execute the query in the Query Workbench, check the timings of each operator, and see where it is taking time. Also try USE INDEX and see.
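To illustrate, a USE INDEX hint pins the query to one index and takes the planner’s choice out of the picture (field names and the literal “AAA” are illustrative):

```sql
-- Pin the query to a specific index to rule out planner choice:
SELECT META(b).id, b.valueA
FROM `bucket` b USE INDEX (`indexB` USING GSI)
WHERE split(META(b).id, "::")[1] = "AAA"
  AND META(b).id LIKE "key::%";
```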


Requesting more information on this:

  1. Just to confirm: the problem here is the performance of the SELECT queries and NOT the DDLs. Am I right?
  2. In cases 1 and 3 mentioned by you, are you using Community Edition (CE) for both? And only case 2 uses Enterprise Edition (EE). Am I right? Because query performance in CE may not be comparable to query performance in EE in some cases.
  3. Which consistency model is being used by the queries? Is it “not bounded”, “request plus” or “statement plus”? Please check that the same consistency is used for all 3 use cases.

From a load balancing perspective, replicas in EE and equivalent indexes in CE behave the same. Both EE and CE use the same randomness to pick an index to use, so query load should get distributed equivalently.

Yes, the problem is only regarding SELECT queries - none of them had “USE INDEX” specified. No custom consistency model is set from Java SDK v3 for any of the queries, so I believe it’s “not bounded”.

To clarify the cases:

  1. Tested on CE and EE - achieved the same performance with a single instance of each index
  2. Only EE, because the “num_replica” param is available in EE only. With a total of 3 replicas per index created with the “num_replica” param, I achieve the same performance as in case 1.
  3. Both CE and EE - indexes created by specifying different names and selecting different nodes in the WITH clause. Both CE and EE show the same performance drop.
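For comparison, the EE form from case 2 keeps the replicas under a single logical index. A sketch (note that `"num_replica": 2` yields three index instances in total - the original plus two replicas; the exact count used by the poster is not specified):

```sql
-- EE only: one logical index whose replicas are placed and
-- load-balanced by the system.
CREATE INDEX `indexA` ON `bucket`(split(META().id, "::")[1])
  WHERE META().id LIKE "key::%"
  WITH {"num_replica": 2};
```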

Based on my tests I can tell that even in EE there’s a difference between creating replicas with “num_replica” and creating indexes with the same structure but different names.
As @vsr1 explained - the more indexes, the more to choose from, and it takes more time.
It seems that even if a few indexes have the same structure but different names, they are not treated as a single “entity”. Only using the “num_replica” param makes it work as expected.

Until we decide to go with EE, we’ll go with the “USE INDEX” clause. It’s not perfect, but for the time being it provides satisfying results. After I modified the queries and pushed to prod, latencies dropped by half!
It’s amazing how much it impacted us and we didn’t realize it until recently :wink:

Hi @tlipinski,

As @vsr1 and I have both said already, picking an index is a very tiny task. So I believe it should not lead to a drop in performance (at least at the scale you mentioned).

Having said that, comparing case 1 CE and case 3 CE is also not a simple comparison. Depending upon the index memory quota and RAM available, from case 1 CE to case 3 CE we are almost doubling/tripling the number of indexes. So it is possible that the machine resources are under contention and hence the performance is dropping.

With EE, you may not see such a performance drop because EE uses an advanced storage engine with a better disk and in-memory layout. So EE can scale with a larger number of indexes, but CE may not scale equally. Relevant differences between CE and EE are listed under the Index Service table. By default, EE uses “Plasma – high-performance storage engine for indexes”, which is not available in CE.

Hope this helps.

If you are not using prepared statements in the Java SDK (adhoc=false), try them - it will reduce prepare time.
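In plain N1QL terms (e.g. when testing in the Query Workbench rather than through the SDK), this corresponds roughly to PREPARE/EXECUTE. A sketch with an illustrative statement name:

```sql
-- Prepare once; the plan is cached under the given name.
PREPARE selectByA FROM
  SELECT META(b).id
  FROM `bucket` b
  WHERE split(META(b).id, "::")[1] = "AAA"
    AND META(b).id LIKE "key::%";

-- Subsequent executions skip planning.
EXECUTE selectByA;
```

With the Java SDK, setting adhoc=false on the query options achieves the same effect transparently.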

Guys, I really appreciate your help!
We found out that the index service was running low on memory (the default 512MB setting). We assigned it a lot more, and we no longer experience the performance drop when there is more than one replica per index :)
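For anyone hitting the same thing: besides the web console, the index service memory quota can be raised through the cluster settings REST endpoint. An illustrative fragment (hostname, credentials, and the quota value are placeholders):

```
# Raise the index service memory quota to 2048 MB
curl -u Administrator:password -X POST \
  http://localhost:8091/pools/default \
  -d indexMemoryQuota=2048
```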


Very interesting!

I’m using the same concept for having the same indexes on both nodes in my CE cluster :slight_smile:

How/where did you see that the index service was running low on memory?

In the web console, go to Servers and unfold any of the nodes hosting the index service. There you’ll find a memory section.
In our case the “remaining” value was negative! After we switched to a cluster with more memory dedicated to the index service, our performance under heavier load increased, with exactly the same bucket/indexes setup as before.

Thanks. I’ll keep an eye on that - right now they are Ok :+1: