N1QL Performance - Are 2.5K requests per second at 2ms each an expected average?

If the statement is the same, then with .adhoc(false) each client should automatically use the same prepared name, caching the encoded_plan and then sending it to all of the other nodes. By turning the log level up you can probably see exactly what the client is doing.
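The caching behaviour described above can be sketched with a toy model. This is not the SDK's actual implementation or API (prepared_cache, execute, and the name-hashing scheme are all illustrative assumptions); it only shows the idea that identical statement text should map to one prepared name whose encoded plan is reused:

```python
import hashlib

# Toy model of a client with ad-hoc execution disabled: identical
# statement text maps to one deterministic prepared name, and the
# encoded plan is cached so later executions skip the PREPARE step.
# All names here are illustrative, not real SDK API.
prepared_cache = {}

def execute(statement: str) -> str:
    name = hashlib.sha1(statement.encode()).hexdigest()  # deterministic name per statement
    if name not in prepared_cache:
        prepared_cache[name] = f"encoded_plan::{name}"   # one PREPARE per unique statement
    return prepared_cache[name]                          # later calls reuse the cached plan

plan_a = execute("SELECT * FROM bucket WHERE id = $1")
plan_b = execute("SELECT * FROM bucket WHERE id = $1")
assert plan_a is plan_b  # same statement text -> same cached plan
```

Because the prepared name is derived from the statement text, every client issuing the same statement converges on the same cache entry, which is why the question below comes down to whether that reuse is actually happening.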

@MikeGoldsmith might have another idea here.

Thanks a lot for your help, I think I found what was happening.

In the client code, I was starting a lot of async requests. As they were all fired more or less at the same time, CB did not manage to use the same prepared statement for each.

By starting a dummy request before anything else and waiting for it to finish before firing the bulk of the async requests, I’m now able to reuse the same prepared statement for each client.

With that change, I’m getting a throughput of around 8,000 N1QL queries per second, with an average response time of 2ms, which is much better than what I was previously getting.

My feeling is that the 2ms response time is more or less the floor for N1QL queries (I never gained much there). Do you think that, given my cluster configuration, 8K N1QL queries per second is an expected throughput?

To me this part sounds like a defect. Do you have a simple example by chance, or is it just a matter of firing off a lot of async tasks at nearly the same time?

To investigate this in the client, I’ve opened NCBC-1433. Glad you have a workaround for now though!

Hi @quentin.a

I believe what you have done by preparing the statements before your test begins is the best way to ensure the N1QL statements are prepared and reused correctly. If you have statements that would benefit from being prepared in a real-life application, these could be prepared during the start-up process.