lcb_wait() taking most of the time

Hi,

We are building an online application using the Couchbase C SDK, and we use N1QL to fetch data for the application. After analysing the performance of the individual parts of the program, I found that the total operation (setting the query, building the command, passing arguments, and the work done inside the query callback) takes very little time, whereas lcb_wait() takes most of the application time…

For stats:
Time taken by all other operations: 0.000547809 sec
Time taken by lcb_wait() alone: 0.185946043 sec

Time taken by the query to execute on the server (extracted from the metrics returned with the fetched rows): 857 microseconds

Please let me know why lcb_wait() is taking so much time.
Is it because we are using the default Couchbase settings?
How can we reduce this time?

Thank you…

When you call lcb_wait(), it actually executes your commands, which means it has to communicate over the network and receive the responses (including invoking the application's callbacks, such as the query row callback or the get callback). Measuring lcb_wait() is not the way to find your bottlenecks. Have you tried running a CPU profiler? It can show you where the application really spends most of its CPU cycles.
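For illustration, here is a rough sketch of what that split looks like with the libcouchbase 3.x query API (the bucket and statement are placeholders, and the 2.x N1QL API uses different function names). lcb_query() only schedules the request; the network round trip and every invocation of the row callback happen inside lcb_wait(), so a timer around lcb_wait() is effectively measuring network latency plus query-service execution time:

```c
#include <libcouchbase/couchbase.h>
#include <stdio.h>
#include <string.h>

/* Invoked from inside lcb_wait(): once per result row, and once more
 * with the final response that carries the query metadata. */
static void row_callback(lcb_INSTANCE *instance, int cbtype, const lcb_RESPQUERY *resp)
{
    const char *row;
    size_t nrow;
    (void)instance;
    (void)cbtype;
    lcb_respquery_row(resp, &row, &nrow);
    if (lcb_respquery_is_final(resp)) {
        printf("metadata: %.*s\n", (int)nrow, row);
    } else {
        printf("row: %.*s\n", (int)nrow, row);
    }
}

void run_query(lcb_INSTANCE *instance)
{
    const char *stmt = "SELECT * FROM `travel-sample` LIMIT 5"; /* placeholder */
    lcb_CMDQUERY *cmd;

    lcb_cmdquery_create(&cmd);
    lcb_cmdquery_statement(cmd, stmt, strlen(stmt));
    lcb_cmdquery_callback(cmd, row_callback);

    /* Cheap: this only schedules the request, nothing is sent yet. */
    lcb_query(instance, NULL, cmd);
    lcb_cmdquery_destroy(cmd);

    /* Expensive: lcb_wait() runs the IO loop. The request goes out,
     * the client waits for the query service, and row_callback() is
     * dispatched for every row -- all of the network time lands here. */
    lcb_wait(instance, LCB_WAIT_DEFAULT);
}
```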

I’d concur with @avsej there, and this is often a point of confusion with profilers. When profiling, you’ll frequently find lots of code waiting on IO. In libcouchbase that may be lcb_wait() (unless you’re running your IO loop differently), whereas in Java it might be the internal blockForSingle(), etc. Why in IO? IO is pretty much the slowest part of your computer. :slight_smile:

If your N1QL queries are taking a while, try looking at the N1QL metrics returned, running an EXPLAIN on the query you’re using (can it be optimized with a change or a better index?), and try the query profile if needed.
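A rough sketch of the EXPLAIN part, under the same libcouchbase 3.x assumptions as above (the statement itself is made up): the plan comes back as ordinary rows through the query callback, so you can inspect it with the same code path you already use for results, and the final callback's metadata includes the server-side metrics when they are enabled.

```c
#include <libcouchbase/couchbase.h>
#include <stdio.h>
#include <string.h>

/* EXPLAIN output arrives as regular rows; print the plan JSON. */
static void plan_callback(lcb_INSTANCE *instance, int cbtype, const lcb_RESPQUERY *resp)
{
    const char *row;
    size_t nrow;
    (void)instance;
    (void)cbtype;
    lcb_respquery_row(resp, &row, &nrow);
    if (!lcb_respquery_is_final(resp)) {
        printf("plan: %.*s\n", (int)nrow, row);
    }
}

/* If the plan shows a PrimaryScan or no usable index, a better
 * secondary index usually helps more than any client-side change. */
void explain_query(lcb_INSTANCE *instance, const char *stmt)
{
    char explain_stmt[1024];
    lcb_CMDQUERY *cmd;

    snprintf(explain_stmt, sizeof(explain_stmt), "EXPLAIN %s", stmt);

    lcb_cmdquery_create(&cmd);
    lcb_cmdquery_statement(cmd, explain_stmt, strlen(explain_stmt));
    lcb_cmdquery_callback(cmd, plan_callback);
    lcb_query(instance, NULL, cmd);
    lcb_cmdquery_destroy(cmd);
    lcb_wait(instance, LCB_WAIT_DEFAULT);
}
```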

Also, I hope you’re not just profiling the above program as a one-shot run. There is some bootstrap time required while the client gets set up; this is typically not significant for most deployments, where apps run for a long time.
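If it helps, a minimal sketch of that one-time bootstrap (libcouchbase 3.x names; connection string, bucket and credentials are placeholders). Do this once at startup, keep the instance around, and measure steady-state operations against it rather than folding lcb_create()/lcb_connect() into the numbers:

```c
#include <libcouchbase/couchbase.h>
#include <string.h>

/* Bootstrap once and reuse the instance; counting this setup in a
 * one-shot measurement inflates the apparent cost of lcb_wait(). */
lcb_INSTANCE *bootstrap(void)
{
    const char *connstr = "couchbase://127.0.0.1/travel-sample"; /* placeholder */
    const char *user = "Administrator", *pass = "password";      /* placeholders */
    lcb_CREATEOPTS *options = NULL;
    lcb_INSTANCE *instance = NULL;

    lcb_createopts_create(&options, LCB_TYPE_BUCKET);
    lcb_createopts_connstr(options, connstr, strlen(connstr));
    lcb_createopts_credentials(options, user, strlen(user), pass, strlen(pass));
    lcb_create(&instance, options);
    lcb_createopts_destroy(options);

    lcb_connect(instance);
    lcb_wait(instance, LCB_WAIT_DEFAULT); /* bootstrap happens inside this wait */
    if (lcb_get_bootstrap_status(instance) != LCB_SUCCESS) {
        lcb_destroy(instance);
        return NULL;
    }
    return instance;
}
```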

In my first implementation I am being deliberately naive with the waits, invoking one after every store or retrieve. My expectation is that to get good performance for bulk operations I will need to change this later to schedule a whole set of operations and only wait at the end. My assumption is that this should allow some of them to be executed by the server while I create more requests, and that data will then flow as fast as the network will allow.
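For reference, a sketch of that batched pattern with the libcouchbase 3.x store API (the keys and values here are made up): schedule the whole set first, then call lcb_wait() once, so the requests are pipelined over the network and the store callback fires as each response arrives.

```c
#include <libcouchbase/couchbase.h>
#include <stdio.h>
#include <string.h>

/* Called from inside lcb_wait() once per completed store. */
static void store_callback(lcb_INSTANCE *instance, int cbtype, const lcb_RESPSTORE *resp)
{
    (void)instance;
    (void)cbtype;
    if (lcb_respstore_status(resp) != LCB_SUCCESS) {
        fprintf(stderr, "store failed: %s\n", lcb_strerror_short(lcb_respstore_status(resp)));
    }
}

void bulk_store(lcb_INSTANCE *instance)
{
    char key[32], value[64];
    int i;

    lcb_install_callback(instance, LCB_CALLBACK_STORE, (lcb_RESPCALLBACK)store_callback);

    /* Schedule the whole batch; nothing goes out on the wire until the
     * event loop runs inside lcb_wait(). */
    for (i = 0; i < 1000; i++) {
        lcb_CMDSTORE *cmd;
        snprintf(key, sizeof(key), "doc-%d", i);
        snprintf(value, sizeof(value), "{\"n\":%d}", i);

        lcb_cmdstore_create(&cmd, LCB_STORE_UPSERT);
        lcb_cmdstore_key(cmd, key, strlen(key));
        lcb_cmdstore_value(cmd, value, strlen(value));
        lcb_store(instance, NULL, cmd);
        lcb_cmdstore_destroy(cmd);
    }

    /* One wait for the whole batch: operations overlap on the network
     * and store_callback() runs as each response comes back. */
    lcb_wait(instance, LCB_WAIT_DEFAULT);
}
```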