We have a Spring Boot REST application using Spring Data Couchbase. The application fetches data from Couchbase and returns a JSON response. Our service is designed to fetch 2700+ documents from a Couchbase bucket that holds around 1,000,000 documents.
The expected response time of the service is 600 ms, and we were able to achieve an average of 800 ms during load testing with JMeter at 10 users. However, the response time increased dramatically when the test load was raised to 30 users: the average rose to 1720 ms, with 500 ms as the best case and 3272 ms as the worst case.
The application uses the Rx API for async operations in order to achieve high throughput. The code snippet is as follows:
Observable.from(couchbaseRefs).flatMap(id -> bucket.async().get(id)).toList()
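One thing worth noting (this is an editor's aside, not from the original post): the `flatMap` above subscribes to all 2700+ gets at once, so a load spike multiplies the number of in-flight requests. RxJava offers a `flatMap` overload with a `maxConcurrent` argument to cap the fan-out. The same bounded-fan-out idea can be sketched with only the JDK, using a fixed-size pool so that at most a chosen number of fetches run concurrently; `fetch` here is a stand-in for the real `bucket.async().get(id)`:

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.*;

public class BoundedFanOut {
    // Stand-in for the real async call, e.g. bucket.async().get(id).
    static CompletableFuture<String> fetch(String id, Executor ex) {
        return CompletableFuture.supplyAsync(() -> "doc:" + id, ex);
    }

    public static void main(String[] args) {
        // Cap in-flight fetches at 32 instead of firing all 2700 at once;
        // extra tasks simply queue inside the pool.
        ExecutorService pool = Executors.newFixedThreadPool(32);

        List<String> ids = IntStream.range(0, 2700)
                .mapToObj(i -> "ref-" + i)
                .collect(Collectors.toList());

        List<CompletableFuture<String>> futures = ids.stream()
                .map(id -> fetch(id, pool))
                .collect(Collectors.toList());

        List<String> docs = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());

        System.out.println(docs.size());
        pool.shutdown();
    }
}
```

In Rx terms the equivalent would be roughly `Observable.from(couchbaseRefs).flatMap(id -> bucket.async().get(id), 32).toList()`; the best value for the cap depends on the cluster and should be found by experiment.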
Upon analysing the thread dump report, we noticed considerable CPU wait within the Couchbase API and Rx; the breakdown results below show the same. Kindly help us understand this behaviour. We would also appreciate any performance tuning suggestions for the Java SDK or the Couchbase setup.
Application uses: Spring Data Couchbase 2.1.2.RELEASE
Couchbase: 4.5.0-2601 Enterprise Edition with 2 nodes
Yes, your observation about CPU wait is right. The BlockingWaitStrategy in the LMAX Disruptor does not burn CPU if there are no requests to be handled; it is conservative with respect to CPU usage. There may be slightly higher latency, but resources are shared efficiently.
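To illustrate the point about parked threads (this sketch is an editor's addition, not from the original posts): a thread blocked waiting for work parks via `LockSupport.park`, so it shows up in a thread dump as WAITING "in park" even though it is consuming no CPU at all:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ParkedThreadDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                queue.take(); // parks (LockSupport.park) until an item arrives
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "io-wait-demo");
        consumer.start();

        Thread.sleep(200); // give the consumer time to block on take()

        // A thread dump taken now would show this thread as WAITING (parking).
        // It looks "stuck" in park() but burns no CPU while it waits.
        System.out.println(consumer.getState());

        queue.put(1); // unblock it
        consumer.join();
        System.out.println(consumer.getState());
    }
}
```

The same reasoning applies to the Disruptor's event-processing threads: time attributed to `park` in a profiler is wall-clock waiting, not CPU work.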
Then why is the park method taking so much time when the load is increased to 30 users?
The application uses the Rx API for async operations.
Could it be that the Couchbase client SDK is not thread-safe when used with RxJava, causing deadlocks that make the response time increase?
Hey @Tal_Avissar, I’d recommend opening a new topic with your specific info rather than tagging a question onto a 3-year-old topic. Can you raise a new topic with your specific observations (including how you’re measuring time)?
I’ll look out for that other topic.
Hey @ingenthr - we’re having the same issue as mentioned in this thread and I think the original concern was not addressed
In this case, I believe it was. @subhashni effectively said you will see this in the stack, but it is not actively using CPU time. It is a pretty normal state to find threads in when they are waiting on IO completion.
If you have some other observations based on profiling or a reason to suspect a deadlock (like one in a thread dump), that’d be a different discussion, of course.