Couchbase Java SDK connection pooling

Hello there,

I am using Couchbase Java SDK 2.7.6 with Couchbase Server Community 4.5.
My Java Spring code is roughly as below.

public class SessionHelper {
	public Bucket bucket;

	public SessionHelper() {
		final CouchbaseEnvironment env = DefaultCouchbaseEnvironment.create();
		Cluster cluster = CouchbaseCluster.create(env, "ipaddress");
		this.bucket = cluster.openBucket("bucket", "password");
	}
}

public class CommonService {

	SessionHelper sh;

	public int insertUserInfo() {
		// ... sh.bucket.upsert(...)
	}
}

Bucket processing delays are often detected by my APM tool. (Actually it's not processing time but waiting time.)

When an API is called from outside, bucket.upsert() in CommonService is called from a new thread, which means multiple threads share a single Couchbase client.
Can this situation lead to socket processing delays?
Is parallel processing possible without delay in synchronous (blocking) mode?

I know that bucket.async().upsert() is for asynchronous mode, and the two CouchbaseEnvironment values ioPoolSize and computationPoolSize look related to connection pooling.
What exactly do ioPoolSize and computationPoolSize do?
What are the recommended values for each?

Thank you.

Hi wanjune, welcome to the forum!

It’s safe and efficient to share the same bucket between threads. If two threads each do an “upsert” at the same time, the requests are executed in parallel, even if they both use the blocking API.

ioPoolSize and computationPoolSize set the number of threads used for asynchronous IO and computation, respectively (see the SDK 2.7.6 API reference). They default to the number of CPUs, which is the recommended value. Since all of the SDK's IO is ultimately asynchronous, increasing the number of threads is typically not expected to improve performance… but you can experiment and see.
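For reference, both settings are configured on the environment builder in SDK 2.x. A minimal sketch (the value 4 is only an example; the defaults already match your CPU count):

```java
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class EnvConfig {
    public static void main(String[] args) {
        // Sketch only: both pools default to the number of available processors.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .ioPoolSize(4)          // Netty event-loop threads doing socket IO
                .computationPoolSize(4) // scheduler threads running response handling / Observable callbacks
                .build();
    }
}
```

The environment should be built once and shared, since each one owns its own thread pools.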

It’s worth noting that Couchbase Java SDK 2.7 reached End of Life in March 2022, and Couchbase Server 4.5 reached End of Life in April 2019 (reference).


Thank you David.

As you’ve mentioned, the Couchbase Server and SDK versions we’re using are outdated, so our company is considering upgrading them.

I know that the ioPoolSize value is the number of NIO workers in the Netty network module. There is no bottleneck in the Couchbase SDK itself, thanks to its asynchronous communication with the Couchbase server. So I think the number of workers needed ultimately depends on the bandwidth a single worker can process. If a server doesn’t handle more than a few hundred MB per second, I think it doesn’t need more than 3 workers. This part is clear.
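As a back-of-envelope check of that reasoning (the per-worker throughput number here is purely an illustrative assumption, not a measurement):

```java
public class WorkerSizing {
    // Rough sizing: how many IO workers are needed for a target throughput,
    // assuming each Netty event-loop thread can sustain a fixed bandwidth.
    static int workersNeeded(int targetMBps, int perWorkerMBps) {
        return (targetMBps + perWorkerMBps - 1) / perWorkerMBps; // ceiling division
    }

    public static void main(String[] args) {
        // If one worker can push ~100 MB/s (assumed) and the app needs ~300 MB/s,
        // three workers are enough under this model.
        System.out.println(workersNeeded(300, 100)); // prints 3
    }
}
```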

The problem is computationPoolSize.
I understand that computationPoolSize is the number of workers that process the Response Ring Queue.

In this article,

Ordering is kept by an Observable and all emitted items are processed sequentially by the same thread handling the stream. This means that all the rows are processed by one thread picked from the pool. If ordering is not important and the row transformation code is expensive it’s possible to achieve parallel processing.

As the writer says, if latency is more important than ordering, then one has to change the source code to use the asynchronous API. Until then, the computationPoolSize value is useless. Am I right?
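To illustrate the tradeoff I mean, here is a rough sketch in plain Java (standing in for the SDK’s Observables; the class and method names are hypothetical, and transform() is a placeholder for an expensive row transformation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

public class RowProcessing {

    // Stand-in for an expensive per-row transformation.
    static String transform(int row) {
        return "row-" + row;
    }

    // Sequential: a single thread transforms every row, one after another.
    static List<String> sequential(List<Integer> rows) {
        return rows.stream().map(RowProcessing::transform).collect(Collectors.toList());
    }

    // Parallel: each row is handed to a pool thread, so the work itself runs
    // concurrently and latency drops when transform() is costly. Results are
    // gathered back in submission order here, so ordering is still preserved.
    static List<String> parallel(List<Integer> rows) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (final int row : rows) {
                futures.add(pool.submit(() -> transform(row)));
            }
            List<String> out = new ArrayList<>();
            for (Future<String> f : futures) {
                out.add(f.get());
            }
            return out;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```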