Couchbase BucketClosedException while bucket is in use

CB Server: Enterprise Edition 6.0.0 build 1693
CB Java SDK: 2.7.10
I’m running a stress test on my CB cluster, generating a load of up to 1k ops/sec. The load is a mix of set/get operations, with the two operation types roughly equal in number. The cluster has 2 buckets, of which I’m using only one.
I’m connecting to the cluster in the following manner:

Cluster cluster = CouchbaseCluster.create(config.getString(CB_GE_HOSTS));
cluster.authenticate(new PasswordAuthenticator(config.getString(CB_GE_USERNAME), config.getString(CB_GE_PASSWORD)));
bucket = cluster.openBucket(config.getString(CB_GE_BUCKET));

In my code, I call bucket.disconnect() only when the pod gets a shutdown signal, so I’m not explicitly closing the bucket or the cluster connection during normal operation.
While the operations succeed initially, I start to get BucketClosedException while the load is still running.
Following is the stacktrace I have in the logs:
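For reference, the connect-once / close-only-on-shutdown pattern described above can be sketched as below. This is a minimal, self-contained sketch: the `Connection` class is a stand-in for the SDK objects, since in the real service its constructor would be the `CouchbaseCluster.create(...)` / `authenticate(...)` / `openBucket(...)` calls shown earlier, and `close()` would be `bucket.close()` plus `cluster.disconnect()`.

```java
// Sketch of the connect-once, close-on-shutdown pattern from the post.
// Connection is a hypothetical stand-in for the SDK's Cluster/Bucket pair.
public class CouchbaseHolder {
    static final class Connection implements AutoCloseable {
        volatile boolean open = true;
        @Override public void close() { open = false; }
    }

    // Created exactly once, in static initialization, as described in the post.
    static final Connection BUCKET = new Connection();

    static {
        // Close only when the JVM receives a kill/terminate signal,
        // i.e. never while the load test is running.
        Runtime.getRuntime().addShutdownHook(new Thread(BUCKET::close));
    }
}
```

Under this pattern nothing in application code ever closes the bucket mid-run, which is why the exception below is surprising.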

com.couchbase.client.core.BucketClosedException: gameengine has been closed
        at com.couchbase.client.core.RequestHandler.dispatchRequest(RequestHandler.java:240) ~[com.couchbase.client.core-io-1.7.10.jar:na]
        at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:207) ~[com.couchbase.client.core-io-1.7.10.jar:na]
        at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:78) ~[com.couchbase.client.core-io-1.7.10.jar:na]
        at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) ~[com.couchbase.client.core-io-1.7.10.jar:na]
        at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) ~[com.couchbase.client.core-io-1.7.10.jar:na]
        at com.couchbase.client.deps.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[com.couchbase.client.core-io-1.7.10.jar:na]
        at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_242]

Doesn’t this usually happen only with idle connections? I have a single pod instance running, and I create the connection to the cluster just once (in a static code segment).
This happens more often when I increase the traffic to the CB cluster. The initial requests perform fine, but then all calls to CB start to fail. The cluster currently has a single EC2 instance, with 90 GB allocated to the data service and 512 MB to the index service. When I look at server resources, it lists ~45 connections.

I found that the disconnect is being triggered from the client end:

2020-03-26 09:17:14.921 +0000 - [DEBUG] - from com.couchbase.client.core.node.Node -  - com.couchbase.client.core.logging.Slf4JLogger - [Thread-29] - []: Got instructed to disconnect.
2020-03-26 09:17:14.921 +0000 - [DEBUG] - from com.couchbase.client.core.node.Node -  - com.couchbase.client.core.logging.Slf4JLogger - [Thread-29] - []: Instructing Service CONFIG to disconnect.
2020-03-26 09:17:14.921 +0000 - [DEBUG] - from com.couchbase.client.core.node.Node -  - com.couchbase.client.core.logging.Slf4JLogger - [Thread-29] - []: Instructing Service VIEW to disconnect.
2020-03-26 09:17:14.921 +0000 - [DEBUG] - from com.couchbase.client.core.service.Service -  - com.couchbase.client.core.logging.Slf4JLogger - [Thread-29] - [][ConfigService]: Got instructed to disconnect.

However, the service explicitly invokes bucket.close() only when it receives a kill/terminate signal.