Couchbase 7.0 deployment strategies across environments to improve utilization and reduce context switching

We deploy Couchbase servers ad hoc for each application and each environment on top of the same Kubernetes cluster. Although our use case is on Kubernetes, I think the following question may be relevant to any other system as well.

Story (long):
As an example, application A, which is under active development, has a full deployment of itself (through Helm charts) to facilitate testing through CI/CD. Say we deploy components C1 and C2 on Kubernetes to test a new feature committed to C3 that interacts with C1 and C2. Since A uses a Couchbase server as its K/V and query layer, a CB server (CB1) is deployed onto Kubernetes for application A in the testing environment.

Then component C4 is developed while a development deployment of A exists. This development deployment uses a completely separate CB server (CB2).

In the staging, canary, and production environments we likewise run separate CB instances. For a single application A we therefore have 5 CB deployments; for application B we have another 5 CB deployments.

Story (short):
As we have learned, a single CB instance may run 300+ threads. With the 30+ CB instances on the Kubernetes cluster, and with each testing, development, and staging instance holding only a few hundred thousand data points, we observe 5,000 to 6,000 threads on each Kubernetes node. This leads to hundreds of thousands of context switches, which significantly degrades performance.

Questions:
Q1 Would the following model reduce the number of threads: instead of running a separate CB cluster for each application and environment, use one CB cluster and separate applications into buckets and environments into scopes? (A rough sketch of what client access would look like under that layout follows below.)
Q2 If it would, by roughly how much?
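
For clarity, here is a minimal sketch of what client access might look like under that consolidated model, using the Couchbase Python SDK. The connection string, credentials, bucket name (one per application), scope name (one per environment), and collection name (one per component) are all hypothetical placeholders, not our real setup:

```python
# Minimal sketch: one shared Couchbase cluster, one bucket per application,
# one scope per environment, collections per component.
# All names and credentials below are placeholders.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbase://shared-cb.couchbase.svc.cluster.local",  # single shared cluster
    ClusterOptions(PasswordAuthenticator("app_a_user", "secret")),
)

bucket = cluster.bucket("application-a")   # one bucket per application
scope = bucket.scope("testing")            # one scope per environment
collection = scope.collection("c1")        # one collection per component

# Plain K/V access works as before, just namespaced by bucket/scope/collection.
collection.upsert("doc::1", {"feature": "under-test"})
print(collection.get("doc::1").content_as[dict])
```

The idea being that isolation would become logical (bucket/scope) rather than process-level, so the per-instance thread overhead would be paid once per cluster node instead of once per application/environment deployment.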