Ok so, to get documents from different buckets, you first have to open a connection to each of those buckets:
Cluster cluster; //you should already have that part... remember to reuse Cluster and Bucket instances
Bucket bucket1 = cluster.openBucket("bucket1", "pass1");
Bucket bucket2 = cluster.openBucket("bucket2", "pass2");
Bucket bucket3 = cluster.openBucket("bucket3", "pass3");
Once you have these references, you can get your documents:
JsonDocument docFromB1 = bucket1.get("a");
JsonDocument docFromB2 = bucket2.get("b");
JsonDocument docFromB3 = bucket3.get("c");
The problem, as you may have guessed, is that this is done serially, in a blocking fashion: each get waits for the previous one to finish.
If you don’t need each doc in its own semantically named variable, and would instead be happy with a list of the docs in any order, I can offer the following async optimization:
AsyncCluster asyncCluster = CouchbaseAsyncCluster.create(env, listOfIP);
//prepare to open each bucket then get its document, asynchronously
//(openBucket returns an Observable<AsyncBucket>, so flatMap into the get)
Observable<JsonDocument> doc1 = asyncCluster.openBucket("bucket1", "pass1")
        .flatMap(bucket -> bucket.get("a"));
Observable<JsonDocument> doc2 = asyncCluster.openBucket("bucket2", "pass2")
        .flatMap(bucket -> bucket.get("b"));
Observable<JsonDocument> doc3 = asyncCluster.openBucket("bucket3", "pass3")
        .flatMap(bucket -> bucket.get("c"));
//trigger the actual connections by merging the 3 individual bucket streams...
List<JsonDocument> allDocs = Observable.merge(doc1, doc2, doc3)
        //...then collect each doc into a common List...
        .toList()
        // optionally set a timeout for the whole operation by chaining in ".timeout(duration, timeUnit)" here
        //...then block, rendezvous-style, until all of this finishes (it can execute in parallel)
        .toBlocking()
        .single();
The optimization here is that each individual bucket stream can execute in parallel: connections are opened and documents retrieved concurrently, and all documents are then aggregated into a single list that you wait on at the end.
The catch is that, for instance, bucket2 may happen to respond faster and serve doc2 first, in which case the documents arrive in the list in the order [doc2, doc1, doc3].
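If you do end up needing to know which doc came from which bucket, the same fan-out/fan-in pattern can keep positional order while still fetching in parallel. Here's a sketch with plain JDK CompletableFutures (the `fetch` method is a hypothetical stand-in for an async bucket get, not a Couchbase API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class ParallelFetch {

    // Hypothetical stand-in for an async bucket.get(id) call
    static CompletableFuture<String> fetch(String bucket, String id) {
        return CompletableFuture.supplyAsync(() -> bucket + ":" + id);
    }

    public static void main(String[] args) {
        // Fan out: the three fetches run in parallel on the common pool
        List<CompletableFuture<String>> futures = Arrays.asList(
                fetch("bucket1", "a"),
                fetch("bucket2", "b"),
                fetch("bucket3", "c"));

        // Fan in: wait for all of them, then collect the results in
        // *declaration* order, regardless of which fetch completed first
        List<String> allDocs = CompletableFuture
                .allOf(futures.toArray(new CompletableFuture[0]))
                .thenApply(v -> futures.stream()
                        .map(CompletableFuture::join)
                        .collect(Collectors.toList()))
                .join();

        System.out.println(allDocs); // [bucket1:a, bucket2:b, bucket3:c]
    }
}
```

The key difference from `Observable.merge` is that the list is built by iterating the futures in the order you declared them, so completion order no longer matters.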