Memcached buckets return inconsistent results


I am evaluating the use of memcached buckets in our project. I created a bucket and am able to connect to it and add documents to it using the Java SDK. I can see the ids in the web console. However, when I try to query all 4 objects that I added earlier using bucket.get, I get inconsistent results: sometimes I get only 1 result and the others come back null, sometimes 2, and sometimes all 4…
I am using Java SDK client 2.5.3.

Any pointers on what I should look at to solve this issue?

Thank you!

@sachinguptus can you please share the code that you are using to store and fetch the documents? Are you inserting and reading from only one SDK (Java only), or are you using multiple languages or SDK versions?

I am using the following code to get the memcached bucket, and then the “put” and “get” methods on the bucket to store and retrieve documents.
I get all the documents back if I query immediately after putting them in. However, when I query after some time I get only “some” of the inserted documents; the others are reported as not found.
I am using the Java SDK, client version 2.5.3, but have tried other versions as well with similar results.
I am only inserting 50 small documents for testing purposes, and the bucket is configured with a 5 GB size.

CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder().build();
String host = "_couchbase._tcp.xxxx.xxxx";
List<String> addresses = new ArrayList<>();
try {
	addresses = Bootstrap.fromDnsSrv(host, true, false);
	for (String adr : addresses) {
		System.out.println("Got address " + adr);
	}
} catch (NamingException e) {
	e.printStackTrace();
}
Cluster cluster = CouchbaseCluster.create(env, addresses);
Bucket bucket = cluster.openBucket("zzzzzzzz-zzzz", "zzzxxxx");

Can you also show me the put and get method calls, if possible?

Also, are you performing any topology changes between the writes and reads (adding nodes, removing nodes, failover, rebalance)?

Sure, here they are. No, I am not doing any topology changes in between…

public static void insert(int from, int to, Bucket bucket) {
	JsonObject user = JsonObject.empty()
		.put("firstname", "Walter")
		.put("lastname", "White")
		.put("job", "chemistry teacher")
		.put("age", 23);
	for (int i = from; i < to; i++) {
		System.out.println(bucket.upsert(JsonDocument.create(key + i, user)));
	}
}

public static void readAndPrint(int from, int to, Bucket bucket) {
	for (int i = from; i < to; i++) {
		try {
			System.out.println(bucket.get(key + i));
		} catch (Exception e) {
			System.out.println(key + i + " does not exist. " + e.toString());
		}
	}
}

Hmm, okay, that looks fine; no expiration used or anything like that. I think we need to go a little further down. Can you enable TRACE level logging for your full run and post the log? If you don’t want to redact the hostnames and such, you can also send it to me here via private message. Maybe this gives us more of a clue about what’s going on at the lower levels.
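In case it helps: with the 2.x Java SDK, the default logging backend is JDK logging, so one way to get the most verbose output is to raise the level on the SDK's logger namespace and on the root handlers. This is a sketch under that assumption (if you have SLF4J/Log4j on the classpath, you would configure TRACE there instead):

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class EnableTrace {

    public static void enable() {
        // Raise the SDK's logger namespace to the finest (TRACE-equivalent) level
        Logger sdkLogger = Logger.getLogger("com.couchbase.client");
        sdkLogger.setLevel(Level.FINEST);
        // Handlers filter records independently of loggers, so raise them too
        for (Handler h : Logger.getLogger("").getHandlers()) {
            h.setLevel(Level.FINEST);
        }
    }

    public static void main(String[] args) {
        enable();
        System.out.println(Logger.getLogger("com.couchbase.client").getLevel());
    }
}
```

Call `EnableTrace.enable()` before creating the `CouchbaseEnvironment` so the bootstrap phase is captured in the log as well.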

Alternatively, if you have a standalone program I can run to reproduce the issue (ideally with the setup you are using), that would also work.


Private messages have a length limit of 32,000 characters and the logs are way longer… is there any specific part that I can send to you?

you can also drop me an email to michael dot nitschinger at couchbase dot com :slight_smile:


I have sent you an email with the logs; let me know if you need further info.

Thank you.

Thank you @daschl. After fixing the key-mismatch bug, I can no longer reproduce the issue. I hope it was not a false alarm and will report back if I see the issue again.
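For future readers: a key mismatch means the writer and reader derive document keys differently, so some reads target keys that were never written, and the result looks like documents randomly going missing. A minimal illustration of the failure mode (the key formats and the in-memory map standing in for the bucket are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class KeyMismatchDemo {
    // Stand-in for the bucket: a plain in-memory map
    static Map<String, String> store = new HashMap<>();

    // The writer builds keys one way...
    static String writeKey(int i) { return "user" + i; }

    // ...but the reader builds them another way (the hypothetical bug)
    static String readKey(int i) { return "user-" + i; }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            store.put(writeKey(i), "doc" + i);
        }
        for (int i = 0; i < 4; i++) {
            String doc = store.get(readKey(i));
            // Every lookup misses because "user-0" was never written, only "user0"
            System.out.println(readKey(i) + " -> " + (doc == null ? "not found" : doc));
        }
    }
}
```

Centralizing key construction in a single helper shared by the insert and read paths removes this whole class of bug.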