View inconsistency - Key duplicates

Hi everyone,

We recently upgraded our cluster to Couchbase 3.0.1 Community Edition and are facing a problem we never had with previous versions. From time to time, with the latest Couchbase version, views become “corrupted” and include multiple occurrences of a key that was emitted only once.

We managed to reproduce the bug with a simple script that populates a bucket with 1,000,000 documents, each containing a single field “random” assigned a random value in the 0-9 range. Then you just have to perform random updates, requesting all documents with a given “random” value using the following view:

function (doc, meta) {
  emit(doc.random, null);
}

You just have to update the “random” field to a new random value and save the documents. After a while you’ll notice duplicates in the index: a document is emitted more than once, and we don’t understand why.

Do you have any ideas concerning the origin of our problem?

Have a nice day!


I thought you might find the example below useful. This is what we can find in the view:


As you can see, we have the same document which emits two different keys.

Note that it is entirely valid for the map function to call emit() multiple times, so if your function does that you can end up with more than one entry in the view results. That’ll depend on your view logic.

If you’re confident that your view logic doesn’t do this, can you check the map_reduce.errors log to see if there is something “interesting” in there? Also, if your map function is small, could you post it here? If it’s really complicated, it may be better to post it elsewhere (like a gist on GitHub) and post a link here.

Hi ingenthr,

I’m well aware of that! But as you can see in my first post, my map function only calls emit once.

I’ll check map_reduce.errors and see if there is anything there worth mentioning! Thanks for pointing that out.

OK, I just took a look at mapreduce_errors.log and the file is empty.

Also, I ran another batch of tests on my bucket and, when I disable bucket compaction, the problem no longer appears.

Can you please have a look at the couchdb.log and look for messages like:

Compactor for set view `...`, ... group `...`, applying delta of ... changes (retry number ..., max # of log files per btree ...)

The ... parts are specific to your run, so grep for something like “applying delta of”. That would help a lot in finding the issue.

Sure! Here is the result:

My bad, I have run a lot of different tests and this may not be representative of what really happens. I’m going to clear the logs and run the tests again. I’ll link the new logs as soon as possible.

Hey guys! Any news concerning the issue?

No news yet. I’ll track further developments in the JIRA issue MB-13160. Please keep an eye on the issue.