Same objects dequeued multiple times

The same objects are getting dequeued from the CouchbaseQueue multiple times. The expectation is that an item is removed from the queue before it can be accessed by a subsequent request. Instead, the same items are being retrieved from the queue more than 200 times over the course of 15 minutes. Also, this issue doesn’t occur in my single-node test environment, which leads me to believe that CouchbaseQueue does not work with replication.

My application has several clients. It uses a CouchbaseQueue to store “jobs”. Couchbase is clustered with 3 nodes, and the queue may hold over 5,000 objects at a time. We are using version 2.7.16 of the .NET SDK.

Is there a way to enforce durability for a CouchbaseQueue? Is there anything that can be done at the query level or the server level to speed up replication, so that enqueuing and dequeuing in quick succession is reliable?

    public QueueClient(string bucketName, string queueName)
    {
        _bucket = ClusterHelper.GetBucket(bucketName);
        _couchbaseQueue = new CouchbaseQueue<QueueMessage>(_bucket, queueName);
    }

    public QueueMessage Dequeue()
    {
        return _couchbaseQueue.Dequeue();
    }

Hi @quin-l,

This may not be related, but what does CouchbaseConfig.Register do? This doesn’t look like it’s part of the .NET SDK.

Also, what version of the SDK are you using? I’m assuming 2.x?

Now, to address the problem you’re seeing, here’s my two cents:

It doesn’t look like you are using the SyncRoot pattern. CouchbaseQueue appears to implement ICollection, which I don’t believe is thread-safe out of the box. Please correct me if I’m wrong, @jmorris (or @ingenthr or anyone else), but I think you’ll need to lock on SyncRoot if you are accessing the queue concurrently, to avoid race conditions. As for why it works locally, I suspect that’s not because of replication, but because the race conditions are simply less likely to occur there.
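To illustrate, a dequeue guarded by SyncRoot might look something like this. This is only a sketch, assuming CouchbaseQueue&lt;T&gt; exposes ICollection.SyncRoot as described above:

    using System.Collections;

    public QueueMessage Dequeue()
    {
        // Lock on the collection's SyncRoot so that concurrent callers
        // within this process cannot dequeue the same item at once.
        lock (((ICollection)_couchbaseQueue).SyncRoot)
        {
            return _couchbaseQueue.Dequeue();
        }
    }

Keep in mind that a SyncRoot lock only serializes access within a single process; if several client machines are dequeuing from the same queue, they would still need some form of cross-process coordination.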

Yes, I believe that’s correct. The idea with CouchbaseQueue is that we mirror the System.Collections interfaces, including their conventions around locking, so thread safety is left to the caller. We might want to document this a bit better, though.