Partition Rollback issue

We are using the latest version of the Couchbase Kafka connector (4.2.1) and noticed that all partitions are being rolled back.

Below is an example for one of the partitions (partition=380).

The resume-from offsets for partition 380 are as follows:


After starting the connector, we received this warning:

Received rollback for vbucket 380 to seqno 0 ; requested start offset was: partitionUuid = 76884041696838, seqno = 3397918, snapshot = [3397918-3397918], collectionManifestUid = 0
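For anyone debugging the same thing: the fields can be pulled out of that warning with a small Python sketch. The helper name and regex below are mine, written against the exact message format shown above, not part of the connector.

```python
import re

# Hypothetical helper: regex is derived from the connector warning quoted above.
ROLLBACK_RE = re.compile(
    r"Received rollback for vbucket (?P<vbucket>\d+) to seqno (?P<rollback_seqno>\d+)"
    r" ; requested start offset was: partitionUuid = (?P<uuid>\d+),"
    r" seqno = (?P<seqno>\d+), snapshot = \[(?P<snap_start>\d+)-(?P<snap_end>\d+)\],"
    r" collectionManifestUid = (?P<manifest_uid>\d+)"
)

def parse_rollback_warning(line: str) -> dict:
    """Return the numeric fields of a rollback warning as a dict."""
    m = ROLLBACK_RE.search(line)
    if m is None:
        raise ValueError("not a rollback warning: " + line)
    return {k: int(v) for k, v in m.groupdict().items()}

warning = ("Received rollback for vbucket 380 to seqno 0 ; requested start offset "
           "was: partitionUuid = 76884041696838, seqno = 3397918, "
           "snapshot = [3397918-3397918], collectionManifestUid = 0")
print(parse_rollback_warning(warning))
```

Running this over the connector logs makes it easy to see at a glance whether the rollbacks are all to seqno 0 (which points at the purge case discussed below in the thread) or to some intermediate seqno.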

After letting the connector run for a few minutes, we observed the following offset for this partition:


Comparing the “resume from” and “current” offsets for partition 380, we see that the “vbuuid” value remained the same. Why would the Kafka connector request a rollback for this partition?

The Couchbase Server version is 7.1.4-3601-enterprise.

We read the documentation at kv_engine/docs/dcp/documentation/ (master branch of couchbase/kv_engine on GitHub) but could not identify why a rollback to 0 occurred.

OK - after re-reading the documentation, and noting that our Couchbase version is 7.1.4-3601-enterprise, I am guessing the issue is this case:

2. Wild Card 'Purge'

SnapStartSeqno < PurgeSeqno and StartSeqno != 0

The consumer needs to fully roll back to 0 (if the requested start seqno is not already 0). This is necessary because, for a consistent view, the consumer must not miss any deleted (and subsequently purged) items on the producer.

Note [CB 7.2]: DCP consumers can choose to opt-out of this rollback case by setting the Ignore Purged Tombstones flag as part of their StreamRequest flags - see add-stream. However, this can result in the consumer having an inconsistent view of the vBucket - they can miss Deletion messages for documents and hence end up with stale documents (which have been deleted on the server) forever.
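If I am reading that rule correctly, the decision the server makes can be sketched as below. The function name is mine, and the purge-seqno values are made up for illustration; the condition itself is the one quoted from the docs.

```python
def rollback_to_zero_required(start_seqno: int, snap_start_seqno: int,
                              purge_seqno: int) -> bool:
    """Wild-card 'purge' rollback rule from the kv_engine DCP docs:
    a full rollback to 0 is required when the requested snapshot start
    is below the purge seqno and the stream is not already starting at 0."""
    return snap_start_seqno < purge_seqno and start_seqno != 0

# Using the seqnos from the warning above (snapshot = [3397918-3397918]).
# The purge seqnos here are hypothetical, just to show both outcomes:
print(rollback_to_zero_required(3397918, 3397918, purge_seqno=4_000_000))
print(rollback_to_zero_required(3397918, 3397918, purge_seqno=1_000_000))
```

So if the vBucket's purge seqno has advanced past 3397918 (for example after tombstone purging), the requested resume point falls inside the purged range and the server has no choice but to roll the stream back to 0, even though the vbuuid still matches. If I am right, checking the vBucket's purge seqno on the cluster (e.g. the purge_seqno stat in cbstats vbucket-details, if I recall the stat name correctly) should confirm it is above the requested start seqno.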
