Couchbase Kafka Connector leaks event

I am using the Couchbase Kafka connector, version 4.0.1, in distributed mode to capture business events and send them to Kafka.
I sometimes encounter an interesting case that hasn't been discussed or solved: when documents are updated in rapid succession, some of the updates are not sent to Kafka. I checked the event logs, and they show that only the latest version was sent to Kafka.
Is there any mistake in my config?

Couchbase -> Enterprise Edition 6.0.2

{
  "connector.class": "com.couchbase.connect.kafka.CouchbaseSourceConnector",
  "couchbase.persistence.polling.interval": "0",
  "tasks.max": "3",
  "couchbase.compression": "ENABLED",
  "": "2000",
  "transforms": "ignoreDeletes,deserializeJson",
  "couchbase.source.handler": "com.couchbase.connect.kafka.handler.source.RawJsonSourceHandler",
  "couchbase.seed.nodes": "nodes",
  "couchbase.bucket": "bucket",
  "couchbase.username": "user",
  "value.converter.schemas.enable": "false",
  "event.filter.class": "com.couchbase.connect.kafka.filter.AllPassFilter",
  "name": "couchbase-connector",
  "couchbase.password": "pass",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "transforms.deserializeJson.type": "com.couchbase.connect.kafka.transform.DeserializeJson",
  "couchbase.topic": "topic",
  "transforms.ignoreDeletes.type": "com.couchbase.connect.kafka.transform.DropIfNullValue"
}

Hi Onur,

There’s no mistake in your config; this is the expected behavior. Only the latest version of a document is guaranteed to be published to Kafka; intermediate and/or historical versions might not be published.
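To make the behaviour concrete, here is a minimal sketch (illustrative only, not the connector's actual code) of how snapshot-style deduplication by document key collapses rapid updates to the same document into a single latest-version event:

```python
# Hypothetical sketch: DCP-style snapshots deduplicate per document key,
# so rapid successive updates to one document may collapse into a single
# "latest version" event. Names and data below are illustrative.

def dedup_snapshot(mutations):
    """Keep only the latest mutation per document key."""
    latest = {}
    for key, value in mutations:
        latest[key] = value  # later mutations overwrite earlier ones
    return list(latest.items())

mutations = [
    ("order::42", {"status": "created"}),
    ("order::42", {"status": "paid"}),      # may never reach Kafka
    ("order::42", {"status": "shipped"}),   # only this version is guaranteed
]

print(dedup_snapshot(mutations))
# [('order::42', {'status': 'shipped'})]
```

The intermediate "created" and "paid" versions are exactly the events that can disappear when updates arrive faster than the stream can publish them.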

I think we can do a better job of documenting this limitation, so I filed KAFKAC-237.


Thanks, David.
I will reconsider my system architecture for processing events.

Hi @onur,

If you update your system design to use unique keys and stream those to Kafka, you will see every event and avoid the dedup behaviour that @david.nault pointed out.
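One way to apply the unique-key idea (a sketch; the key format and helper name are my own invention, not a Couchbase API) is to write each business event as a new immutable document instead of mutating one document in place:

```python
# Hypothetical sketch: give every business event its own document key,
# so deduplication by key can never collapse two events into one.
import uuid

def event_key(doc_type, doc_id):
    """Build a unique key for an immutable event document."""
    return f"{doc_type}::{doc_id}::event::{uuid.uuid4()}"

# Each status change becomes a distinct document:
k1 = event_key("order", "42")
k2 = event_key("order", "42")
assert k1 != k2  # distinct keys -> distinct mutations -> all reach Kafka
```

Because every event lands under a fresh key, each one is a separate mutation on the stream, at the cost of more documents to manage (and eventually expire or delete).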

I know you are using Kafka. However, as the Eventing PM I do want to point out that the Couchbase Eventing service is also subject to the same dedup behaviour that you just experienced in the Kafka connector. For a simple description of dedup (from the Eventing perspective), refer to eventing-Terminologies.html#deduplication
