XDCR sending 300K+ mutations for documents which no longer exist
We were setting up ElasticSearch replication via XDCR and running some tests that included generating around 300K documents. In the process we stopped the replication, deleted it from the web console, deleted almost all of the documents (leaving 5), and then set up XDCR again from scratch. However, we noticed that mutations for all of those deleted documents, which no longer existed in the bucket, were still being sent over the wire.
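To make the numbers concrete, here is a toy model of what we *think* is happening (a guess on our part, not something taken from the Couchbase documentation): deleting a document seems to leave behind a deletion marker rather than erasing the key, so a brand-new replication still streams one mutation per key the bucket has ever seen. All names and counts below are just for illustration.

```python
def final_state(num_docs, keep=5):
    # Bucket state after our test: `keep` live documents, and (we assume)
    # one deletion marker ("tombstone") per deleted key.
    state = {}
    for i in range(num_docs):
        state[f"doc_{i}"] = "SET"
    for i in range(keep, num_docs):
        state[f"doc_{i}"] = "TOMBSTONE"
    return state

def fresh_replication(state):
    # Our observed behavior: a brand-new replication appears to send a
    # mutation for every key -- tombstones included -- not just live docs.
    return list(state.items())

state = final_state(300_000)
sent = fresh_replication(state)
tombstones = sum(1 for _, kind in sent if kind == "TOMBSTONE")
print(f"{len(sent)} mutations sent, {tombstones} of them for deleted docs")
```

With our numbers that works out to roughly 300K mutations on the wire for only 5 surviving documents, which matches what we saw in the console.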
Given that this was an entirely new replication, it seems like the most efficient approach would be to push over all documents in their current state and ignore older mutations, since they aren't really needed. Does the observed behavior mean that if we set up a new replication six months or a year down the road, it will have to push over every mutation since the beginning of the bucket's life? Is this expected behavior, or is there something we are missing or misunderstanding?