In our use case we have around 3 million DB rows to download, which takes around 2 hours and is too slow for us. To speed up the process we decided to run multiple replicators in parallel, but that leads to about 10% of the inserts failing with the following message.
It looks like while one thread is writing to the DB, other threads are unable to write. But instead of either waiting or reattempting the insertion, the thread that failed to write just moves on to the next row.
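The behavior I'd expect is a retry with backoff rather than silently dropping the row. Here's a minimal sketch of what I mean, in Python; `write_row` and `DatabaseLockedError` are hypothetical stand-ins for whatever insert call and lock error your storage layer actually uses:

```python
import time

class DatabaseLockedError(Exception):
    """Stand-in for the 'database is locked' error raised while another thread writes."""

def insert_with_retry(write_row, row, max_attempts=5, base_delay=0.05):
    """Attempt an insert; back off and retry instead of skipping the row.

    write_row: hypothetical callable performing the actual insert, raising
    DatabaseLockedError when another thread holds the write lock.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            write_row(row)
            return True
        except DatabaseLockedError:
            if attempt == max_attempts:
                raise  # give up only after exhausting all retries
            # exponential backoff: 0.05s, 0.1s, 0.2s, ...
            time.sleep(base_delay * (2 ** (attempt - 1)))
    return False
```

If the underlying store is SQLite, a busy timeout on the connection achieves a similar effect at the driver level instead of in application code.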
Unfortunately I’ve also run into this problem using just one replicator during long-running replications.
It’s rare, but when it happens, clients that have completed replication and should have fully updated documents instead end up with randomly outdated ones. The problem is that the replicator never retries the failed documents. I would expect it to retry them on the next replication attempt. Currently the only way I’ve found to force a retry of a document is to create and save another revision of the failed document on the server.
I do have another thread that writes to the database while replication is happening. I assumed it was the same problem you described above: multiple threads can’t write to the database file at the same time?