What are the guidelines for required bandwidth and latency for XDCR between sites?
The Couchbase 3.0 manual contains sizing formulas for RAM and disk, but not for XDCR.
Hi @penacho, both latency and bandwidth determine how fast we can replicate data and how closely in sync the two ends of the replication stay. However, there are no strict guidelines. The recommendation is to lower latency as much as possible (though you are ultimately bound by the laws of physics) and, at a minimum, to have enough bandwidth to carry all the changes that happen on all sources to all destinations.
thanks
-cihan
Right, I understand that, and that’s the reason for my question: how much is needed?
For RAM and disk sizing, formulas are provided (in the ‘sizing guidelines’, if I’m not mistaken) to calculate the actual numbers needed (including overhead and such). What I’m looking for are similar formulas to calculate the required network bandwidth based on the number of records, update rate, etc.
Total bandwidth can be estimated roughly as the rate of mutations multiplied by the size of the data (key + document) you are manipulating.
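As a rough worked example, here is a minimal back-of-the-envelope sketch in Python. All the input numbers, and in particular the per-mutation protocol overhead, are hypothetical placeholders rather than official figures:

```python
# Rough XDCR bandwidth estimate: mutations/sec * bytes per mutation.
# All inputs below are hypothetical examples; the per-mutation protocol
# overhead is an assumed placeholder, not an official figure.

mutations_per_sec = 10_000      # inserts + updates + deletes across all source buckets
avg_key_bytes = 40              # average key size
avg_doc_bytes = 2_048           # average document size
protocol_overhead_bytes = 100   # assumed per-mutation overhead (metadata, framing)
num_destinations = 1            # each destination cluster receives its own copy

bytes_per_mutation = avg_key_bytes + avg_doc_bytes + protocol_overhead_bytes
required_bps = mutations_per_sec * bytes_per_mutation * num_destinations * 8

print(f"Required sustained bandwidth: {required_bps / 1e6:.1f} Mbit/s")
# -> Required sustained bandwidth: 175.0 Mbit/s
```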
Latency isn’t something you can control, but as I said before, the faster the better. There is one recommendation if you have to work with high latencies (say, >600 ms): we recommend you dial up the XDCR Max Replications per Bucket setting. This increases parallelization per node and can help reduce replication lag under high latencies.
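If you do need to dial that setting up, it can also be changed over the REST API rather than the UI. A minimal sketch, assuming the 3.0-era /settings/replications endpoint and its maxConcurrentReps parameter (verify both against your release; the host, credentials, and replication id below are placeholders):

```python
# Minimal sketch: raise "XDCR Max Replications per Bucket" for one
# replication via the REST API. Endpoint and parameter name are taken
# from the 3.0-era docs; verify them against your release.
from urllib.parse import quote

import requests

host = "http://localhost:8091"

# A replication id has the form <remote-cluster-uuid>/<source-bucket>/<target-bucket>;
# the embedded slashes must be percent-encoded in the URL path. Placeholder value:
replication_id = quote("<remote-cluster-uuid>/source_bucket/dest_bucket", safe="")

resp = requests.post(
    f"{host}/settings/replications/{replication_id}",
    auth=("Administrator", "password"),  # placeholder credentials
    data={"maxConcurrentReps": 32},      # raise from the default to increase parallelism
)
resp.raise_for_status()
```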
Hope this helps.
For inserts and modifications of records in the store we can estimate the size, although it would be nice to know the overhead beyond key + document.
How about deletes and gets (i.e. updating the last-access timestamp) of records? I assume key size + overhead, but how big is the overhead?