Overriding the 10 buckets limit (or not)

First, I’ve seen some posts here stating that raising the limit is a bad idea, so let me describe my scenario so you can help me find an alternative.

I have a retail store application and I’m considering having one bucket per store. That way I can configure multi-master replication between a central node and N local (intranet) nodes for each specific bucket.

Usually N=1: local node <-> central node for bucket ‘client1’.
Sometimes, N>1: N local nodes <-> central node for bucket ‘client1’.

With most buckets the RAM consumption is expected to be very low. I need to test it further and size the resources accordingly, of course.

Finally, is this a scenario that may use a higher bucket limit? Should I consider an alternative?

I’ve seen an Enterprise feature in the editions comparison, “Filtering replication with cross data-center replication”, but I don’t know if it would change anything here.

To use more than 10 buckets, is the command below the right way to raise the limit (I saw it in another post)?
curl -X POST -u user:pass -d maxBucketCount=15 http://localhost:8091/internalSettings


Hi, renato,

We do have a limit on how many buckets we recommend per cluster. Currently, the default is 10.

Once the number of buckets exceeds that limit, it may introduce performance difficulties, such as slower rebalances. But this all depends on the size of the buckets and the number of nodes in the cluster. We are working on improving this in future releases.

For your use case, I think you are most interested in XDCR filtering:

If that’s the case, you don’t need to create one bucket per region. Instead, you can prefix your document keys with the region name and then create an XDCR filter based on that key prefix.
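A minimal sketch of the key-prefix idea, assuming a hypothetical naming scheme where every key starts with its store id (e.g. `store42::order::1001`). XDCR key filtering takes a regular expression, so a per-store filter is just an anchored prefix match:

```shell
# Hypothetical key scheme: "<storeId>::<docType>::<id>"
# An anchored regex selects only documents belonging to one store:
FILTER='^store42::'

# Demonstrate which keys the filter would match:
printf '%s\n' 'store42::order::1001' 'store07::order::2002' 'store42::client::7' \
  | grep -E "$FILTER"
```

You would then supply that regex as the filter expression when creating the replication for each store’s target cluster (in the Enterprise Edition UI or REST API); the exact key scheme and cluster names above are illustrative, not prescribed.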



Thanks for the answer.

I figured XDCR filtering could be an option, but it’s only available in the Enterprise Edition, right?

Yes, that’s true.