I’m looking at deploying 100 cluster nodes, each with 32 GB of RAM, plus another 100 for failover; all of this is duplicated in another data center using XDCR.
Does anyone have a similarly sized cluster? Do you usually configure each node with less RAM and a higher number of instances?
Have you encountered any issues running a large cluster (100+ nodes)?
First of all, I invite you to look at the following links:
- Documentation : Sizing : http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-bestpractic…
- Couchbase in production : http://www.couchbase.com/presentations/couchbase-server-24x7-production
These will help you design your deployment topology.
I would be very interested if you could share the volume of data/users and the types of operations; that could explain why you “want” this type of cluster. Remember that data, replicas, and indexes are distributed across the different nodes of the cluster, so you need a very large working set to justify 100 nodes.
Also, I see “JVM” in the title of the post; why is that? (Couchbase Server does not use Java.)
Thank you very much for your response and the links; they were very useful.
I do in fact have a very large data volume, about 4 TB, and it all needs to reside in RAM for fast read operations. Since Couchbase Server is not Java, I’m assuming it has no problem with nodes that have very large RAM (no GC or the like). So, assuming a node with 128 GB of RAM, I need 8 nodes per 1 TB of data, which for 4 TB means 32 nodes, plus the same again for replicas: 64 nodes in total.
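As a back-of-envelope sketch of that arithmetic (the `nodes_needed` helper and the assumption of exactly one replica copy are illustrative; real sizing would also need headroom for metadata and failover capacity, per the sizing docs linked above):

```python
import math

def nodes_needed(data_gb, ram_per_node_gb, replicas=1):
    # Each replica is a full extra copy of the data set.
    copies = 1 + replicas
    total_gb = data_gb * copies
    return math.ceil(total_gb / ram_per_node_gb)

# 4 TB of data, 128 GB RAM per node, 1 replica -> 64 nodes
print(nodes_needed(4096, 128))  # -> 64
```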
(PS: I had a question on the Java SDK; I posted it in the SDK forum, but got no replies. I wonder if you can help!)