I am trying to figure out the best solution for my setup. I have set up 3 Couchbase Server (CBS) nodes, 4 Sync Gateway (SGW) nodes and a load balancer in front of them. This raises some questions.
This article http://blog.couchbase.com/couchbase-101-q-and-a
says that “it’s important to actually NOT put a load balancer between the application servers and Couchbase.”
while this one: http://developer.couchbase.com/documentation/server/4.1/developer-guide/connection-advanced.html
says that specifying multiple nodes is considered best practice. Now I am confused: I don't want to hardcode anything anywhere except a load balancer!
In the tests I've done, my LB (nginx) does skip an offline SGW node as expected, but when a CBS node goes down, every SGW that was connected to it stops responding too.
Also, some queries can't be sent to SGW; they have to go directly to a CBS node. Since those requests bypass the LB entirely, they simply fail if that particular node goes down.
My only idea is to have nginx load-balance BOTH SGW and CBS traffic under a single domain (let's call it my-lb.com) and route all requests as follows:
client -> my-lb.com:4985 -> random SGW node:4985
client -> my-lb.com:8091 -> random CBS node:8091
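In nginx terms, my idea would look roughly like the sketch below. This is untested, and sgw1..sgw4 and cbs1..cbs3 are placeholder hostnames for my nodes:

```nginx
# Hypothetical sketch: proxy both SGW and CBS traffic through one nginx.
# sgw1..sgw4 and cbs1..cbs3 are placeholder hostnames.
http {
    upstream sync_gateways {
        server sgw1:4985;
        server sgw2:4985;
        server sgw3:4985;
        server sgw4:4985;
    }

    upstream couchbase_servers {
        server cbs1:8091;
        server cbs2:8091;
        server cbs3:8091;
    }

    # my-lb.com:4985 -> some SGW node
    server {
        listen 4985;
        location / {
            proxy_pass http://sync_gateways;
        }
    }

    # my-lb.com:8091 -> some CBS node
    server {
        listen 8091;
        location / {
            proxy_pass http://couchbase_servers;
        }
    }
}
```

nginx would then skip an unresponsive node in either upstream group instead of letting requests fail.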
Additionally, each SGW's config would point to my-lb.com:8091, so instead of dying as soon as "their" assigned server node goes down, they would go through my-lb.com and be routed to another working server node.
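Concretely, the Sync Gateway config would name the LB instead of any individual CBS node, something like this (mydb and mybucket are placeholders for my actual database and bucket names):

```json
{
  "databases": {
    "mydb": {
      "server": "http://my-lb.com:8091",
      "bucket": "mybucket"
    }
  }
}
```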
In other words, as I said before, I don't want to hardcode anything but the load balancer, unless there's a better approach.
This seems to work in theory, but what is your recommendation?