There could be an issue with your security groups in EC2 if you’ve deployed this application in AWS as outlined in the blog. Especially since the operations only work against a single container - which may happen to be on the same node as your application.
Another swarm-specific issue is that your application itself needs to be started as a service within swarm. Otherwise it will not be on the same network as your Couchbase containers. If you run a ‘docker inspect’ on your app and the Couchbase containers, you should see them all on the same overlay network.
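For example, assuming the app container is named `my_app` and a Couchbase container is named `couchbase` (both names are placeholders - substitute your own), a quick check might look like:

```shell
# List the networks each container is attached to; both commands should
# print the same overlay network name (e.g. "my_overlay").
docker inspect --format '{{range $net, $v := .NetworkSettings.Networks}}{{$net}} {{end}}' my_app
docker inspect --format '{{range $net, $v := .NetworkSettings.Networks}}{{$net}} {{end}}' couchbase
```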
For the setup, I have 3 Couchbase servers running separately on 3 nodes of my swarm-mode cluster. The Docker swarm cluster is hosted on VMs in our data center. The application client performs operations from my local PC over a VPN connection. On the swarm cluster, we have a reverse proxy to handle TCP and HTTP, so the application client does not need to be on the same overlay network as the Couchbase servers.
For the exposed container ports, I did open 8091, 8092, 11210, and 11211. I assumed the other ports are not necessary because I don’t have SSL or XDCR at the moment.
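As a sketch, a Couchbase service publishing just those non-SSL ports might be created like this (the service name, network name, and image tag are assumptions - adjust to your deployment):

```shell
# Publish only the non-SSL ports; the SSL ports (18091, 18092, 11207)
# and XDCR-specific ports are omitted since SSL/XDCR are not in use.
docker service create --name couchbase \
  --network my_overlay \
  -p 8091:8091 -p 8092:8092 -p 11210:11210 -p 11211:11211 \
  couchbase:community
```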
Oh, I see what you meant now. I was thinking that I could just provide an entrypoint to one Couchbase server in the cluster, and Couchbase would use some internal mechanism to fetch the key, possibly from other servers, and then send the response back to the client.
So as you said, I have to provide my client with all the Couchbase servers in the cluster to make sure it can request any key? If that’s true, then indeed I have to put my client inside the swarm cluster with the Couchbase servers.
In your case it’s best to run the application inside of the swarm, because unfortunately the issue will still happen even if you give your client all the servers. From Swarm’s documentation:
When a user or process connects to a service, any worker node running a service task may respond.
Say you have 3 swarm nodes with IP addresses 10.0.0.1, 10.0.0.2, and 10.0.0.3, and the key you want is inside a container on the 3rd node (10.0.0.3). When you access 10.0.0.1:11210, the mesh networking allows any node in the swarm exposing port 11210 to respond, which may not be that exact host. The SDK may attempt to smartly find the IP of the actual container with the key, but at that point it will be using IPs on swarm’s overlay network, which will time out since you are outside of that network.
I’m not aware of a way around this from a swarm perspective, but having your app on the overlay network should resolve issues here.
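A minimal sketch of doing that - running the client app as a swarm service attached to the same overlay network (the network, service, and image names here are placeholders):

```shell
# Create the overlay network once, then attach the app service to it so
# the SDK can reach the Couchbase container IPs directly.
docker network create --driver overlay my_overlay
docker service create --name my-app --network my_overlay my-app-image
```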
I’m dealing with the same issue here. @tommie wouldn’t it be possible to have the SDK talk to a Sync Gateway that faces the open world behind a proxy, with the Couchbase servers and the gateway on a shared separate network?