Docker cloud sync gateway admin port access on localhost only

I’ve just re-set up an old Sync Gateway / Couchbase Docker Cloud stack that I thought used to work fine. It’s built with a Dockerfile that uses couchbase/sync-gateway:1.3.1-community as its base.

When it’s up, I would then run

ssh d1 -L 127.0.0.1:4985:127.0.0.1:4985 -L 127.0.0.1:4984:127.0.0.1:4984

from my Mac to get access to the admin port to test things out. However, for some reason I can’t access the admin port anymore without adding these keys explicitly to the sync config:

  "adminInterface": "<ADMIN_INTERFACE>",
  "interface": "<INTERFACE>",

Note: my entry script replaces these placeholders with the env vars from the stackfile below.
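For context, the rendered config ends up looking roughly like this (a sketch based on the env vars in the stackfile below; my CORS settings and the rest of the config are omitted):

  {
    "adminInterface": ":4985",
    "interface": ":4984",
    "databases": {
      "dd": {
        "server": "http://couchbase:8091",
        "bucket": "default"
      }
    }
  }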

sync:
  autoredeploy: true
  deployment_strategy: every_node
  environment:
    - 'ADMIN_INTERFACE=:4985'
    - 'CORS_LOGIN_ORIGIN_1=http://localhost:9000'
    - 'CORS_LOGIN_ORIGIN_2=http://localhost:3000'
    - 'CORS_LOGIN_ORIGIN_3=http://localhost:8080'
    - 'CORS_ORIGIN_1=http://localhost:9000'
    - 'CORS_ORIGIN_2=http://localhost:3000'
    - 'CORS_ORIGIN_3=http://localhost:8080'
    - DATABASE_BUCKET=default
    - DATABASE_NAME=dd
    - 'DATABASE_SERVER=http://couchbase:8091'
    - 'INTERFACE=:4984'
  image: 'xxxxx/sync:latest'
  links:
    - couchbase
  ports:
    - '4984:4984'
    - '4985:4985'
  restart: on-failure
  tags:
    - database
  volumes:
    - '/tmp:/tmp/config'

This works, but the problem is I can also access the admin port from the public IP. To be honest, it’s been a while since I looked at this, so I’m not sure how I had it set up before. I may have just used firewall rules to block it, but I wondered if anyone could advise. I think my original setup was along the lines of App Server (Public IP) > Load Balancer (Public IP) > Sync Gateway (Private Network) > Couchbase (Private Network), with the load balancer talking to the sync gateways via Docker Cloud links or something similar.

Any advice, resource links?

If you use :4985 for the admin interface, it will listen on all interfaces (as opposed to just the loopback interface, or a second ethernet card dedicated to “private” access).
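The difference is just the bind address. A quick illustration with plain Python sockets (nothing Sync Gateway specific; I let the OS pick the ports):

```python
import socket

# An empty host, like Sync Gateway's ":4985", binds to all interfaces (0.0.0.0).
all_if = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_if.bind(("", 0))
print(all_if.getsockname())  # ('0.0.0.0', <some port>)

# "127.0.0.1:4985" would bind to the loopback interface only.
loop = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loop.bind(("127.0.0.1", 0))
print(loop.getsockname())  # ('127.0.0.1', <some port>)
```

So with ":4985", anything that can reach the machine (or, in your case, anything forwarded into the container) can hit the admin API.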

Normally that wouldn’t be recommended, but in the Docker case that alone still doesn’t make the container port accessible from the host, because container ports are not exposed to the host by default.

However your line:

ports:
- '4985:4985'

sets up port publishing from the host to the container. Any traffic arriving on port 4985 of the host gets forwarded to port 4985 in the container, making it accessible from outside.

With the setup you currently have, the most direct way to protect port 4985 is a firewall rule.
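One caveat: plain Docker and docker-compose also let you bind a published port to a specific host IP (I’m not sure whether the Docker Cloud stackfile syntax accepts this form). Publishing on the loopback address would keep your ssh tunnel working while hiding 4985 from the public IP:

ports:
- '127.0.0.1:4985:4985'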

@traun thanks for the information. Sure, that makes sense about the ports:

ports:
- '4985:4985'

What I found was that without that mapping I couldn’t access port 4985 at all after ssh’ing into the droplet (node):

ssh d1 -L 127.0.0.1:4985:127.0.0.1:4985 -L 127.0.0.1:4984:127.0.0.1:4984

Is there a better way to set this up or maybe I’m missing something?

For now I’ve added a firewall rule to block 4985. I don’t remember having to do that a year ago when I first tried a similar setup; I’m sure 4985 wasn’t reachable from the internet, only when I ssh’d in.