Couchbase Cluster - Expose External IP


Is there any way to use external IPs for a cluster set up by the Couchbase Operator?
It would be very nice to make the cluster usable by applications deployed outside the Kubernetes cluster.

We are currently testing the Couchbase Operator on Azure AKS.

Thank you!

Hi, I appreciate your time in trying out our software and would love to hear more about your use case. Are you trying to connect clients over the public Internet or via peered private networks?

We are trying to connect clients over the “public” internet. There are two different use cases for us at the moment:

  1. Connect a separate app, deployed as an Azure Web/API App, to the Couchbase cluster
  2. XDCR between two clusters in two different geographical regions.

I’ll address #2 first as it’s the easier of the two. We are releasing 1.0.0 very soon, which will have support for XDCR across a site-to-site VPN. The essential requirement is connectivity from one Kubernetes cluster to the other: the clusters need non-overlapping address ranges, the necessary routes must be installed via the VPN, and finally a Kubernetes node in one region must be able to ping a node in the other region. There will be a new Couchbase Operator cluster option called exposedFeatures; add “xdcr” to it and you will be able to establish the XDCR connection via a node port IP address. AKS should have VNet support to create the VPN.
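As a sketch, enabling this on the CouchbaseCluster resource would look something like the following (exposedFeatures is the real option named above; the resource name and surrounding fields are illustrative):

```yaml
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  # Expose the ports XDCR needs outside the pod network,
  # via one NodePort Service per Couchbase pod.
  exposedFeatures:
    - xdcr
```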

Addressing #1: there is no easy way to perform this integration at present, although our clients are being updated to take advantage of the same mechanism that allows XDCR to function (this should begin to be released in a few months’ time). Again, this would require a VPN from the App Service to the Kubernetes network, and the use of a node port IP address.

Usually we’d expect applications to be deployed in Kubernetes alongside the Couchbase cluster, making use of the built-in service discovery provided by Kubernetes’ network layer, hence my interest in your use case.

Feel free to discuss anything further.

I also need to connect external SDKs to a Couchbase cluster in Kubernetes.
In my situation, AWS Lambdas should be able to connect to the Couchbase cluster in Kubernetes.
Given this use case, I’m sure external SDKs should be supported.

Hi Robin,

Thanks for your feedback. There are a couple of deficiencies in Kubernetes that make this usage pattern less than perfect, which you should be aware of.

We would need to expose ports externally to the internet with a Service per Couchbase node. As there is no provision for control over external DNS, we’d have to expose the ports via IP only. During the life cycle of the cluster those IP addresses would change, and eventually your Lambda functions would cease to work.
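To make this concrete, exposing one Couchbase pod this way amounts to a per-pod NodePort Service along these lines (the Service name, selector label, and port list are illustrative, not what the Operator literally generates):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cb-example-0000-exposed      # hypothetical: one such Service per pod
spec:
  type: NodePort                     # Kubernetes assigns an external port per entry
  selector:
    couchbase_node: cb-example-0000  # illustrative label selecting exactly one pod
  ports:
    - name: admin
      port: 8091                     # REST/admin port
    - name: data
      port: 11210                    # key-value data service port
```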

Additionally, you need to consider TLS, as this traffic runs over the public internet. Having no DNS makes this somewhat tricky, as you’d have to supply us with a CA certificate and key so we could create server certificates with the relevant IP subject alternative names.

I suspect all these problems go away if we were granted control over a DNS server, so we could manage A and SRV records via DDNS. A quick Google search suggests you are not the first with this requirement:

So I believe this pattern will be possible at some point in the future, but not immediately. We’re still left with the problem that the external-dns controller probably won’t support aggregating individual Services into a single SRV record, so your Couchbase connection string will be prone to failure as names change. I’ll have a word with the developers and see whether it’s realistic.


Does this issue affect use of the Couchbase Kafka connector?

We are running Couchbase in Kubernetes using the Autonomous Operator. While testing this using Docker for Windows, I have had no success connecting an application from the host or from a Docker container to the Couchbase cluster in Kubernetes. Now I understand why. (It would be worth documenting this limitation in the Autonomous Operator docs; it’s cost me hours.)

While this is not an issue for our own part of the application, as that will run within the Kubernetes cluster, we also need to run a Kafka service. Until Confluent releases their autonomous operator I was planning on running Kafka in a Docker container and connecting that to the Couchbase cluster in Kubernetes.

Will the Kafka connector be able to connect to the Couchbase cluster in Kubernetes? Does the Kafka connector use the SDK?



Hi Chris,

So eight months is a long time in the world of Kubernetes, and we’ve made a lot of progress that I will share here, as it’s relevant to the thread in general. 1.2.0 is out in around a month. What we have done is reuse the multi-network configuration in Couchbase Server that enables XDCR via IP, and add support for DNS names. Coupled with public addressability as a result of optionally using LoadBalancer rather than NodePort Services, we are now able to provide connectivity over the public internet, with full end-to-end TLS encryption.
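In 1.2.0 terms, the relevant cluster spec fields look roughly like this (the field names beyond exposedFeatures, and the example domain, are my sketch of the 1.2-era CRD; verify them against the released documentation):

```yaml
spec:
  exposedFeatures:
    - xdcr
    - client
  # LoadBalancer gives each exposed Service a public IP instead of a node port
  exposedFeatureServiceType: LoadBalancer
  dns:
    # External zone; pods get stable public names under it, and the TLS
    # certificate is issued against *.cluster.example.com
    domain: cluster.example.com
```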

As regards SDKs, they should be able to connect using this mechanism from anywhere in the universe. I’m no expert on the Kafka connector, but I can see it’s written in Java, so it very likely uses the Java SDK under the hood.

I’ll revisit the networking documentation before 1.2.0 and ensure it’s very clear on the networking options; it’s never the easiest topic to describe :smile:

In the meantime, if it’s holding up your testing, you can add client to the spec.exposedFeatures attribute. It’s up to you to ensure there is pure L3 routing (absolutely no DNAT; the IP address the client connects to must be the same as the one Couchbase Server advertises) between your Kafka container and the Kubernetes underlay network; a site-to-site VPN is usually the easiest to configure. Then, to connect the Kafka connector to Couchbase Server, follow the guide here
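Concretely, the interim change is just this addition to the cluster spec (the rest of the spec stays unchanged):

```yaml
spec:
  exposedFeatures:
    - client   # exposes the client-facing ports via per-pod NodePort Services
```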

Any more help you need just give me a holler!


How can I use an Ingress instead of the NodePort or LoadBalancer? Is it also possible to use Let’s Encrypt for TLS?

I am using the following helm chart

You cannot use an Ingress for Couchbase, as Ingress only supports ports 80 and 443. Our ports are 8091, 8092, and so on. We also expose up to a maximum of six different ports, and these cannot be multiplexed over a single port 443 (the data service is a binary protocol, for starters!), so a Service is the only option.

You can use anything you want to provide TLS certificates; the only constraint is that the certificates have the correct wildcard X.509 subject alternative names, so clients can match the host name they are connecting to against the hosts the certificate is valid for.


Thanks for the quick reply. How can it work with Let’s Encrypt?

Simply use Let’s Encrypt to create a certificate and key pair, then install them as the normal server secret
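Assuming the Operator’s documented server-secret format (a Secret whose data keys are chain.pem for the certificate chain and pkey.key for the private key; check your Operator version’s TLS documentation for the exact secret name and keys), the Let’s Encrypt pair would be installed as:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: couchbase-server-tls       # referenced from the cluster's TLS configuration
type: Opaque
data:
  chain.pem: LS0tLS1CRUdJTiBD...   # base64 of the certificate chain (truncated placeholder)
  pkey.key: LS0tLS1CRUdJTiBQ...    # base64 of the private key (truncated placeholder)
```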

As the root CA is public, you’ll probably need to copy it from your browser and install it as the operator secret, so that the error checking we have in place does not fail.


Thanks for the feedback. Ideally it would be done automatically, like Traefik does, so that Couchbase automatically renews the certs.

Hello, I am running Couchbase in Kubernetes and it is now stuck at 40% while rebalancing.

There are no documents in the Couchbase servers. I just created them in Kubernetes, the cluster is parked, and Couchbase is unable to rebalance. It is stuck rebalancing.

Totally disappointing

Hi, we are using NodePort for our XDCR setup.
The new requirement is that an application server in a different AKS cluster needs to connect to Couchbase using the Java SDK.
How will that work?

I can use cbq and the REST API remotely; everything works with the host IP and NodePort.

What changes do I need to make in my application program so it can connect to the host IP of a different AKS cluster with NodePorts?

In my case it is not public; it is from one AKS cluster to another, and both have their own IPs, which are reachable via host IPs and NodePorts.

XDCR is running fine and stable.
Now the application server needs to connect to the DB server in a different AKS cluster.

You connect in exactly the same way as XDCR, e.g. the client connects to http://x.y.z.w:3xxxx where x.y.z.w is a node IP and 3xxxx is the node port.

Now, the problems with using node ports: if a node goes away or changes address, the clients will break. If the pod behind the node port goes away, the clients will break. And as you are using IP addresses and NodePorts, you cannot encrypt the traffic. Be aware of these limitations.

The correct way to connect will be described in the upcoming Operator 2.0 documentation. The short version is that your clients talk to a DNS server that forwards the DNS zone %namespace%.svc.cluster.local to the remote Kubernetes DNS server where the cluster lives. The remote cluster must be using flat networking (no overlays). The client can then connect to couchbase://%clustername%.%namespace%.svc (and must have at least a cluster.local search domain configured for its stub resolver). This gives you high availability, service discovery, and the option of using TLS.
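As an illustration, if the clients’ DNS server happened to be CoreDNS, forwarding the zone could look like this Corefile fragment (the namespace default and the remote DNS address 10.1.2.3 are placeholders for your own values):

```
# Forward lookups for the remote cluster's service zone
default.svc.cluster.local:53 {
    forward . 10.1.2.3   # IP of the remote cluster's kube-dns/CoreDNS service
}
```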

Hi, I have Couchbase deployed in OpenShift and need to access it from an application that is running outside of OpenShift. This is all on an internal network (on-prem), not the internet.
What is the recommended approach for this? (There is a range of suggestions I have found here:

The only approach available is the one noted above. To connect a client, use the following instructions

Thanks for your reply, Simon. Will that approach work where access is required to a range of Couchbase ports (i.e. via the Java client library)? The XDCR example you gave only requires one port, I believe.

Actually it needs the admin, data and index ports, so that’s half the job :smiley: Yes, any client service will work (just add client to the spec.exposedFeatures array to enable the correct ports).
