XDCR in K8s-CAO based deployment via F5 VIPs

Hello,

The setup looks something like below:

  • 2 sites. Each site has a k8s cluster.
  • Each k8s cluster is running a Couchbase cluster.
  • There is an F5 load balancer on each site with 2 VIPs allocated that point to the Couchbase cluster on the corresponding site.
  • Couchbase cluster node (pod) IPs and F5 VIPs are IPv6.
  • CAO version 2.2. Couchbase server version 6.6.x.

The ask is to configure XDCR between the 2 Couchbase clusters, but the site-1 cluster should communicate with the F5 LB VIP on site-2 and vice versa. The F5 LB will in turn do port forwarding and send the traffic on to the corresponding Couchbase cluster.

Can somebody confirm whether this is supported? And if yes, has anybody tried it? I would guess this is common across the industry, though I'm not very sure.

It would be really helpful if somebody could share the steps, if available, or the config changes required in the YAMLs.

Thanks

Putting a load-balanced reverse proxy in front of Couchbase is not supported; in fact, any kind of DNAT is troublesome. This all stems from the fact that memcached uses client-side sharding (so it cannot be load balanced), which requires the cluster to advertise its topology to the client (a NAT means the client can't see the internal addresses).
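To illustrate why a single VIP breaks this, here is a simplified sketch of client-side sharding in the Couchbase style (CRC32-based vBucket mapping). The hostnames and the 3-node map are made-up placeholders; the point is that the client picks the target node itself from the advertised topology, so every node address must be directly reachable:

```python
import zlib

NUM_VBUCKETS = 1024  # Couchbase default vBucket count

def vbucket_for_key(key: bytes) -> int:
    # Hash the key with CRC32 and map it to one of the vBuckets;
    # each vBucket lives on exactly one data node.
    crc = zlib.crc32(key) & 0xFFFFFFFF
    return ((crc >> 16) & 0x7FFF) % NUM_VBUCKETS

# The client holds a vBucket -> node map that the cluster advertises.
# Hypothetical 3-node cluster; with a NAT/VIP in front, these advertised
# internal addresses are exactly what the client cannot reach.
vbucket_map = {vb: f"cb-node-{vb % 3}.example.net" for vb in range(NUM_VBUCKETS)}

def node_for_key(key: bytes) -> str:
    # The client routes each operation directly to the owning node --
    # there is no server-side router a load balancer could stand in for.
    return vbucket_map[vbucket_for_key(key)]
```

If a load balancer picks a node at random instead, most operations land on a node that does not own the key's vBucket.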

Therefore, to make this work, you'd either need a flat network (no NAT or load balancing) and some way to provide DDNS to the client side, or one NAT gateway per Couchbase pod, some DDNS magic on the external side, and the Operator configured to modify the topology advertisement to use those addresses. Both options are covered in our docs here: Couchbase Networking | Couchbase Docs.
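As a rough sketch of the second option (one gateway per pod), the CouchbaseCluster resource can expose per-pod services and have the Operator rewrite the advertised topology to external DNS names. The cluster name and domain below are placeholder assumptions; check the networking docs linked above for the exact fields supported by your CAO version:

```yaml
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-site1                      # hypothetical cluster name
spec:
  networking:
    # Expose per-pod services so each pod gets its own external
    # address (one gateway per pod), rather than one shared VIP.
    exposedFeatures:
    - xdcr
    exposedFeatureServiceType: LoadBalancer
    dns:
      # DDNS-managed zone resolving each pod's external address; the
      # Operator advertises these names in the cluster topology.
      domain: site1.example.com       # assumption: your DDNS zone
```

The remote cluster reference on the other site then points at those per-pod external names rather than at a single F5 VIP.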

I do appreciate this isn't industry standard, or what is expected: you should be able to use any old reverse proxy, with TLS termination and any of the other things supported by ingresses, ingress gateways, etc. I do have a plan to remedy this, but that is some way off.

I hope the background I've provided helps you understand the networking a little better, and what you have to work with.

@simon.murray, sincere thanks. We'll mull it over and get back to you if we need more help.

Thanks again!