We are investigating the Couchbase Autonomous Operator.
We are wondering whether it is already supported to connect clients via externalIPs.
We already use this approach for other non-HTTP applications in our OpenShift 3.11 cluster; please have a look at the OpenShift docs:
So we do not need the DNS part, because we manage the externalIPs and DNS ourselves outside the Kubernetes cluster.
Is there any other limitation that would forbid service type ClusterIP with an externalIP?
There should be no limitations. Operator 1.2.x supports OpenShift 3.11, and it creates external-IP-based services as described in the docs: https://docs.couchbase.com/operator/1.2/network-requirements.html#public-connectivity-ddns. All you have to do is copy the DNS host name (it’s annotated on the services as external-dns.alpha.kubernetes.io/hostname) and the IP address of the service, and create an A record.
Let me know if you need further details!
Thanks for your reply.
How can an externalIP be specified on the CouchbaseCluster CR?
And the service type should be ClusterIP, but I only see NodePort or LoadBalancer in the docs.
Here is our mcrouter service for our current memcache solution:
- name: memcache
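In other words, something along these lines (a minimal sketch with placeholder names and IPs, not our actual manifest):

```yaml
# Sketch of a ClusterIP service with externalIPs; all names and
# addresses below are placeholders, not our real configuration.
apiVersion: v1
kind: Service
metadata:
  name: mcrouter
spec:
  type: ClusterIP
  selector:
    app: mcrouter
  ports:
    - name: memcache
      port: 11211
      targetPort: 11211
      protocol: TCP
  externalIPs:
    - 192.0.2.10   # reachable on at least one node, managed outside the SDN
```

Clients outside the cluster connect to 192.0.2.10:11211, and kube-proxy’s iptables rules forward the traffic to the service’s clusterIP.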
Is the source code of the Couchbase operator publicly available?
I’m not sure I understand. The ClusterIP service type is a virtual IP (VIP) visible only within the cluster nodes. It has no way of surfacing an externalIP on the public internet. To expose an external IP, you must either use a LoadBalancer service type or an Ingress resource.
For various reasons we have to use a LoadBalancer service. When used, this service type uses nginx, HAProxy, or similar to provision a reverse proxy and associate it with a public IP address. The load-balancer service controller is responsible for IP address management (IPAM), not you, but I’m sure some implementations will let you state the IP explicitly (beware of collisions, though!).
Does this help a bit more?
There is another way to reach ClusterIP services from outside the cluster: an externalIP on a Kubernetes service, as described in the docs; please have a look at:
You just have to make the externalIPs (IPs that are reachable outside of the SDN of the Kubernetes cluster) available on at least one node in the Kubernetes cluster. Kubernetes then builds iptables rules that route traffic to the clusterIP of the service.
We use keepalived via ipfha from OpenShift:
So basically the externalIP is always up on at least one Kubernetes node, and if there is a ClusterIP service with spec.externalIPs, Kubernetes routes traffic from the externalIP to the clusterIP of the service.
In that case, no, it’s not possible. I’d be cautious about adding support, as it would break if you scaled the cluster up (you could run out of IP addresses if you aren’t careful), and you’d actually need up to twice as many IP addresses as the cluster size due to upgrades and other DR situations. That said, I’ll mention it to product management.
Out of interest, is there a reason why you cannot use the supported load-balancer method?
There is no need to change anything (upgrade or downgrade) on the externalIPs when the Couchbase cluster or the Kubernetes cluster is scaled or updated.
The externalIP is bound to the Kubernetes service, and the load balancing is done by the SDN of the Kubernetes cluster.
It does not matter how many Couchbase pods are running.
If the Couchbase cluster scales up, all pods will be available via the Kubernetes service, which is reachable from outside via the externalIP.
This kind of setup is required for Kubernetes clusters that:
- are on premises
- have IP / DNS managed outside the cluster
- do not have a load-balancer cloud-provider feature
Maybe this Medium post helps to make things clear: https://medium.com/swlh/kubernetes-external-ip-service-type-5e5e9ad62fcd
It would be great to see support for this kind of setup.
It should be just a small change: support serviceType: ClusterIP and add a network.externalIPs field to the CRD and the service.
Is the operator source code publicly available? Maybe we could contribute via a PR?
Nope, fully proprietary I’m afraid. Oh, hold up, I’ve had a thought… One of the things coming in 2.1 is the ability to specify a template for per-pod services. You can set whatever you want in it, in particular the type to ClusterIP. You would then have to write a controller to spot services coming up and add an external IP. The model is that if we are told to control a field (via the CR), then we can do what we want with it. Any other field we ignore, so we wouldn’t touch external IP addresses.
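As a rough sketch of how that could look (the field names here are purely illustrative — 2.1 isn’t released, so don’t take this as the final API):

```yaml
# Illustrative only: the actual 2.1 field names may differ.
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  networking:
    exposedFeatureServiceTemplate:   # hypothetical per-pod service template
      spec:
        type: ClusterIP              # a field you tell the operator to control
```

Your controller would then watch for the per-pod services being created and patch spec.externalIPs onto them; since the template doesn’t mention that field, the operator would leave it alone.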
Does this sound more interesting?
Too bad that it is not fully open source, but company rules…
So the per-pod service solution could be interesting, but with that we do not get load balancing via the service.
Also, we would need to scale the externalIPs when the Couchbase cluster is scaled, like you described before.
The simplest solution in my opinion is still:
Support serviceType: ClusterIP and add a network.externalIPs field to the CRD and the service.
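To illustrate the proposal (purely hypothetical — neither serviceType: ClusterIP nor network.externalIPs exists in the current CRD):

```yaml
# Hypothetical CouchbaseCluster snippet showing the proposed fields.
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  network:
    serviceType: ClusterIP   # proposed: allow ClusterIP here
    externalIPs:             # proposed: copied onto the generated service
      - 192.0.2.10
```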
It would be great if you could discuss the request with your team.
Thanks, Simon, for the fast and great help.
Couchbase isn’t load balanced; it uses client-side sharding, so you have to use one external service per pod. That’s why the networking gets interesting on Kubernetes.
So it is not possible to use it as a memcached drop-in and replicate all data?
If the data is replicated and Couchbase is able to fetch it from its members when it is not present on the node the ClusterIP service forwarded the request to, then ClusterIP with externalIP should be good to go…
It can be used as a drop-in replacement. However, memcached buckets are not replicated, just like standard memcached, and they still undergo the same client-side hashing that a vanilla memcached client would perform.
If you want replication and failover, you can use a Couchbase or Couchbase Ephemeral bucket. Regardless of what you choose, all clients must hash and connect directly to the node on which the data resides. There is no load balancing.
Okay thanks for this clarification.