I have been experimenting with the Autonomous Operator as I move my core infrastructure into Kubernetes. Using the Helm provider for Terraform, I am able to create a functional Couchbase cluster; however, as soon as I try to add persistent volumes to the Couchbase server pods, the Operator starts killing pods. The pod gets created, the image gets pulled, and the container starts, then just under two minutes later the Operator kills the pod. The issue does not seem to be with persistent volumes themselves, as I was able to create the cluster when assigning a PV to only the default mount. Trying to add a volume mount for data or index results in a killed pod.
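For reference, the shape of the configuration that triggers the kill is roughly the following. This is an illustrative sketch, not my exact spec — the claim-template name, storage class, and sizes are placeholders, and the field layout follows the Autonomous Operator 1.x `CouchbaseCluster` resource:

```yaml
# Illustrative excerpt of a CouchbaseCluster spec; names and sizes are placeholders
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  servers:
    - name: data-nodes
      size: 3
      services:
        - data
      pod:
        volumeMounts:
          default: couchbase   # works on its own
          data: couchbase      # adding this (or index) gets the pod killed
  volumeClaimTemplates:
    - metadata:
        name: couchbase        # referenced by the mounts above
      spec:
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
```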
The symptoms are very similar to what was discussed in another thread on these forums; however, I am not seeing the “deadline exceeded” error in any logs I can pull from.
I have worked around the issue for the time being by removing Helm from the equation and setting up the Admission Controller, CRD, Operator, and cluster using kubectl. I followed the instructions in the official Autonomous Operator 1.2 documentation (Overview | Couchbase Docs) and now have a Couchbase cluster with persistent data and default volume mounts.
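For anyone following along, the kubectl-based setup was essentially the sequence below. The file names are my assumption of what ships in the 1.2 operator download package — yours may differ slightly, so check the package contents:

```shell
# Deploy the dynamic admission controller, which validates CouchbaseCluster specs
kubectl create -f admission.yaml

# Register the CouchbaseCluster custom resource definition
kubectl create -f crd.yaml

# Deploy the Operator itself (RBAC role plus the deployment)
kubectl create -f operator-role.yaml
kubectl create -f operator-deployment.yaml

# Finally, create the cluster with the persistent volume mounts
# declared up front in the CouchbaseCluster spec
kubectl create -f couchbase-cluster.yaml
```

The key difference from my Helm attempt is that the persistent volume mounts were present in the cluster spec from the very first create, never added afterward.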
Unfortunately, try as I might, I have not been able to identify where in Helm the failure is occurring. I would be happy to provide logs and code snippets to try and resolve the Helm issue, if someone could point me in the right direction.
Hi, persistent volumes cannot be added to an already-deployed cluster, which sounds like what you’re trying to do; the admission controller should be rejecting that change. The best practice is to specify the persistent volume overrides for the cluster at the time the chart is first installed.
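As a sketch, those overrides would go in a values file passed at install time. The layout below assumes the chart exposes the cluster spec under a `couchbaseCluster` values key — that key name and the field names are illustrative, so check your chart’s values.yaml for the actual structure:

```yaml
# values-pv.yaml -- illustrative override file; key names depend on the chart
couchbaseCluster:
  volumeClaimTemplates:
    - metadata:
        name: couchbase        # referenced by the mounts below
      spec:
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
  servers:
    - name: data-nodes
      size: 3
      services:
        - data
      pod:
        volumeMounts:
          default: couchbase   # default mount
          data: couchbase      # data mount, declared at install time
```

Then install with `helm install --values values-pv.yaml <release> <chart>` rather than applying the volumes later via `helm upgrade`, so the volumes exist from the moment the cluster is first created.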
I haven’t experimented with the Helm provider for Terraform, but if you can reproduce this issue using just `helm install` and `helm upgrade` commands, please share them and I can investigate further.