Cannot create cluster with persistent volumes via Helm

I’ve been trying to create persistent storage as described in the documentation, but it never succeeds, most of the time failing with the error shown below about resources being too old. Is there something obvious I’m missing? If I omit anything to do with persistent volumes, the cluster is created successfully.

I’m using the Kubernetes cluster provided by Docker for Windows, but I’ve also tried Minikube’s, and both produce the same unwanted result.

couchbaseOperator:
  commandArgs:
    pod-create-timeout: 60m
install:
  syncGateway: true
cluster:
  image: couchbase/server:6.6.0
  name: inform
  security:
    password: password
  securityContext:
    fsGroup: 1000
  servers:
    default:
      pod:
        volumeMounts:
          default: inform-couchbase-data-pvc
          data: inform-couchbase-data-pvc
  volumeClaimTemplates:
  - metadata:
      name: inform-couchbase-data-pvc
    spec:
      storageClassName: hostpath
      resources:
        requests:
          storage: 1Gi

> kubectl describe sc hostpath
Name:            hostpath
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"hostpath"},"provisioner":"docker.io/hostpath","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           docker.io/hostpath
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

> kubectl logs inform-couchbase-operator-56f86568cd-v5hwj -f -n inform
{"level":"info","ts":1606490153.1173308,"logger":"main","msg":"couchbase-operator","version":"2.0.2 (build 121)","revision":"release"}
{"level":"info","ts":1606490153.1182618,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1606490153.1637716,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1606490153.1673813,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1606490153.1969547,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"couchbase-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1606490153.2980573,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"couchbase-controller"}
{"level":"info","ts":1606490153.398477,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"couchbase-controller","worker count":4}
{"level":"info","ts":1606490169.4152517,"logger":"cluster","msg":"Watching new cluster","cluster":"inform/inform"}
{"level":"info","ts":1606490169.4155257,"logger":"cluster","msg":"Janitor starting","cluster":"inform/inform"}
{"level":"info","ts":1606490169.4258163,"logger":"cluster","msg":"Couchbase client starting","cluster":"inform/inform"}
{"level":"info","ts":1606490169.4654825,"logger":"cluster","msg":"UI service created","cluster":"inform/inform","name":"inform-ui"}
{"level":"info","ts":1606490169.4756458,"logger":"cluster","msg":"Cluster does not exist so the operator is attempting to create it","cluster":"inform/inform"}
{"level":"info","ts":1606490169.6203144,"logger":"cluster","msg":"Creating pod","cluster":"inform/inform","name":"inform-0000","image":"couchbase/server:6.6.0"}
{"level":"info","ts":1606490178.2968667,"logger":"cluster","msg":"Operator added member","cluster":"inform/inform","name":"inform-0000"}
{"level":"info","ts":1606490178.3002708,"logger":"cluster","msg":"Initial pod creating","cluster":"inform/inform"}
{"level":"info","ts":1606490183.5233734,"logger":"cluster","msg":"Running","cluster":"inform/inform"}
{"level":"info","ts":1606490183.5900555,"logger":"cluster","msg":"Updating auto compaction settings","cluster":"inform/inform"}
{"level":"info","ts":1606490183.6110203,"logger":"couchbaseutil","msg":"Cluster status","cluster":"inform/inform","balance":"balanced","rebalancing":false}
{"level":"info","ts":1606490183.6111305,"logger":"couchbaseutil","msg":"Node status","cluster":"inform/inform","name":"inform-0000","version":"6.6.0","class":"default","managed":true,"status":"active"}
{"level":"info","ts":1606490186.0447717,"logger":"cluster","msg":"Creating pod","cluster":"inform/inform","name":"inform-0001","image":"couchbase/server:6.6.0"}
{"level":"info","ts":1606490201.338938,"logger":"cluster","msg":"Pod added to cluster","cluster":"inform/inform","name":"inform-0001"}
{"level":"info","ts":1606490201.412151,"logger":"cluster","msg":"Creating pod","cluster":"inform/inform","name":"inform-0002","image":"couchbase/server:6.6.0"}
{"level":"info","ts":1606490214.0819612,"logger":"cluster","msg":"Pod added to cluster","cluster":"inform/inform","name":"inform-0002"}
{"level":"info","ts":1606490222.0687726,"logger":"couchbaseutil","msg":"Rebalancing","cluster":"inform/inform","progress":0}
{"level":"info","ts":1606490226.0734997,"logger":"couchbaseutil","msg":"Rebalancing","cluster":"inform/inform","progress":16.66666666666667}
{"level":"info","ts":1606490230.086189,"logger":"couchbaseutil","msg":"Rebalancing","cluster":"inform/inform","progress":33.33333333333334}
{"level":"info","ts":1606490234.1158228,"logger":"couchbaseutil","msg":"Rebalancing","cluster":"inform/inform","progress":33.33333333333334}
{"level":"info","ts":1606490238.1300144,"logger":"couchbaseutil","msg":"Rebalancing","cluster":"inform/inform","progress":33.33333333333335}
{"level":"info","ts":1606490247.1421285,"logger":"cluster","msg":"Rebalance completed successfully","cluster":"inform/inform"}
{"level":"info","ts":1606490247.2099483,"logger":"cluster","msg":"Reconcile completed","cluster":"inform/inform"}
{"level":"info","ts":1606490247.3654308,"logger":"cluster","msg":"Bucket created","cluster":"inform/inform","name":"construct"}
{"level":"info","ts":1606490249.0947871,"logger":"cluster","msg":"Resource updated","cluster":"inform/inform","diff":""}
{"level":"info","ts":1606490249.5506256,"logger":"couchbaseutil","msg":"Cluster status","cluster":"inform/inform","balance":"balanced","rebalancing":false}
{"level":"info","ts":1606490249.5506997,"logger":"couchbaseutil","msg":"Node status","cluster":"inform/inform","name":"inform-0000","version":"6.6.0","class":"default","managed":true,"status":"warmup"}
{"level":"info","ts":1606490249.550714,"logger":"couchbaseutil","msg":"Node status","cluster":"inform/inform","name":"inform-0001","version":"6.6.0","class":"default","managed":true,"status":"warmup"}
{"level":"info","ts":1606490249.5507398,"logger":"couchbaseutil","msg":"Node status","cluster":"inform/inform","name":"inform-0002","version":"6.6.0","class":"default","managed":true,"status":"warmup"}
{"level":"info","ts":1606490249.5884488,"logger":"cluster","msg":"Pods warming up, skipping","cluster":"inform/inform"}
{"level":"info","ts":1606490249.7874432,"logger":"cluster","msg":"Reconcile completed","cluster":"inform/inform"}
{"level":"info","ts":1606490253.205409,"logger":"couchbaseutil","msg":"Cluster status","cluster":"inform/inform","balance":"balanced","rebalancing":false}
{"level":"info","ts":1606490253.2054794,"logger":"couchbaseutil","msg":"Node status","cluster":"inform/inform","name":"inform-0000","version":"6.6.0","class":"default","managed":true,"status":"warmup"}
{"level":"info","ts":1606490253.2054925,"logger":"couchbaseutil","msg":"Node status","cluster":"inform/inform","name":"inform-0001","version":"6.6.0","class":"default","managed":true,"status":"warmup"}
{"level":"info","ts":1606490253.2096348,"logger":"couchbaseutil","msg":"Node status","cluster":"inform/inform","name":"inform-0002","version":"6.6.0","class":"default","managed":true,"status":"warmup"}
{"level":"info","ts":1606490253.5870886,"logger":"cluster","msg":"Pods warming up, skipping","cluster":"inform/inform"}
{"level":"info","ts":1606490253.788692,"logger":"cluster","msg":"Reconcile completed","cluster":"inform/inform"}
W1127 15:22:59.492753       1 reflector.go:270] github.com/couchbase/couchbase-operator/pkg/client/cache.go:56: watch of *v1.Job ended with: The resourceVersion for the provided watch is too old.

Ignore the docs; they appear to have bugs. Instead, use the source, Luke!

Basically, volumeMounts goes under “default”, not under pod, e.g.

cluster:
  servers:
    default:
      volumeMounts:
        default: inform-couchbase-data-pvc
        data: inform-couchbase-data-pvc
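
So with the values from your first post, the whole file would look something like this (untested, but it just moves the mount up a level in the structure you already have):

couchbaseOperator:
  commandArgs:
    pod-create-timeout: 60m
install:
  syncGateway: true
cluster:
  image: couchbase/server:6.6.0
  name: inform
  security:
    password: password
  securityContext:
    fsGroup: 1000
  servers:
    default:
      volumeMounts:
        default: inform-couchbase-data-pvc
        data: inform-couchbase-data-pvc
  volumeClaimTemplates:
  - metadata:
      name: inform-couchbase-data-pvc
    spec:
      storageClassName: hostpath
      resources:
        requests:
          storage: 1Gi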

Thank you! Unfortunately I think the Sith are still at work, as I’ve made the changes as suggested but am still unable to use any kind of persistent storage…

At first, just making the suggested changes does allow me to bring up the cluster, and I then add a document to it. But that document no longer exists after I uninstall the Couchbase cluster via helm uninstall and then reinstall it. Is this just a misunderstanding on my part, or should I find that same document afterwards?
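
In case I’m testing this the wrong way, here’s roughly how I’m checking what survives the reinstall (assuming the release is also called inform):

> kubectl get pvc,pv -n inform
> helm uninstall inform -n inform
> kubectl get pvc,pv -n inform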

I’ve tried changing the storageClassName to a volumeName, so as to use a persistent volume backed by a local directory, but this doesn’t bring up the Couchbase cluster…

apiVersion: v1
kind: PersistentVolume
metadata:
  name: inform-couchbase-pv
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce  # the volume can be mounted as read-write by a single node
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /C/Users/MS/Documents/dev/k8s/couchbase
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop

with the Couchbase config values updated to:

servers:
  default:
    volumeMounts:
      default: inform-couchbase-data-pvc
      data: inform-couchbase-data-pvc
volumeClaimTemplates:
- metadata:
    name: inform-couchbase-data-pvc
  spec:
    volumeName: inform-couchbase-pv
    resources:
      requests:
        storage: 1Gi
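
Do I maybe also need storageClassName: "" on the claim? As I understand it, a claim with no class gets the default (hostpath) assigned, and then it can never bind to a PV that has no class. Something like:

volumeClaimTemplates:
- metadata:
    name: inform-couchbase-data-pvc
  spec:
    storageClassName: ""  # opt out of the default class so the static PV can bind
    volumeName: inform-couchbase-pv
    resources:
      requests:
        storage: 1Gi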

I’ve also tried using a different storage class, with the same failed result of the Couchbase cluster pods not being brought up…

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: inform-sc
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
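
If I’m reading the Kubernetes docs right, the class presumably has to be referenced from both sides for this to work, i.e. something like:

# on the PersistentVolume
spec:
  storageClassName: inform-sc

# and on the volume claim template
spec:
  storageClassName: inform-sc
  resources:
    requests:
      storage: 1Gi

Any help?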

RIP Dave Prowse, Lord of the Sith… I digress.

So the key to understanding PVCs is this: https://docs.couchbase.com/operator/current/concept-operator.html#couchbase-cluster. All resources created by the Operator are linked to the cluster. Delete the cluster and Kubernetes deletes everything; delete a pod, and the PVC is retained and can be recovered from.
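
You can see that last part for yourself: delete one of the pods and the claim is left behind for the Operator to recover from, e.g.

> kubectl delete pod inform-0001 -n inform
> kubectl get pvc -n inform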

RIP Indeed…

I had a feeling that was the case, but for obvious reasons I’m looking for a way to keep the data stored in a bucket without worrying about it being deleted if the cluster were to crash. Is this supposed to be handled via regular backups and faith in your k8s cluster?

(Update: I just read the last line of your previous link, which alludes to using PVCs for disaster recovery, not storage.)

Provided you don’t go around deleting CouchbaseCluster or PersistentVolumeClaim resources, you should be fine.

If you are justifiably paranoid – this is the cloud, after all – there are solutions like VMware Velero that let you snapshot both the persistent volumes and all the Kubernetes resources, so you can truly come back from the grave: https://docs.couchbase.com/operator/current/tutorial-velero-backup.html.
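
The gist of it, once Velero is installed in the cluster (names here are illustrative; see the tutorial for the real thing):

> velero backup create inform-backup --include-namespaces inform --snapshot-volumes
> velero restore create --from-backup inform-backup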

Awesome, thank you! I don’t plan on deleting them, but I’ve worked with some sketchy “programmers” before…

… the ones who use production as their development environment? I know them well :wink: