Issue with Persisting Data on Couchbase Autonomous Operator 1.2

I am trying to create a Couchbase Autonomous Operator 1.2 deployment on DigitalOcean.
I am unable to persist data on the StorageClass. Whenever I try to update or redeploy my Couchbase cluster,
my bucket data gets wiped out. It creates a new PVC every time I redeploy my cluster, instead of attaching the old PVC.
My CouchbaseCluster file:

apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  securityContext:
    fsGroup: 1000
  baseImage: couchbase/server
  version: enterprise-6.0.1
  authSecret: cb-example-auth
  exposeAdminConsole: true
  antiAffinity: true
  adminConsoleServices:
    - data
  cluster:
    dataServiceMemoryQuota: 2048
    indexServiceMemoryQuota: 512
    searchServiceMemoryQuota: 512
    eventingServiceMemoryQuota: 256
    indexStorageSetting: memory_optimized
    autoFailoverTimeout: 120
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 120
    autoFailoverServerGroup: false
    disableBucketManagement: true
  buckets:
    - name: default
      type: couchbase
      memoryQuota: 1024
      replicas: 1
      ioPriority: low
      evictionPolicy: fullEviction
      conflictResolution: seqno
      enableFlush: true
      enableIndexReplica: false
  servers:
    - size: 1
      name: data_services
      services:
      - data
      - query
      - search
      pod:
        volumeMounts:
          default: couchbase
          data: couchbase
    - size: 1
      name: index_services
      services:
        - index
      pod:
        volumeMounts:
          default: couchbase
          index: couchbase
  volumeClaimTemplates:
    - metadata:
        name: couchbase
      spec:
        storageClassName: couchbase-storage
        resources:
          requests:
            storage: 10Gi

I’m not as much of an expert here as I’d like to be, but I’m not sure what you mean by “redeploy my cluster”. If you delete the CouchbaseCluster resource, I believe we would expect all of the associated PVs to be destroyed. At the moment there is no ‘pause’-like functionality to stop a cluster and redeploy it later; that is something we have on the roadmap.
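
One way to narrow down which of those is happening is to list the claims before and after the operation, e.g. with kubectl get pvc. Assuming the operator applies its usual couchbase_cluster label to the claims it creates, something like kubectl get pvc -l couchbase_cluster=cb-example should show whether the old PVCs are actually being deleted or just left behind unbound.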

Of course, for an update we would not expect PVs to be “wiped out”. What do you see in the logs of the Couchbase Operator pod when a change ‘wipes out’ your PVs?
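
Assuming the operator was installed under its default deployment name, kubectl logs deployment/couchbase-operator (with -n <namespace> if it is not running in the default namespace) should show what the operator decided to do with each pod and its volumes during the update.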

Note that we do not officially support DigitalOcean at the moment. Since we stick to the Kubernetes APIs it may work there, but we only officially support the platforms we test against.
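
One more thing worth checking on the storage side is the reclaim policy of your couchbase-storage StorageClass. With the default reclaimPolicy: Delete, the underlying DigitalOcean block storage volume is destroyed as soon as its PVC goes away, so there is nothing left to re-attach. A minimal sketch of a class that retains volumes instead, assuming you provision through the DigitalOcean CSI driver (dobs.csi.digitalocean.com):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: couchbase-storage
# DigitalOcean block storage CSI driver
provisioner: dobs.csi.digitalocean.com
# Retain keeps the volume (and its data) when the PVC is deleted,
# instead of the default Delete behaviour
reclaimPolicy: Retain
allowVolumeExpansion: true

Note that a retained PV is not automatically re-bound to a new claim; it goes to the Released state and has to be re-bound or cleaned up manually, but the data is not lost.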