Disappearing buckets in Kubernetes using Couchbase Operator

While trying to provision a CouchbaseCluster resource on Kubernetes using the k8s-operator:0.8.1-beta, I ran into a few issues.

  1. Only one bucket is created automatically when multiple bucket definitions are present in the resource definition file.
  2. New buckets created via the command line (I assume the same happens via the UI, but I have not tried) are deleted after a while. I assume this is due to some sort of consistency check where the operator makes sure that the cluster reflects the resource definition.

As a result, I have to include all bucket definitions in the CouchbaseCluster YAML file and then run the commands to create each bucket, regardless of whether it has already been created, in order to get everything running in the state I'd expect.

Ideally the fix is that the CouchbaseCluster correctly provisions all buckets specified in the definition, but not having manually added buckets deleted afterwards would be an acceptable fix.

If this has been resolved in the 0.9.0-beta, is there somewhere I can see the release notes, or anything else that could help me follow along with the development of this project?

Hi Joshua,

Thanks for trying out the Beta. I tried to reproduce the issue you’re seeing here by creating a cluster with 2 buckets in the Cluster Spec and they were both created as expected on cluster creation. Could you also share the bucket section of your spec? (Make sure the bucket names are unique)

As for bucket deletion, yes, the operator does 'undo' any manual changes made by users in order to maintain consistency with the cluster spec. There is an option to pause the operator if you don't want this reconciliation to occur, but it sounds like you only resorted to manual changes because the original issue (1) wasn't working.
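For reference, pausing is controlled by the `paused` field that already appears in your spec; flipping it to true stops the operator from reconciling manual changes (a minimal sketch, other fields omitted):

```yaml
# Sketch only: setting paused to true stops the operator's
# reconciliation loop, so manual changes are left in place
# until you set it back to false.
apiVersion: couchbase.database.couchbase.com/v1beta1
kind: CouchbaseCluster
metadata:
  name: "handshake-couchbase-server"
spec:
  paused: true
```

Note that while paused, the operator also won't react to node failures or spec updates, so this is best used only temporarily.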

This is the config I was using. The intent is for it to create two buckets: "data" and "platform".

apiVersion: couchbase.database.couchbase.com/v1beta1
kind: CouchbaseCluster
metadata:
  name: "handshake-couchbase-server"
  namespace: couchbase
spec:
  baseImage: couchbase/server
  version: enterprise-5.1.0
  paused: false
  antiAffinity: true
  authSecret: couchbase-secrets
  exposeAdminConsole: true
  adminConsoleServices:
    - data
    - index
    - query
  cluster:
    dataServiceMemoryQuota: 256
    indexServiceMemoryQuota: 256
    searchServiceMemoryQuota: 256
    indexStorageSetting: plasma
    autoFailoverTimeout: 30
  buckets:
    - name: data
      type: couchbase
      memoryQuota: 128
      replicas: 0
      ioPriority: high
      eviction-policy: full-eviction
      conflictResolution: seqno
      enableFlush: true
      enableIndexReplica: false
    - name: platform
      type: couchbase
      memoryQuota: 128
      replicas: 0
      ioPriority: high
      eviction-policy: fullEviction
      conflictResolution: seqno
      enableFlush: true
      enableIndexReplica: false
  servers:
    - size: 1
      name: master
      services:
        - data
        - index
        - query
      dataPath: /opt/couchbase/var/lib/couchbase/cbdata
      indexPath: /opt/couchbase/var/lib/couchbase/indexdata

I noticed I was inconsistent with the eviction policy; I tried that a number of ways. When full-eviction did not work, I switched to fullEviction (as you would using couchbase-cli) to see if that would work any better. I'm pretty sure that has no effect either way as far as provisioning the bucket.

Any other fixes outside the bucket issue are also welcome :slight_smile:

Thanks, the value there should be 'fullEviction'. Also, you'll need to remove the dash in 'eviction-policy', so that the spec is:
evictionPolicy: fullEviction
Try updating both bucket specs with that key/value pair and it should work.
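Applying both corrections to your config, each bucket entry in the buckets section would look like this (shown here for the "data" bucket; the "platform" entry gets the same key/value change):

```yaml
buckets:
  - name: data
    type: couchbase
    memoryQuota: 128
    replicas: 0
    ioPriority: high
    evictionPolicy: fullEviction   # camelCase key, camelCase value
    conflictResolution: seqno
    enableFlush: true
    enableIndexReplica: false
```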

For reference, we’ve documented all of the parameters here:

You may want to update the example YAML at the top of that page then, which has it as eviction-policy instead of evictionPolicy.

Fixing that seemed to do the trick. Thank you!


Oops, good catch - thanks!