CouchbaseCluster doesn't run in a namespace

I’ve got a strange issue: I’m trying to move my Couchbase cluster into a namespace, but adding the namespace metadata to the Couchbase resources stops the cluster from running, and no pods are created.

Is it possible to run a Couchbase cluster in a namespace? And is there any way to get extra logs? (Describing the cluster shows no events.)

I’m using the Autonomous Operator v2.0.

apiVersion: v1
kind: Namespace
metadata:
    name: myproject
    labels:
        name: myproject
---
apiVersion: v1
kind: Secret
metadata:
    namespace: myproject
    name: myproject-auth
type: Opaque
data:
    username: QWRtaW5pc3RyYXRvcg==  # Administrator
    password: cGFzc3dvcmQ=          # password
---
apiVersion: couchbase.com/v2
kind: CouchbaseBucket
metadata:
    namespace: myproject
    name: mybucket
spec:
    memoryQuota: 128Mi
    ioPriority: low
    evictionPolicy: valueOnly
    conflictResolution: lww
    compressionMode: passive
---
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
    namespace: myproject
    name: myproject
spec:
    image: couchbase/server:6.6.0
    security:
        adminSecret: myproject-auth
    networking:
        exposeAdminConsole: true
    buckets:
        managed: true
    servers:
    - size: 3
      name: all_services
      services:
      - data
      - index
      - query
      - search
      - eventing
      - analytics

Output with namespace

kubectl get CouchbaseCluster
myproject                                 26m

Output without namespace

 kubectl get CouchbaseCluster
myproject   6.6.0            Creating          24s

Do you have an instance of the Operator running in the myproject namespace? The Operator is namespace scoped so it doesn’t have access to all resources on the platform!
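If you’re not sure, a quick way to check is something like the following (the deployment name assumes the defaults generated by cbopcfg):

```shell
# Check whether an Operator deployment exists and is ready in the
# myproject namespace; "No resources found" means the Operator was
# never installed into this namespace.
kubectl -n myproject get deployment couchbase-operator
```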

If it is running, then you can get logs with kubectl -n myproject logs -f deployment/couchbase-operator, and that should tell you what's going wrong.

Thanks for your reply! I realised that I’d missed the --namespace flag for the cbopcfg tool …

Now I’m getting null pointer errors when I check the logs… any ideas where to start looking?

{"level":"info","ts":1600264411.7974472,"logger":"main","msg":"couchbase-operator","version":"2.0.2 (build 121)","revision":"release"}
{"level":"info","ts":1600264411.797769,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1600264411.879145,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1600264411.8824651,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1600264411.908651,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"couchbase-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1600264412.0091164,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"couchbase-controller"}
{"level":"info","ts":1600264412.1093469,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"couchbase-controller","worker count":4}
{"level":"info","ts":1600264431.4483192,"logger":"cluster","msg":"Watching new cluster","cluster":"inform/inform"}
{"level":"info","ts":1600264431.451022,"logger":"cluster","msg":"Janitor starting","cluster":"inform/inform"}
{"level":"info","ts":1600264431.4589272,"logger":"cluster","msg":"Couchbase client starting","cluster":"inform/inform"}
{"level":"info","ts":1600264431.537096,"logger":"cluster","msg":"UI service created","cluster":"inform/inform","name":"inform-ui"}
{"level":"info","ts":1600264431.5606093,"logger":"cluster","msg":"Cluster does not exist so the operator is attempting to create it","cluster":"inform/inform"}
{"level":"info","ts":1600264431.710984,"logger":"cluster","msg":"Creating pod","cluster":"inform/inform","name":"inform-0000","image":"couchbase/server:6.5.0"}
{"level":"info","ts":1600264442.4236314,"logger":"cluster","msg":"Operator added member","cluster":"inform/inform","name":"inform-0000"}
{"level":"info","ts":1600264442.4261036,"logger":"cluster","msg":"Initial pod creating","cluster":"inform/inform"}
E0916 13:54:07.530429       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

Excellent error :smiley: That means it’s trying to read the auto-failover on-disk-issues time period, but that field hasn’t been filled in. Ensure the dynamic admission controller is correctly installed and it will fill the required fields in for you.
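For reference, a quick way to verify the DAC is installed and serving is something like this (resource names assume the cbopcfg defaults):

```shell
# Verify the dynamic admission controller (DAC) deployment is available.
kubectl -n myproject get deployment couchbase-operator-admission

# The mutating webhook is what fills in defaulted fields such as the
# auto-failover timings, so check it is registered too.
kubectl get mutatingwebhookconfigurations | grep couchbase
```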

awesome thanks so much!

The issue was that I’d tried to include the required YAML files for the CRD and Operator within my project, so I could bring everything up with a single kubectl create -f ./kubernetes_folder. I think this meant there wasn’t enough time for the Operator to get up and running before the Couchbase configuration started trying to create pods…

Since you need the namespace to exist before running the Operator, is there no way to include these YAMLs within your own project, so that the namespace is created first, followed by the Operator, then the rest of my project resources?

Easily! When you kubectl a directory, the resources get created in whatever order the files are listed on the inode, or whatever getdents() returns… however, when you have them all in a single file:

apiVersion: v1
kind: ServiceAccount
kind: Role
... etc

then it will create the resources in the order listed!
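A minimal sketch of that single-file layout, assuming the resource names from earlier in this thread — documents are applied top to bottom, so the Namespace exists before anything placed inside it:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myproject
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: myproject
  name: couchbase-operator
---
# ... Role, RoleBinding, the Operator and DAC Deployments,
# and finally the CouchbaseCluster itself, in that order.
```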

Ah sorry, yes, that’s correct. I have a simple _base.yaml that creates the namespace (underscored so this file is applied first), followed by the crd.yaml and a cfg.yaml generated from cbopcfg, and then my Couchbase cluster, which is pretty much verbatim from the first post of this thread except for the namespace, which is now in the base YAML.

What I find is that if I create the base YAML first, then the CRD and cfg YAMLs, and wait for the Operator to be in a running state before creating the rest of the assets, everything works as expected.

However, if I just try to run them all via create -f .\kubernetes_folder, then I receive the null pointer issue…

The output of the create seems to confirm that, when creating via the folder, the resources are being created in the order I’d expect:

 kubectl create -f .\kubernetes
namespace/inform created
secret/github created
configmap/inform-base created
serviceaccount/couchbase-operator-admission created
secret/couchbase-operator-admission created
deployment.apps/couchbase-operator-admission created
service/couchbase-operator-admission created
serviceaccount/couchbase-operator created
deployment.apps/couchbase-operator created
service/couchbase-operator created
deployment.apps/api created
secret/inform-auth created
secret/sync-gateway created
deployment.apps/sync-gateway created
service/sync-gateway-service created

Here’s where it gets tricky… when you install the DAC there is a small amount of time before the deployment becomes ready; you can block on this with something like kubectl wait --for=condition=Available deployment/couchbase-operator-admission. The Kubernetes API will happily accept CouchbaseCluster resources even while the DAC isn’t yet available and servicing requests, at which point the resource hasn’t had defaults applied, and you get the crash.
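Putting that together, a sketch of a bring-up script might look like this (the file names, namespace, and deployment names are assumptions based on this thread, not canonical):

```shell
#!/bin/sh
set -e

# 1. Namespace and CRDs first, so everything else has somewhere to live.
kubectl create -f _base.yaml
kubectl create -f crd.yaml

# 2. Operator and DAC, as generated by cbopcfg.
kubectl create -f cfg.yaml

# 3. Block until the DAC and Operator are actually serving; until then
#    the API server accepts CouchbaseCluster resources without defaults.
kubectl -n inform wait --for=condition=Available \
    --timeout=120s deployment/couchbase-operator-admission
kubectl -n inform wait --for=condition=Available \
    --timeout=120s deployment/couchbase-operator

# 4. Only now create the cluster and the rest of the project resources.
kubectl create -f couchbase-cluster.yaml
```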

absolutely, thank you for your help and patience :slight_smile:


“It’s Kubernetes, everything will sort itself out” they say!

The good news is that, perhaps in the Operator 2.2 time frame, we’ll be able to upgrade to CRD V1, which has defaulting built into the types themselves, so you won’t need the DAC to do that. Things will get better next year, I promise :smiley:
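For the curious, v1 CRD defaulting works roughly like this — an illustrative schema fragment with a made-up field name, not the actual Couchbase CRD:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
# ...
spec:
  versions:
  - name: v2
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              # With apiextensions.k8s.io/v1 the API server applies this
              # default itself -- no admission webhook needed.
              exampleTimePeriod:
                type: string
                default: 120s
```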
