CouchbaseCluster doesn’t run in a namespace

I’ve got a strange issue: I’m trying to move my Couchbase cluster into a namespace, but adding the namespace metadata to the Couchbase resources stops the cluster from running, and no pods are created.

Is it possible to run a Couchbase cluster in a namespace? Is there anywhere I can get extra logs? (Describing the cluster shows no events.)

I’m using the Autonomous Operator v2.0.

apiVersion: v1
kind: Namespace
metadata:
    name: myproject
    labels: 
        name: myproject
---
apiVersion: v1
kind: Secret
metadata:
    namespace: myproject
    name: myproject-auth
type: Opaque
data:
    username: QWRtaW5pc3RyYXRvcg==  # Administrator
    password: cGFzc3dvcmQ=          # password
---
apiVersion: couchbase.com/v2
kind: CouchbaseBucket
metadata:
    namespace: myproject
    name: mybucket  
spec:
    memoryQuota: 128Mi
    ioPriority: low
    evictionPolicy: valueOnly
    conflictResolution: lww
    compressionMode: passive
---
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
    namespace: myproject
    name: myproject
spec:
    image: couchbase/server:6.6.0
    security:
        adminSecret: myproject-auth
    networking:
        exposeAdminConsole: true
        adminConsoleServices:
        - data
    buckets:
        managed: true
    servers:
    - size: 3  
      name: all_services
      services:
      - data
      - index
      - query
      - search
      - eventing
      - analytics

Output with namespace

kubectl get CouchbaseCluster
NAME     VERSION   SIZE   STATUS   UUID   AGE
myproject                                 26m

Output without namespace

 kubectl get CouchbaseCluster
NAME        VERSION   SIZE   STATUS     UUID   AGE
myproject   6.6.0            Creating          24s

Do you have an instance of the Operator running in the myproject namespace? The Operator is namespace-scoped, so it doesn’t have access to all resources on the platform! https://docs.couchbase.com/operator/2.0/concept-operator.html

If it is running, then you can get logs with kubectl -n myproject logs -f deployment/couchbase-operator and that should tell you what’s going wrong.
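
A quick way to check whether it’s actually there (assuming the default names that the cbopcfg tool generates, i.e. a couchbase-operator deployment in that namespace):

kubectl -n myproject get deployment couchbase-operator
kubectl -n myproject get pods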

Thanks for your reply! I realised that I’d missed the --namespace flag for the cbopcfg tool …

Now I’m getting null pointer errors when I check the logs… any ideas where to start looking?

{"level":"info","ts":1600264411.7974472,"logger":"main","msg":"couchbase-operator","version":"2.0.2 (build 121)","revision":"release"}
{"level":"info","ts":1600264411.797769,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1600264411.879145,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1600264411.8824651,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1600264411.908651,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"couchbase-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1600264412.0091164,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"couchbase-controller"}
{"level":"info","ts":1600264412.1093469,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"couchbase-controller","worker count":4}
{"level":"info","ts":1600264431.4483192,"logger":"cluster","msg":"Watching new cluster","cluster":"inform/inform"}
{"level":"info","ts":1600264431.451022,"logger":"cluster","msg":"Janitor starting","cluster":"inform/inform"}
{"level":"info","ts":1600264431.4589272,"logger":"cluster","msg":"Couchbase client starting","cluster":"inform/inform"}
{"level":"info","ts":1600264431.537096,"logger":"cluster","msg":"UI service created","cluster":"inform/inform","name":"inform-ui"}
{"level":"info","ts":1600264431.5606093,"logger":"cluster","msg":"Cluster does not exist so the operator is attempting to create it","cluster":"inform/inform"}
{"level":"info","ts":1600264431.710984,"logger":"cluster","msg":"Creating pod","cluster":"inform/inform","name":"inform-0000","image":"couchbase/server:6.5.0"}
{"level":"info","ts":1600264442.4236314,"logger":"cluster","msg":"Operator added member","cluster":"inform/inform","name":"inform-0000"}
{"level":"info","ts":1600264442.4261036,"logger":"cluster","msg":"Initial pod creating","cluster":"inform/inform"}
E0916 13:54:07.530429       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/1.13.3/go/src/runtime/panic.go:679
/usr/local/go/1.13.3/go/src/runtime/panic.go:199
/usr/local/go/1.13.3/go/src/runtime/signal_unix.go:394
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/pkg/util/k8sutil/k8sutil.go:447
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/pkg/cluster/reconcile.go:664
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:296
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:209
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:142
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/pkg/controller/controller.go:71
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/1.13.3/go/src/runtime/asm_amd64.s:1357

Excellent error :smiley: That means it’s trying to read the auto-failover on-disk-issues time period, but it hasn’t been filled in. Ensure the dynamic admission controller (DAC) is correctly installed and it will fill in the required fields for you.
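
If you want to double-check the DAC itself, something along these lines should work (the couchbase-operator-admission names below are the defaults from the generated config; swap in whichever namespace you deployed it to):

kubectl -n myproject get deployment couchbase-operator-admission
kubectl -n myproject logs deployment/couchbase-operator-admission
kubectl get mutatingwebhookconfigurations
kubectl get validatingwebhookconfigurations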

Awesome, thanks so much!

The issue was that I’d tried to include the required YAML files for the CRDs and the Operator within my project so I could bring everything up with a single kubectl create -f ./kubernetes_folder, and I think this meant there wasn’t enough time for the Operator to get up and running before the Couchbase configuration started trying to create pods…

Since you need the namespace to exist before running the Operator, is there no way to include these YAMLs within your own project so the namespace can be created first, followed by the Operator, and then the rest of my project resources?

Easily. When you kubectl a directory, the resources get created in whatever order the files are listed on the inode, or whatever getdents() returns… however, when you have them all in a single file:

apiVersion: v1
kind: ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
---
... etc

then it will create the resources in the order listed!
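
So one option, as a rough sketch with made-up file names, is to stitch your separate files into a single multi-document file so the ordering is explicit:

for f in namespace.yaml operator.yaml cluster.yaml; do echo "---"; cat "$f"; done > everything.yaml
kubectl create -f everything.yaml

The echo "---" between files keeps each one as its own YAML document.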

Ah sorry, yes that’s correct. I have a simple _base.yaml that creates the namespace (underscored so this file is created first), followed by crd.yaml and a cfg.yaml that was generated by cbopcfg, and then my Couchbase cluster, which is pretty much verbatim from the first post of this thread except for the namespace, which is now in the base YAML.

What I find is that if I create the base YAML first, then the CRD and cfg YAMLs, and wait for the Operator to be in a running state before creating the rest of the assets, everything works as expected.

However, if I just try to run them all via create -f .\kubernetes_folder, then I receive the null pointer issue…


The output of the create seems to confirm that, when creating via the folder, the resources are being created in the order I’d expect:

 kubectl create -f .\kubernetes
namespace/inform created
secret/github created
configmap/inform-base created
storageclass.storage.k8s.io/couchbase-lazy-bound created
customresourcedefinition.apiextensions.k8s.io/couchbasebuckets.couchbase.com created
customresourcedefinition.apiextensions.k8s.io/couchbasememcachedbuckets.couchbase.com created
customresourcedefinition.apiextensions.k8s.io/couchbaseusers.couchbase.com created
customresourcedefinition.apiextensions.k8s.io/couchbasegroups.couchbase.com created
customresourcedefinition.apiextensions.k8s.io/couchbaserolebindings.couchbase.com created
customresourcedefinition.apiextensions.k8s.io/couchbaseclusters.couchbase.com created
customresourcedefinition.apiextensions.k8s.io/couchbasebackups.couchbase.com created
customresourcedefinition.apiextensions.k8s.io/couchbasebackuprestores.couchbase.com created
serviceaccount/couchbase-operator-admission created
clusterrole.rbac.authorization.k8s.io/couchbase-operator-admission created
clusterrolebinding.rbac.authorization.k8s.io/couchbase-operator-admission created
secret/couchbase-operator-admission created
deployment.apps/couchbase-operator-admission created
service/couchbase-operator-admission created
mutatingwebhookconfiguration.admissionregistration.k8s.io/couchbase-operator-admission created
validatingwebhookconfiguration.admissionregistration.k8s.io/couchbase-operator-admission created
serviceaccount/couchbase-operator created
role.rbac.authorization.k8s.io/couchbase-operator created
rolebinding.rbac.authorization.k8s.io/couchbase-operator created
deployment.apps/couchbase-operator created
service/couchbase-operator created
deployment.apps/api created
secret/inform-auth created
couchbasebucket.couchbase.com/construct created
couchbasecluster.couchbase.com/inform created
secret/sync-gateway created
deployment.apps/sync-gateway created
service/sync-gateway-service created

Here’s where it gets tricky… when you install the DAC there is a small window before the deployment becomes ready; you can block on it with something like kubectl wait --for=condition=Available deployment/couchbase-operator-admission. The Kubernetes API will happily accept CouchbaseCluster resources even when the DAC isn’t available and servicing requests yet, in which case the resource hasn’t had defaults applied, and you get the crash.
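
So a deploy script along these lines closes the gap (file names are the ones from your project plus a hypothetical cluster.yaml, and it assumes the DAC and the Operator both live in the inform namespace):

kubectl create -f _base.yaml       # namespace first
kubectl create -f crd.yaml
kubectl create -f cfg.yaml         # DAC + Operator
kubectl -n inform wait --for=condition=Available deployment/couchbase-operator-admission --timeout=120s
kubectl -n inform wait --for=condition=Available deployment/couchbase-operator --timeout=120s
kubectl create -f cluster.yaml     # CouchbaseCluster, buckets, etc.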

Absolutely, thank you for your help and patience :slight_smile:

“It’s Kubernetes, everything will sort itself out” they say!

The good news is that, perhaps in the Operator 2.2 time frame, we’ll be able to upgrade to CRD v1, which has defaulting built into the types themselves, so you won’t need the DAC for that. Things will get better next year, I promise :smiley:
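
For the curious, this is the sort of thing apiextensions.k8s.io/v1 allows (a generic sketch, not the actual Couchbase CRD): defaults declared directly in the schema, which the API server applies at admission time without any webhook:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              timePeriod:
                type: string
                default: "120s"  # hypothetical field; the API server fills this in for you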
