Namespace support?

Is it possible to run the operator in one k8s namespace and a cluster in another namespace? When I try this by running `cbopctl create -f cluster.yaml` (with `metadata.namespace` defined in my YAML), the command returns ‘couchbaseclusters “cb-demo” created’, but the cb-demo cluster never appears. The operator logs show nothing.
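For reference, a minimal sketch of the kind of manifest I’m applying (the names, namespace, and spec values here are illustrative placeholders, not my exact config):

```yaml
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-demo
  namespace: demo-apps        # deliberately different from the operator's namespace
spec:
  baseImage: couchbase/server
  version: enterprise-5.5.2
  ...
```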

Is there a way to increase the log level of the operator?

The operator and the cluster have to be run within the same namespace.

You can get additional logging from the operator by adding the `-log-level` arg to the operator spec, i.e.:

```yaml
args:
  - -create-crd
  - -enable-upgrades=false
  - -log-level=debug
```

Same problem here. I added `- -log-level=debug`, but the log level does not increase.

The only thing I see in the logs when I try to create the cluster in a different namespace is:

```
E0910 09:20:47.143936 1 streamwatcher.go:109] Unable to decode an event from the watch stream: stream error: stream ID 93; INTERNAL_ERROR
```

When you change the log level you will also need to redeploy the operator.
Also, if you’ve been using an earlier version of the operator, you’ll
need to move to the 1.0 image: `image: couchbase/operator:1.0.0`
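For example, the container section of the operator Deployment would end up looking something like this (a sketch only; your container name and exact flags may differ):

```yaml
spec:
  containers:
    - name: couchbase-operator
      image: couchbase/operator:1.0.0   # updated 1.0 image
      args:
        - -create-crd
        - -enable-upgrades=false
        - -log-level=debug
```

After editing, reapply the Deployment (e.g. `kubectl apply -f operator.yaml`) so the new args take effect.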

I’m not sure about the ‘stream error’ here, but if you can open a separate ticket
with some additional logging via cbopinfo we can look into it:
https://docs.couchbase.com/operator/1.0/cbopinfo.html

As @tommie mentioned before, the operator and cluster resources are required to run in the same namespace.
I’m curious, what is the nature of this limitation? We are thinking about a solution for multi-tenancy customers and want to separate the operator and the deployments for target clusters via namespaces.

Do you have any plans to support the multi-tenancy case?

Well, the problem is that as soon as you want to cross namespace boundaries you are suddenly using ClusterRoleBindings, and naturally the more security-conscious users don’t want a piece of software that can list and delete pods and services wherever it wants.

Technically it’s probably not that hard to make it work on a global basis, but there are other drawbacks. With just a single controller you cannot do staged upgrades, so if it goes wrong somehow (not that it will :wink:) all Couchbase instances could be affected at once. I don’t want to force people to do things the way we prescribe, but there’s a greater good to consider, and a method behind the madness that, I’ll admit, should be documented better.

@simon.murray that totally makes sense,

lemme explain the solution that I’m thinking about:

  • We have a cluster that serves different teams (in one org); typically we separate Team A’s and Team B’s workloads using namespaces (let’s leave out the Dev/Stage/Prod story for simplicity).

  • Team A and Team B each need a Couchbase cluster (with their own config), and at the same time they don’t want to deal with operating the Autonomous Operator themselves.

My idea is:

  • to deploy ONE instance of the operator into a specific namespace (let’s call it couchbase-fabric); Team A and B will then consume the CRs in their own projects (meaning they will deploy Couchbase clusters into their own namespaces).

Does this mean you recommend deploying an operator for each namespace (hmm), or must the Couchbase clusters be shared resources, out of Team A’s and B’s control, managed by a Team X who operates the operator?

PS. Phew, I hope you can follow what I’m talking about… :exploding_head:

On your head be it…

Well, this is unofficial, undocumented and unsupported (anyone reading this read the above comments!!), but when you install the operator you can change the Role to a ClusterRole and the RoleBinding to a ClusterRoleBinding, then specify the namespace environment variable in the Deployment as “”. This should cause the operator to watch for CouchbaseClusters in all namespaces and create resources in those namespaces associated with the clusters. Just tested this with the 2.0 branch and it does work, cannot guarantee it for 1.x, but the same assumptions hold.
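A rough sketch of what that RBAC change looks like (same caveats as above: unofficial and unsupported; the resource and service-account names below follow a typical install and may differ in yours, and the `WATCH_NAMESPACE` variable name is an assumption based on common operator conventions, so check your actual Deployment for the real name):

```yaml
# The RoleBinding becomes a ClusterRoleBinding pointing at a ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: couchbase-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: couchbase-operator
subjects:
  - kind: ServiceAccount
    name: couchbase-operator
    namespace: couchbase-fabric   # the namespace the operator runs in
---
# In the operator Deployment: an empty namespace value means
# "watch for CouchbaseClusters in all namespaces"
env:
  - name: WATCH_NAMESPACE   # assumed name; verify against your install
    value: ""
```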

Thank you so much @simon.murray, we’re ready to play… I will for sure share our experience.

@ssimmons I’ve tried to apply your recommendations about the role/role bindings for the 1.2.0 and 1.2.x operator, and it looks like it doesn’t work.

Maybe I’m doing something wrong, but nothing happens in the operator logs (in debug mode) when I apply a CR in a namespace that doesn’t match the operator’s…

I deploy the operator via the Helm chart (Helm 3) from your official repo.

```
❯ h list -n couchbase-fabric
NAME                    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
couchbase-fabric        couchbase-fabric        1               2020-02-04 16:38:20.767921549 +0800 +08 deployed        couchbase-operator-0.1.2        1.2
```

Currently, my roles/role bindings are:

```
❯ h fullstatus couchbase-fabric -n couchbase-fabric | grep -B1 role
NAME                                                                                    AGE
clusterrole.rbac.authorization.k8s.io/couchbase-fabric-couchbase-admission-controller   19h
clusterrole.rbac.authorization.k8s.io/couchbase-fabric-couchbase-operator               19h
NAME                                                                                           AGE
clusterrolebinding.rbac.authorization.k8s.io/couchbase-fabric-couchbase-admission-controller   19h
NAME                                                                               AGE
clusterrolebinding.rbac.authorization.k8s.io/couchbase-fabric-couchbase-operator   19h
```

Second, you mentioned that 2.x is a work in progress, but I cannot find any public information about the 2.x version of the operator. Could you guide me on how to participate in the 2.x program?

That’s a shame. In all honesty, the underlying operator library in 1.x is completely different from the one in 2.0, so it’s not too surprising.

There is no 2.0 program; however, 2.0 beta images should appear on public Docker Hub in the next few months. In the meantime, the only thing I can suggest is to deploy an operator per namespace.