Panic: No Auth Provider found for name "oidc"

I’ve been trying to get a Couchbase cluster up and running in a POC Kubernetes environment, and each step seems like a new hurdle. The latest is this error I get when trying to create the cluster with the Couchbase-provided YAML file.

And yes, I’m on Windows, using PowerShell.

Below are the error and the command I’m running. Any ideas why I’m getting this and what it means?

> PS H:\> cbopctl create -f .\couchbase-autonomous-operator-kubernetes_1.1.0-541_windows-amd64\couchbase-cluster.yaml

panic: No Auth Provider found for name "oidc"

goroutine 1 [running]:
github.com/couchbase/couchbase-operator/pkg/client.MustNew(0xc042488e00, 0xc042488e00, 0x0)
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/pkg/client/client.go:21 +0x78
main.(*CreateContext).Run(0xc04240be00)
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/cmd/cbopctl/create.go:65 +0x37e
main.(*CreateContext).Run-fm()
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/cmd/cbopctl/main.go:106 +0x31
github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag.(*Command).parseFlags(0xc042472700, 0xc042555c20, 0xc04204e0a0, 0x2, 0x2)
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag/command.go:220 +0x1514
github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag.(*Command).parse(0xc042472700, 0xc042555c20, 0xc04204e0a0, 0x2, 0x2)
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag/command.go:91 +0x100
github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag.(*Command).parseCommands(0xc0424727e0, 0xc042555c20, 0xc04204e090, 0x3, 0x3)
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag/command.go:103 +0x143
github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag.(*Command).parse(0xc0424727e0, 0xc042555c20, 0xc04204e090, 0x3, 0x3)
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag/command.go:89 +0x135
github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag.(*CLI).Parse(0xc0420ce5a0, 0xc04204e080, 0x4, 0x4)
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/github.com/couchbase/cbflag/cli.go:59 +0x1cb
main.main()
/home/couchbase/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/cmd/cbopctl/main.go:204 +0x183a

Hi,

The short answer is that you can use cbopctl create --dry-run to validate your cluster definition, and then use kubectl to actually submit the resource to the Kubernetes API.
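For example, a minimal sketch of that workflow (the file name here matches the one from your command; adjust the path to suit):

```shell
# Validate the cluster definition with the Operator's own tooling...
cbopctl create --dry-run -f couchbase-cluster.yaml

# ...then submit it with plain kubectl, which handles oidc and other
# kubeconfig auth providers natively.
kubectl create -f couchbase-cluster.yaml
```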

In Operator version 1.2.0 (out soon!) we do away with that command altogether and instead use dynamic admission controllers to provide the same functionality without special tooling; kubectl and oc will work just fine.

That said, our supported tooling still relies on authentication plugins to work. In 1.2.0 we will support plain Kubernetes authentication mechanisms, plus Azure and GCP. OIDC isn’t available in the client library version we use, so we can’t sneak it in quite yet. We can upgrade the client libraries in the release after that and support OIDC in that time frame.

Regards Si


Hi Simon!
Thanks for the information. Can you tell me what my errors are actually about? Why is it complaining about OIDC, and where is that stemming from? I didn’t set up the Kubernetes infrastructure, I’m just consuming it, so I don’t know a lot about it yet. Is there something I can ask that admin about that might make this OIDC stuff go away, or something I could do differently?
Thanks again for replying.

Unsure about making it go away; you’ll have to talk to your admins :smiley:

So if you look at your ~/.kube/config file your current context points to a user. Typically for things like minikube/minishift they will use X.509 certificates or tokens for authentication. Yours, I’ll make a wager, contains an auth-provider section. This is where oidc is coming from.
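As an illustration, an oidc-backed user entry in ~/.kube/config typically looks something like this (all names and values below are made up for the example):

```yaml
users:
- name: corporate-sso-user     # hypothetical user name
  user:
    auth-provider:
      name: oidc               # this is the string cbopctl panics on
      config:
        client-id: kubernetes
        idp-issuer-url: https://sso.example.com
        id-token: eyJhbGciO...
```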

The issue comes from the fact that the providers are not part of the library by default and need to be manually compiled in.

What you could do to bypass oidc (if allowed!!) is create a service account. This will have a secret associated with it containing a token that can be used in your kubeconfig:

$ kubectl create serviceaccount test
$ kubectl get serviceaccount test -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-03-14T13:56:36Z"
  name: test
  namespace: default
  resourceVersion: "1170564"
  selfLink: /api/v1/namespaces/default/serviceaccounts/test
  uid: fb907791-4660-11e9-b220-080027222af3
secrets:
- name: test-token-cg9sh
$ kubectl get secret test-token-cg9sh -o yaml | grep token
  token: ZXlKaGJH....
$ echo -n ZXlKaGJH... | base64 -d
$ kubectl config set-credentials test --token=eyJhbGciO...

Then it’s simply a case of creating a context that links the cluster with the credentials and you should be able to use cbopctl once all the correct roles are bound to that service account.
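A sketch of those remaining steps, assuming the cluster in your kubeconfig is named my-cluster (the context and binding names here are placeholders, and the exact role cbopctl needs depends on the Operator documentation — your admin may require something stricter):

```shell
# Create a context pairing the existing cluster with the token-based
# "test" credentials added via set-credentials above, then switch to it.
kubectl config set-context test-context --cluster=my-cluster --user=test --namespace=default
kubectl config use-context test-context

# Bind the service account to a role so it can create resources;
# "edit" is used here purely as an illustrative built-in ClusterRole.
kubectl create rolebinding test-edit --clusterrole=edit \
  --serviceaccount=default:test --namespace=default
```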

Half of the fun of this industry is bypassing security measures :wink:
