Can't log in to Couchbase UI

Hi All,

We just deployed a Couchbase cluster on Kubernetes via the operator and operator-admission.
The image we use is couchbase/server:enterprise-6.0.1.
Everything worked well until we exposed the UI service via nginx-ingress.

The UI loads correctly, but when I enter the login information, it shows the message:

Login failed. Please try again.

I tried tailing /opt/couchbase/var/lib/couchbase/logs/debug.log for clues, but all I see is a login success audit message like this:

[ns_server:debug,2019-07-17T04:19:55.841Z,ns_1@cb-cluster-prod-0000.cb-cluster-prod.default.svc:ns_audit<0.926.0>:ns_audit:handle_call:111]Audit login_success: [{roles,[<<"admin">>]},
  {real_userid,{[{domain,builtin},
                 {user,<<"Administrator">>}]}},
  {sessionid,<<"4bbda47fbd4d23f98e3db213dbc21329">>},
  {remote,{[{ip,<<"172.20.0.140">>},{port,39160}]}},
  {timestamp,<<"2019-07-17T04:19:55.841Z">>}]

Looking at the browser console, it only shows a 401 Unauthorized when trying to access /pools.

Accessing the server with couchbase-cli works properly; it's only the UI that doesn't.
Could you please help me debug this issue?

Many Thanks,
Xi

Edit:

  • Even stranger: the UI on another deployment that used to be able to log in now fails with the same issue.

In short, you cannot use an ingress, as it doesn't support client session affinity. Please see https://issues.couchbase.com/browse/MB-31756

Yeah, that is a bummer. I also banged my head on this one; we tried quite a few things to implement cookie affinity / stickiness with our nginx ingress, and nothing worked…

Our workaround for the moment is to configure our ingress to only route to the service for the first cluster node, i.e. my-cluster-0000.
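
For illustration, here's roughly what that looks like (just a sketch: the hostname and the extensions/v1beta1 API version are our own choices, and we assume the operator exposes the per-pod service my-cluster-0000 on the admin port 8091):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: couchbase-ui
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - host: couchbase.example.com          # hypothetical hostname
        http:
          paths:
          - path: /
            backend:
              serviceName: my-cluster-0000   # service for the first node only
              servicePort: 8091              # Couchbase admin/UI port

The obvious downside is that all UI traffic is pinned to a single node, and the ingress breaks if that pod goes away.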

It seems like something that really should be easier…

Have you seen https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/ ? Things have changed quite a bit; this appears to support what's required these days.

Yes, we did experiment with quite a few combinations of the nginx ingress controller's affinity settings, and nothing seemed to work for us - although I agree that on paper it seems like it should have…
What I suspect is that the ingress affinity settings only implement stickiness for the back-ends that the ingress itself routes to - but in our case, that back-end is a single k8s service (the mycluster-ui service created by the Couchbase AO). I don't think the ingress knows about the individual pods behind that service, so it cannot make these flows sticky.
Basically we need stickiness not just for the ingress, but also for the k8s service.

At least that’s my theory, but not 100% sure.

I'm no expert, but I thought the ingress can either use the service for load-balancing, or use it for service discovery ("give me all your endpoints") and then do affinity based on a client cookie and perform its own routing across those endpoints. Regular ClientIP-based service affinity cannot possibly work, because the ingress's IP address is the same all the time, so all packets end up on the same pod!

Perhaps ask the ingress controller authors for clarity; I'd be interested in their answer.

Yes, I would have thought that’s how it works - otherwise there really isn’t much point to this affinity setting… We’ll drill a bit further and post updates here…

OK, so after a bit more poking around, I can confirm that this actually works with the nginx ingress controller's cookie-based affinity, by setting the nginx.ingress.kubernetes.io/affinity: cookie annotation on the ingress.

The reason it initially didn't seem to work was that we were testing over both http and https URLs. The initial cookie, set via https, was automatically marked Secure, which meant the http URLs couldn't use it but also couldn't re-set it: every http response tried to set-cookie, the browser always rejected it, so requests went out with no cookie and a new one was re-issued on each response. I suppose a solution would be to use different cookie names for http and https - but in practice, for this ingress, we probably only want to support https anyway.
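
For reference, the combination that ended up working for us looks roughly like this (a sketch - the cookie name, host, and TLS secret name are our own choices, not required values):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: couchbase-ui
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/affinity: cookie
        nginx.ingress.kubernetes.io/session-cookie-name: couchbase-route  # hypothetical name
        nginx.ingress.kubernetes.io/ssl-redirect: "true"  # force https so the Secure cookie always applies
    spec:
      tls:
      - hosts:
        - couchbase.example.com
        secretName: couchbase-ui-tls     # hypothetical TLS secret
      rules:
      - host: couchbase.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: my-cluster-ui # the UI service created by the operator
              servicePort: 8091

With ssl-redirect in place, the http/https cookie confusion described above can't happen, since everything gets bounced to https first.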


I’d hope everyone used TLS for everything these cloudy days! Glad you’ve got it working.