Couchbase Operator pod restarts by itself for no apparent reason

Hi everyone,
I run Couchbase Server in three environments: dev, beta, and UAT, and deploy the Couchbase cluster using a Helm chart. The Couchbase Operator version is 2.41.1.
In addition to the normal services, I also configured backups to object storage.
Today I logged in to Rancher and was surprised to see that the Couchbase Operator pods had restarted by themselves for no apparent reason, while Couchbase services such as Query continue to work normally.
It is worth noting that the most recent restarts of the three Couchbase Operator pods for the dev, beta, and UAT environments happened at the same time (as of writing this topic, about 3d6h ago).

I can’t think of any reason why the Couchbase Operator pod would restart by itself. Please help me!
(Sorry if my English grammar isn’t correct :D)
P.S.: Because I’m a new user, I can only upload one media item in this post.

Thank you.

Update: I found a way to see the logs of the previous Couchbase Operator pod; the result is below:

Summary error: Failed to get API Group-Resources.
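For anyone wondering how to pull those previous-container logs: here is a minimal sketch, assuming the official `kubernetes` Python client is installed and that the pod name and namespace below are placeholders for your own values.

```python
# Minimal sketch: read the logs of the *previous* (pre-restart) container of the
# Couchbase Operator pod. Pod name and namespace are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
core = client.CoreV1Api()

logs = core.read_namespaced_pod_log(
    name="couchbase-operator-xxxxxxxxxx-xxxxx",  # replace with your operator pod name
    namespace="couchbase",                       # replace with your namespace
    previous=True,                               # logs from the container before the restart
)
print(logs)
```

This is equivalent to viewing the previous container's logs through Rancher or the CLI; it just makes the step reproducible.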
I wonder if this is the Couchbase Operator’s fault? Please help me :frowning:

Hello!

That error indicates a failure while attempting to retrieve information from the Kubernetes API. Periodically the Couchbase Operator must cache resources from the Kubernetes API in order to reconcile the user resources against the cluster definition. With a timeout of 32 seconds, that would indicate that the API server became unavailable to the Operator for a considerable amount of time.
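As a rough illustration only (the Operator itself is written in Go, so this is not its actual code), the "get API Group-Resources" step is essentially API discovery against the Kubernetes API server. A minimal sketch with the `kubernetes` Python client shows how that discovery call fails in the same way when the API server is unreachable:

```python
# Rough analogue of the Operator's API discovery step, not its implementation.
# If the API server is unreachable, this call times out or raises a connection error.
from kubernetes import client, config

config.load_kube_config()

try:
    # List the API groups the server exposes; cap the request at 32 seconds
    # to mirror the timeout mentioned above.
    groups = client.ApisApi().get_api_versions(_request_timeout=32)
    print([g.name for g in groups.groups])
except Exception as exc:
    print(f"Failed to get API group resources: {exc}")
```

If you can reproduce that failure from outside the Operator, it points at API server or network availability rather than the Operator itself.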

Thanks,

Justin Ashworth


Thank you for the answer! :smiley:
