Operator pods are restarting several times

Hi, my Couchbase Operator is deployed in its own cluster and the operator pods are restarting several times a day. After changing the log level to error the number of restarts dropped, but the pods still restart frequently. The details and some debug logs from the operator are below. I have also deployed multiple operators in the cluster behind a single cluster-wide Admission Controller, and I noticed that all of the operators show the same restart count.
Restarts:
dev-couchbase-operator-6c85678b7c-95xrx 1/1 Running 1962 16d

/jenkins/workspace/couchbase-operator-build/goproj/src/github.com/couchbase/couchbase-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
W0921 06:28:10.696199 1 reflector.go:270] github.com/couchbase/couchbase-operator/pkg/client/cache.go:56: watch of *v1.Service ended with: too old resource version: 402987602 (402990745)
log: exiting because of error: log: cannot create log: open /tmp/couchbase-operator.dev-couchbase-operator-6c85678b7c-95xrx.unknownuser.log.WARNING.20200921-062810.1: no such file or directory
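
For reference, this is roughly how I have been gathering the details after each restart; the namespace and label selector are assumptions from my own environment, so adjust them for yours:

# Restart counts for the operator pods (label selector is an assumption from my deployment)
kubectl -n dev get pods -l app=couchbase-operator

# Logs from the container instance that crashed, not the one currently running
kubectl -n dev logs dev-couchbase-operator-6c85678b7c-95xrx --previous

# Exit code, last state and events for the pod
kubectl -n dev describe pod dev-couchbase-operator-6c85678b7c-95xrx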

The Operator version you are using is too old. From what I can guess, you are using 2.0.0; you need at least 2.0.1 to avoid this “crash”. Let me know if my guess was wrong :smiley:
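
If you want to double-check before and after the upgrade, something like the following should show which image the Deployment is running and bump it. The deployment, container, and image names here are guesses based on your pod name, so verify them against your own manifests first:

# Show the operator image currently deployed (deployment name guessed from your pod name)
kubectl -n dev get deployment dev-couchbase-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Roll the Deployment to 2.0.1 (container name and image repository are assumptions)
kubectl -n dev set image deployment/dev-couchbase-operator \
  couchbase-operator=couchbase/operator:2.0.1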

Thanks for the response. You are right, I am using operator version 2.0.0. I will upgrade it to 2.0.1 and let you know how it goes.

Hey Murray,

Thanks, after upgrading to 2.0.1 the issue has been resolved. I have been monitoring it for the last few weeks. :blush:

After upgrading the operator I still see the same issue; the pod keeps restarting.