Running Couchbase on Minikube

I am trying to run Couchbase on Minikube. How do I get it running?
I’m getting the following error:

Error updating Endpoint Slices for Service default/cb-example: Error updating cb-example-5npm6 EndpointSlice for Service default/cb-example: EndpointSlice.discovery.k8s.io "cb-example-5npm6" is invalid: ports: Too many: 237: must have at most 100 items
endpoint-slice-controller

Failed to update endpoint default/cb-example: Operation cannot be fulfilled on endpoints "cb-example": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/default/cb-example, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0bcdbdcf-292e-43c9-b116-b3b0b94033f3, UID in object meta:

I am using this to configure the cluster:
apiVersion: v1
kind: Secret
metadata:
  name: cb-example-auth
type: Opaque
data:
  username: QWRtaW5pc3RyYXRvcg== # Administrator
  password: XXXXXX # password
---
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  image: couchbase/server:6.5.0
  security:
    adminSecret: cb-example-auth
  paused: false
  softwareUpdateNotifications: true
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 1000
  platform: aws
  cluster:
    clusterName: cb-example
    dataServiceMemoryQuota: 512Mi
    indexServiceMemoryQuota: 256Mi
    searchServiceMemoryQuota: 256Mi
    indexStorageSetting: memory_optimized
    autoFailoverTimeout: 120s
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 120s
    autoFailoverServerGroup: false
    autoCompaction:
      databaseFragmentationThreshold:
        percent: 30
        size: 1Gi
      viewFragmentationThreshold:
        percent: 30
        size: 1Gi
      parallelCompaction: false
      timeWindow:
        start: 02:00
        end: 06:00
        abortCompactionOutsideWindow: true
      tombstonePurgeInterval: 72h
  servers:
  - size: 2
    name: all_services
    services:
    - data
    - index
    - query
    - search
  buckets:
    managed: false
    selector:
      matchLabels:
        cluster: cb-example
  monitoring:
    prometheus:
      enabled: true
      image: couchbase/exporter:1.0.1
      resources:
        requests:
          cpu: 100m
          memory: 100Mi

Nothing can be done about that: Couchbase Server uses a lot of ports, they are all hard coded, and they are all required for correct function. The number is actually reduced in 6.5.0+, and this will be reflected in Operator 2.1. I count 90, which is better and within range!
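
If you want to check the count yourself, something along these lines should work; this assumes the ports in question are the ones on the default/cb-example Service named in the event above:

# Count the ports exposed on the cb-example Service (the one named in the event).
kubectl get svc cb-example -n default -o jsonpath='{.spec.ports[*].port}' | wc -w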

I’m assuming you are using an uncertified platform? I can see from the docs that EndpointSlices exist from Kubernetes 1.17 onwards, while Operator 2.0 is only certified to work with 1.13 and 1.14 (https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#endpointslice-controller).
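
If you want to stay within the certified range while testing locally, Minikube lets you pin the Kubernetes version at start time; a sketch, with v1.14.10 picked only as an example of a 1.14 release:

# Start a fresh Minikube profile on a pre-EndpointSlice Kubernetes release.
minikube start --kubernetes-version=v1.14.10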

However, if you follow that documentation, there is a command line flag (--max-endpoints-per-slice) that you can modify to allow it to function correctly.
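
On Minikube, controller-manager flags like that one are normally passed through with --extra-config; a rough sketch, assuming a fresh cluster, with 250 chosen only because it clears the 237 ports reported in the event:

# Pass the flag above through to the controller manager; 250 is just an example value.
minikube start --extra-config=controller-manager.max-endpoints-per-slice=250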

Let me know if this works and how you get on!

I am running on Ubuntu 20.

Is this issue resolved?

I am also seeing the same issue in my environment.

This is not an issue; it is merely a warning from Kubernetes. Operator 2.0.2 is fully tested with Kubernetes 1.18 and works as intended regardless of the event.

The warning is from Kubernetes’ endpoint controller. It will go away, as I stated earlier, when using Operator 2.1 (coming soon!)

Those logs suggest the cluster was already broken when the operator was restarted. Can you try deleting the cluster and recreating it? The first error, if any, will be the root cause of the problem and we can go from there. Better still, if you encounter an error, use the cbopinfo support tool distributed with the operator to collect logs.

I already tried to re-create the cluster but am getting the same issue. Also, the operator was not restarted at any time.

The logs posted say it’s seen a cluster:

{"level":"info","ts":1602250251.6542253,"logger":"cluster","msg":"Watching new cluster","cluster":"isds-nosql-couchbase/rsinghal15"}

and its state is already failed:

{"level":"error","ts":1602250251.7151635,"logger":"cluster","msg":"Cluster setup failed","cluster":"rohit-couchbase/rsinghal15","error":"unexpected cluster phase: Failed"

The only way that can happen is if the cluster already exists and the status is set to Failed. Check whether the cluster already exists (kubectl get cbc -n rohit-couchbase rsinghal15); if it does, delete it. (I should point out this will not happen in 2.1 when it is released.)
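
Spelled out, the check and (if it exists) the cleanup would be roughly:

# Check for an existing CouchbaseCluster resource, then remove it if present.
kubectl get cbc -n rohit-couchbase rsinghal15
kubectl delete cbc -n rohit-couchbase rsinghal15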

Start tailing the logs, and make a note of the timestamp. Recreate the cluster. When the failure occurs, go back to the start timestamp and find the first entry after it with level="error". If you can post the whole log output somewhere, that would be good, or a full cbopinfo archive would be better.
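
Something along these lines should do for the tailing and filtering; the deployment name couchbase-operator and the namespace are assumptions on my part, so adjust them to match your install:

# Tail the operator logs (deployment name and namespace assumed) and surface error entries.
kubectl logs -f deployment/couchbase-operator -n rohit-couchbase | grep '"level":"error"'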

There is only a single cluster with this name, and there is no existing cluster. I can’t share the log file publicly due to security concerns, but I have raised a case with support, so I would request that you please check the ticket.

Thanks, I’ll await anything they send our way!