Kubernetes Operator Helm setup example for backup and user?

I am using operator v240 and have a bucket and a user specified in the values file for the Helm chart; the cluster comes up OK, but neither the bucket nor the user is created, and I don’t find any errors anywhere either.
Does anyone have a working example of this, or can anyone shed light on why those pieces aren’t being created?

On the backup issue: in the UI I get "The backup interface is only available on Couchbase clusters running the backup service. No Backup nodes were found on this cluster." along with the pop-up error at the bottom: "Could not check if backup service is accessible on this node".

Thanks in advance
The values file is here:

# TLS Certs that will be used to encrypt traffic between operator and couchbase
tls:
  # -- Enable to auto create certs
  generate: true
  # -- Legacy TLS configuration with static format which requires PKCS#1 formatted keys.
  # Legacy format is used implicitly during upgrade when old static keys exist.
  # The default is 'false' which supports additional formats and multiple root CAs.
  legacy: false
  # -- Expiry time of CA in days for generated certs
  expiration: 365
  # -- This field defines whether node-to-node encryption is enabled.
  # Must be either 'All' or 'ControlPlaneOnly'.
  # If not specified, data between Couchbase Server nodes is not encrypted.
  nodeToNodeEncryption: 


# @default -- will be filled in as below
# -- Controls the generation of the CouchbaseCluster CRD
cluster:
  networking:
    adminConsoleServiceTemplate:
      # -- ServiceSpec describes the attributes that a user creates on a
      # service.
      spec:
        type: LoadBalancer
    dns:
      domain: executeinc.com
    exposedFeatureServiceTemplate:
      spec:
        type: LoadBalancer
    exposedFeatures:
      - client
      
#    tls:
#      rootCAs:
#        - couchbaseoperator-couchbasedev-ca-tls
#      secretSource:
#        serverSecretName: couchbaseoperator-couchbasedev-server-tls
  security:
    adminSecret: ''
    # -- Cluster administrator password, auto-generated when empty
    password: somepassword
    # -- Couchbase RBAC Users
    rbac:
      # -- Managed defines whether RBAC is managed by us or the clients.
      managed: true
    # -- Cluster administrator username
    username: someadmin
  # -- SecurityContext allows the configuration of the security context for all
  # Couchbase server pods.  When using persistent volumes you may need to set
  # the fsGroup field in order to write to the volume.  For non-root clusters
  # you must also set runAsUser to 1000, corresponding to the Couchbase user in
  # official container images.  More info:
  # https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

  servers:
    # -- Name for the server configuration. It must be unique.
    default:
      # -- AutoscaledEnabled defines whether the autoscaling feature is enabled
      # for this class. When true, the Operator will create a
      # CouchbaseAutoscaler resource for this server class.  The
      # CouchbaseAutoscaler implements the Kubernetes scale API and can be
      # controlled by the Kubernetes horizontal pod autoscaler (HPA).
      autoscaleEnabled: false
      # -- Env allows the setting of environment variables in the Couchbase
      # server container.
      env: []
      # -- EnvFrom allows the setting of environment variables in the Couchbase
      # server container.
      envFrom: []
      # -- Pod defines a template used to create pod for each Couchbase server
      # instance.  Modifying pod metadata such as labels and annotations will
      # update the pod in-place.  Any other modification will result in a
      # cluster upgrade in order to fulfill the request. The Operator reserves
      # the right to modify or replace any field.  More info:
      # https://kubernetes.io/docs/reference/generated/kubernetes-
      # api/v1.21/#pod-v1-core
      pod:
        spec: {}
      services:
      - data
      - index
      - query
      - search
      - analytics
      - eventing
      - backup
      size: 1
buckets:
  # -- Name of the bucket to create. @default -- will be filled in as below
  #default: 
  devtn: 
    # -- CompressionMode defines how Couchbase server handles document
    # compression.  When off, documents are stored in memory, and transferred to
    # the client uncompressed. When passive, documents are stored compressed in
    # memory, and transferred to the client compressed when requested.  When
    # active, documents are stored compressed in memory and when transferred to
    # the client.  This field must be "off", "passive" or "active", defaulting
    # to "passive".  Be aware "off" in YAML 1.2 is a boolean, so must be quoted
    # as a string in configuration files.
    compressionMode: passive
    # -- ConflictResolution defines how XDCR handles concurrent write conflicts.
    # Sequence number based resolution selects the document with the highest
    # sequence number as the most recent. Timestamp based resolution selects the
    # document that was written to most recently as the most recent.  This field
    # must be "seqno" (sequence based), or "lww" (timestamp based), defaulting
    # to "seqno".
    conflictResolution: seqno
    # -- EnableFlush defines whether a client can delete all documents in a
    # bucket. This field defaults to false.
    enableFlush: false
    # -- EnableIndexReplica defines whether indexes for this bucket are
    # replicated. This field defaults to false.
    enableIndexReplica: false
    # -- EvictionPolicy controls how Couchbase handles memory exhaustion.  Value
    # only eviction flushes documents to disk but maintains document metadata in
    # memory in order to improve query performance.  Full eviction removes all
    # data from memory after the document is flushed to disk.  This field must
    # be "valueOnly" or "fullEviction", defaulting to "valueOnly".
    evictionPolicy: valueOnly
    # -- IOPriority controls how many threads a bucket has, per pod, to process
    # reads and writes. This field must be "low" or "high", defaulting to "low".
    # Modification of this field will cause a temporary service disruption as
    # threads are restarted.
    ioPriority: low
    # -- The type of the bucket to create by default. Removed from CRD as only
    # used by Helm.
    kind: CouchbaseBucket
    # -- MemoryQuota is a memory limit to the size of a bucket.  When this limit
    # is exceeded, documents will be evicted from memory to disk as defined by
    # the eviction policy.  The memory quota is defined per Couchbase pod
    # running the data service.  This field defaults to, and must be greater
    # than or equal to 100Mi.  More info:
    # https://kubernetes.io/docs/concepts/configuration/manage-resources-
    # containers/#resource-units-in-kubernetes
    memoryQuota: 100Mi
    # -- Replicas defines how many copies of documents Couchbase server
    # maintains.  This directly affects how fault tolerant a Couchbase cluster
    # is.  With a single replica, the cluster can tolerate one data pod going
    # down and still service requests without data loss.  The number of replicas
    # also affect memory use.  With a single replica, the effective memory quota
    # for documents is halved, with two replicas it is one third.  The number of
    # replicas must be between 0 and 3, defaulting to 1.
    replicas: 1
    # -- Scopes defines whether the Operator manages scopes for the bucket or
    # not, and the set of scopes defined for the bucket.
    scopes:
      # -- Managed defines whether scopes are managed for this bucket. This
      # field is `false` by default, and the Operator will take no actions that
      # will affect scopes and collections in this bucket.  The default scope
      # and collection will be present.  When set to `true`, the Operator will
      # manage user defined scopes, and optionally, their collections as defined
      # by the `CouchbaseScope`, `CouchbaseScopeGroup`, `CouchbaseCollection`
      # and `CouchbaseCollectionGroup` resource documentation.  If this field is
      # set to `false` while the scopes are already managed, then the Operator will leave
      # whatever configuration is already present.
      managed: false
      # -- Resources is an explicit list of named resources that will be
      # considered for inclusion in this bucket.  If a resource reference
      # doesn't match a resource, then no error conditions are raised due to
      # undefined resource creation ordering and eventual consistency.
      resources: []

users:
  dev1:
    autobind: true
    # password to use for user authentication
    # (alternatively use authSecret)
    password: somepassword
    # optional secret to use containing user password
    authSecret:
    # domain of user authentication
    authDomain: local
    # roles attributed to group
    roles:
      #- name: bucket_admin
      #admin, analytics_admin, analytics_manager, analytics_reader, analytics_select, 
      #backup_admin, bucket_admin, bucket_full_access, cluster_admin, data_backup, 
      #data_dcp_reader, data_monitoring, data_reader, data_writer, eventing_admin, 
      #external_stats_reader, fts_admin, fts_searcher, mobile_sync_gateway, 
      #mobile_sync_gateway_application, mobile_sync_gateway_application_read_only, 
      #mobile_sync_gateway_architect, mobile_sync_gateway_dev_ops, mobile_sync_gateway_replicator, 
      #query_delete, query_execute_external_functions, query_execute_functions, 
      #query_execute_global_external_functions, query_execute_global_functions, 
      #query_external_access, query_insert, query_manage_external_functions, 
      #query_manage_functions, query_manage_global_external_functions, 
      #query_manage_global_functions, query_manage_index, query_select, 
      #query_system_catalog, query_update, replication_admin, replication_target, 
      #ro_admin, scope_admin, security_admin, security_admin_external, 
      #security_admin_local, views_admin, views_reader
      - name: bucket_admin
#        - views_reader
#        - query_select
#        - query_update
#        - data_reader
#        - data_writer
#        - search_reader
#        - application_access
        #bucket: devtn

Regarding the backup issue: we have implemented backup functionality natively within the Operator, so there is no need to use the backup service that is built into Couchbase Server.
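
Since the built-in backup service isn’t used, the switch that turns Operator-managed backup on is the backup stanza of the CouchbaseCluster spec, which maps to cluster.backup in the chart values. A minimal sketch, with the caveat that you should verify the keys against your chart/operator version:

cluster:
  backup:
    # Enable Operator-managed backup: the Operator creates backup CronJobs
    # (running cbbackupmgr) for CouchbaseBackup resources in the namespace.
    managed: true

The schedule, retention and destination then live in a separate CouchbaseBackup resource (sketched further down in this thread).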

Can you share any error output you’re getting when trying to apply the user? Keep in mind there are some requirements on the characters allowed in the password.
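
One thing that stands out in the values you posted is that the bucket: line under the bucket_admin role is commented out; bucket-scoped roles generally need a bucket to bind to. For reference, here is a minimal sketch of the users section, reusing your dev1/devtn names (treat it as a sketch rather than a verified config):

users:
  dev1:
    # bind the listed roles to the user automatically
    autobind: true
    password: somepassword
    authDomain: local
    roles:
      # bucket_admin is a bucket-scoped role, so give it a bucket to act on
      - name: bucket_admin
        bucket: devtn

If it still silently does nothing, the Operator pod’s log and a kubectl describe on the generated CouchbaseUser/CouchbaseGroup resources are usually where RBAC sync problems show up (this also requires cluster.security.rbac.managed: true, which you already have).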

If backup is handled within the Operator, am I missing how to point it at, say, an S3 bucket to back up to? Since the UI is not available, are command-line commands needed?

I’ll try the user creation again, but as I recall there weren’t any errors; it just didn’t create the user.

Apologies, I should’ve shared a link to the docs: Configure Automated Backup and Restore | Couchbase Docs
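
For anyone who finds this later: per those docs, the schedule and destination are defined in a CouchbaseBackup resource rather than in the cluster itself. A rough sketch of a weekly-full / daily-incremental backup pushed to S3 (the resource name, bucket URI and s3-secret Secret are placeholders, and the objectStore fields in particular should be checked against the docs for your operator version):

apiVersion: couchbase.com/v2
kind: CouchbaseBackup
metadata:
  name: dev-backup                # placeholder; create it in the cluster's namespace
spec:
  strategy: full_incremental
  full:
    schedule: "0 3 * * 0"         # cron: weekly full backup
  incremental:
    schedule: "0 3 * * 1-6"       # cron: daily incrementals
  size: 20Gi                      # size of the PVC used to stage backup data
  objectStore:
    uri: s3://my-backup-bucket    # placeholder S3 bucket to upload to
    secret: s3-secret             # placeholder Secret holding the S3 credentials

With this in place the Operator runs the backup jobs on that schedule, so nothing needs to be driven from the UI or by hand; restores are requested with a CouchbaseBackupRestore resource in the same way.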

Thanks!
