Using Couchbase operator 1.1 with Istio Service Mesh

I am trying to create a Couchbase cluster in GKE with Istio (envoy proxy) using the Autonomous Operator 1.1.
The operator starts up fine, and after applying the YAML to create the CouchbaseCluster the first node comes up, followed by the second. The issue is that the second node appears to fail to join the cluster, and no additional nodes are started.

I am not sure how to debug what is happening or what needs to be done to get the cluster to start up in my GKE cluster. Any assistance is appreciated.

Thank you

Can’t say for sure, but it sounds like a resource issue – i.e. the cluster doesn’t have enough available resources to meet what’s being requested for the CouchbaseCluster.

Can you get the output of kubectl describe cbc ?

It’s fairly easy to spot what is going wrong, if you know how…

The short answer is switch istio off in your Couchbase namespace with

kubectl label namespace <namespace> istio-injection-
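
A minimal sketch of what that looks like, assuming the namespace really is called couchbase; note the change only affects pods created afterwards, so existing Couchbase pods with a sidecar need to be deleted so they get recreated without injection:

# remove the auto-injection label from the Couchbase namespace
kubectl label namespace couchbase istio-injection-
# confirm the label is no longer present
kubectl get namespace couchbase --show-labels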

Basically istio uses envoy, which is very invasive. Envoy intercepts all the ingress traffic for the pod and then resurfaces it as having come from 127.0.0.1 after doing what it does in the sidecar. As soon as you add SNAT into the system, Couchbase Server cannot successfully do a PTR record lookup to determine the correct name to use for the peer node, and it all ends up very unhappy.
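
To illustrate the PTR problem (just a sketch, with <pod-ip> as a placeholder for a Couchbase pod's IP):

# a connection arriving directly from a peer pod's IP can be reverse-resolved to that pod's DNS name
dig +short -x <pod-ip>
# through the Envoy sidecar the same connection appears to come from loopback, which resolves to nothing useful
dig +short -x 127.0.0.1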

Here is the describe output:

$ kubectl describe cbc -n couchbase
Name:         cb-example
Namespace:    couchbase
Labels:       <none>
Annotations:  <none>
API Version:  couchbase.com/v1
Kind:         CouchbaseCluster
Metadata:
  Creation Timestamp:  2019-04-02T14:48:26Z
  Generation:          1
  Resource Version:    34236273
  Self Link:           /apis/couchbase.com/v1/namespaces/couchbase/couchbaseclusters/cb-example
  UID:                 5f24d672-5556-11e9-8209-42010a8a023d
Spec:
  Auth Secret:  cb-example-auth
  Base Image:   couchbase/server
  Buckets:
    Conflict Resolution:  seqno
    Enable Flush:         true
    Eviction Policy:      fullEviction
    Io Priority:          high
    Memory Quota:         128
    Name:                 testxdcr
    Replicas:             3
    Type:                 couchbase
    Conflict Resolution:  seqno
    Enable Flush:         true
    Eviction Policy:      noEviction
    Io Priority:          high
    Memory Quota:         128
    Name:                 testmem
    Replicas:             3
    Type:                 ephemeral
    Conflict Resolution:  seqno
    Enable Flush:         true
    Eviction Policy:      fullEviction
    Io Priority:          high
    Memory Quota:         128
    Name:                 cachebucket
    Replicas:             3
    Type:                 couchbase
    Conflict Resolution:  seqno
    Enable Flush:         true
    Eviction Policy:      noEviction
    Io Priority:          high
    Memory Quota:         128
    Name:                 wcsession
    Replicas:             3
    Type:                 ephemeral
  Cluster:
    Analytics Service Memory Quota:                 1024
    Auto Failover Max Count:                        3
    Auto Failover On Data Disk Issues:              true
    Auto Failover On Data Disk Issues Time Period:  120
    Auto Failover Server Group:                     false
    Auto Failover Timeout:                          120
    Cluster Name:                                   cb-example
    Data Service Memory Quota:                      512
    Eventing Service Memory Quota:                  256
    Index Service Memory Quota:                     256
    Index Storage Setting:                          memory_optimized
    Search Service Memory Quota:                    256
  Expose Admin Console:                             true
  Servers:
    Name:  data_index_query_services
    Pod:
      Node Selector:
        800-flowers.net/kube.node.type:  couchbase
      Resources:
        Requests:
          Cpu:     2
          Memory:  2000Mi
    Services:
      data
      index
      query
    Size:  1
    Name:  data_services
    Pod:
      Node Selector:
        800-flowers.net/kube.node.type:  couchbase
      Resources:
        Requests:
          Cpu:     2
          Memory:  2000Mi
    Services:
      data
    Size:                         3
  Software Update Notifications:  false
  Version:                        enterprise-5.5.2
Status:
  Admin Console Port:      31515
  Admin Console Port SSL:  32048
  Buckets:                 <nil>
  Cluster Id:              5457b845b177fc89aede8178f521b49e
  Conditions:
    Available:
      Last Transition Time:  2019-04-02T14:48:53Z
      Last Update Time:      2019-04-02T14:48:53Z
      Reason:                Cluster available
      Status:                True
    Balanced:
      Last Transition Time:  2019-04-02T14:48:44Z
      Last Update Time:      2019-04-02T14:48:44Z
      Message:               Data is equally distributed across all nodes in the cluster
      Reason:                Cluster is balanced
      Status:                True
    Scaling:
      Last Transition Time:  2019-04-02T14:48:56Z
      Last Update Time:      2019-04-02T14:48:56Z
      Message:               Current cluster size: 1, desired cluster size: 4
      Reason:                Scaling up
      Status:                True
  Control Paused:            false
  Current Version:           enterprise-5.5.2
  Members:
    Index:  4
    Ready:
      Name:  cb-example-0000
    Unready:
      Name:  cb-example-0003
  Phase:     Running
  Reason:
  Size:      2
Events:      <none>

@simon.murray probably identified it. I’m not familiar with Istio, so can’t comment on it.

Looks like the output got truncated though. It appears to be showing cb-example-0000 is Ready and cb-example-0003 is Unready, but isn’t showing any events. Did the event output get truncated? You can also try

kubectl logs cb-example-0003
kubectl describe pod cb-example-0003

The Events output really did show <none>, so nothing was cut from what I copied.

Here is the cb-example-0003 describe:

Name:               cb-example-0003
Namespace:          couchbase
Priority:           0
PriorityClassName:  <none>
Node:               gke-kip-sandbox-1-couchbase-std-4-1-2df5db02-rjwm/10.33.48.45
Start Time:         Tue, 02 Apr 2019 07:56:26 -0700
Labels:             app=couchbase
                    couchbase_cluster=cb-example
                    couchbase_node=cb-example-0003
                    couchbase_node_conf=data_services
                    couchbase_service_data=enabled
Annotations:        couchbase.version=enterprise-5.5.2
                    sidecar.istio.io/status={"version":"50128f63e7b050c58e1cdce95b577358054109ad2aff4bc4995158c06924a43b","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status:             Running
IP:                 10.36.9.15
Controlled By:      CouchbaseCluster/cb-example
Init Containers:
  istio-init:
    Container ID:  docker://f57286af7491207824d368cf1901aa0a7f65e40325a08d6e61d45bd65cc5df39
    Image:         docker.io/istio/proxy_init:1.0.5
    Image ID:      docker-pullable://istio/proxy_init@sha256:6acdf7ffa6b6615b3fd79028220f0550f705d03ba97b66126e0990639a9f3593
    Port:          <none>
    Host Port:     <none>
    Args:
      -p
      15001
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      8091,8092,8093,8094,8095,8096,9100,9101,9102,9103,9104,9105,9110,9111,9112,9113,9114,9115,9116,9117,9118,9119,9120,9121,9122,11207,11210,11211,11214,11215,18091,18092,18093,18094,18095,18096
      -d

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 02 Apr 2019 07:56:28 -0700
      Finished:     Tue, 02 Apr 2019 07:56:28 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Containers:
  couchbase-server:
    Container ID:   docker://5df896243ceaff677f85287d69797274536ed36d5f546e127ecd7f038477ef28
    Image:          couchbase/server:enterprise-5.5.2
    Image ID:       docker-pullable://couchbase/server@sha256:e950ba00ffb5b2979a0c8650fa9e4e18718b8e0b44e3b132cdcba02e716c5894
    Ports:          8091/TCP, 8092/TCP, 8093/TCP, 8094/TCP, 8095/TCP, 8096/TCP, 9100/TCP, 9101/TCP, 9102/TCP, 9103/TCP, 9104/TCP, 9105/TCP, 9110/TCP, 9111/TCP, 9112/TCP, 9113/TCP, 9114/TCP, 9115/TCP, 9116/TCP, 9117/TCP, 9118/TCP, 9119/TCP, 9120/TCP, 9121/TCP, 9122/TCP, 11207/TCP, 11210/TCP, 11211/TCP, 11214/TCP, 11215/TCP, 18091/TCP, 18092/TCP, 18093/TCP, 18094/TCP, 18095/TCP, 18096/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 02 Apr 2019 07:56:29 -0700
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        2
      memory:     2000Mi
    Readiness:    tcp-socket :8091 delay=10s timeout=5s period=20s #success=1 #failure=1
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z2qn9 (ro)
  istio-proxy:
    Container ID:  docker://0621aa92eff27851f3b41f5e6114c737e35cc78648885d5408277add99d7f368
    Image:         docker.io/istio/proxyv2:1.0.5
    Image ID:      docker-pullable://istio/proxyv2@sha256:8b7d549100638a3697886e549c149fb588800861de8c83605557a9b4b20343d4
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      couchbase
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15005
      --discoveryRefreshDelay
      1s
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --proxyAdminPort
      15000
      --controlPlaneAuthPolicy
      MUTUAL_TLS
    State:          Running
      Started:      Tue, 02 Apr 2019 07:56:30 -0700
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      POD_NAME:                      cb-example-0003 (v1:metadata.name)
      POD_NAMESPACE:                 couchbase (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      ISTIO_META_POD_NAME:           cb-example-0003 (v1:metadata.name)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_METAJSON_ANNOTATIONS:    {"couchbase.version":"enterprise-5.5.2"}

      ISTIO_METAJSON_LABELS:  {"app":"couchbase","couchbase_cluster":"cb-example","couchbase_node":"cb-example-0003","couchbase_node_conf":"data_services","couchbase_service_data":"enabled"}

    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  couchbase-data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-z2qn9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z2qn9
    Optional:    false
  istio-envoy:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  Memory
  istio-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio.default
    Optional:    true
QoS Class:       Burstable
Node-Selectors:  800-flowers.net/kube.node.type=couchbase
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

And the log is here for couchbase-server container:

$ kubectl logs cb-example-0003 -n couchbase -c couchbase-server
Starting Couchbase Server -- Web UI available at http://<ip>:8091
and logs available in /opt/couchbase/var/lib/couchbase/logs

and the istio-proxy container

$ kubectl logs cb-example-0003 -n couchbase -c istio-proxy

[2019-04-02 14:56:31.162][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|9093||istio-pilot.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.163][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|9091||istio-telemetry.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.165][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|15004||istio-telemetry.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.166][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|9093||istio-telemetry.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.166][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|42422||istio-telemetry.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.168][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|9090||prometheus.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.171][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8060||istio-citadel.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.173][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|9093||istio-citadel.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.175][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||istio-sidecar-injector.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.177][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8088||servicegraph.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.179][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|14267||jaeger-collector.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.181][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|14268||jaeger-collector.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.183][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|16686||jaeger-query.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.185][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|9411||zipkin.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.187][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|80||tracing.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.189][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8888||fd-cloud-config-server.common-services.svc.cluster.local during init
[2019-04-02 14:56:31.191][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8080||proxyserver.common-services.svc.cluster.local during init
[2019-04-02 14:56:31.193][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8080|ga|proxyserver.common-services.svc.cluster.local during init
[2019-04-02 14:56:31.195][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8080|canary|proxyserver.common-services.svc.cluster.local during init
[2019-04-02 14:56:31.197][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8081||proxyserver.common-services.svc.cluster.local during init
[2019-04-02 14:56:31.199][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8081|ga|proxyserver.common-services.svc.cluster.local during init
[2019-04-02 14:56:31.201][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8081|canary|proxyserver.common-services.svc.cluster.local during init
[2019-04-02 14:56:31.203][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8080||sessionfacade.retention-services.svc.cluster.local during init
[2019-04-02 14:56:31.206][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8080||enterprise-sms-solution.retention-services.svc.cluster.local during init
[2019-04-02 14:56:31.208][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8080||florist-delivery-apis.retention-services.svc.cluster.local during init
[2019-04-02 14:56:31.210][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8088||jwtauthenticationserver.retention-services.svc.cluster.local during init
[2019-04-02 14:56:31.212][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8088|ga|jwtauthenticationserver.retention-services.svc.cluster.local during init
[2019-04-02 14:56:31.214][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8088|canary|jwtauthenticationserver.retention-services.svc.cluster.local during init
[2019-04-02 14:56:31.216][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|8080||testapp.retention-services.svc.cluster.local during init
[2019-04-02 14:56:31.218][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|80||ext-tracing.istio-system.svc.cluster.local during init
[2019-04-02 14:56:31.219][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||oauth2.googleapis.com during init
[2019-04-02 14:56:31.220][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||datastore.googleapis.com during init
[2019-04-02 14:56:31.221][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|80||ecommerce.800-flowers.net during init
[2019-04-02 14:56:31.222][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||ecommerce.800-flowers.net during init
[2019-04-02 14:56:31.223][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|80||gitlab.com during init
[2019-04-02 14:56:31.224][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||gitlab.com during init
[2019-04-02 14:56:31.225][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|6379||instance.redis during init
[2019-04-02 14:56:31.226][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|80||flowerstest.auth0.com during init
[2019-04-02 14:56:31.227][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||flowerstest.auth0.com during init
[2019-04-02 14:56:31.227][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|80||www2.1800flowers-qa.net during init
[2019-04-02 14:56:31.229][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||www2.1800flowers-qa.net during init
[2019-04-02 14:56:31.230][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|80||18f-addressservice-dev.azurewebsites.net during init
[2019-04-02 14:56:31.231][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster 
...
[2019-04-02 14:56:31.442][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_18091'
[2019-04-02 14:56:31.445][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_8091'
[2019-04-02 14:56:31.448][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_11210'
[2019-04-02 14:56:31.450][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_11207'
[2019-04-02 14:56:31.451][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '35.233.225.165_9092'
[2019-04-02 14:56:31.452][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.9.101_443'
[2019-04-02 14:56:31.452][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.7.158_8080'
[2019-04-02 14:56:31.453][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_9092'
[2019-04-02 14:56:31.454][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_11210'
[2019-04-02 14:56:31.455][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.0.10_53'
[2019-04-02 14:56:31.456][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.7.112_8060'
[2019-04-02 14:56:31.457][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.12.25_18091'
[2019-04-02 14:56:31.458][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.10.32_14267'
[2019-04-02 14:56:31.458][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.5.241_16686'
[2019-04-02 14:56:31.460][19][warning][config] external/envoy/source/server/listener_manager_impl.cc:262] adding listener '0.0.0.0:443': filter chain match rules require TLS Inspector listener filter, but it isn't configured, trying to inject it (this might fail if Envoy is compiled without it)
[2019-04-02 14:56:31.460][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_443'
[2019-04-02 14:56:31.461][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_8091'
[2019-04-02 14:56:31.462][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.7.112_443'
[2019-04-02 14:56:31.462][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_6379'
[2019-04-02 14:56:31.463][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.2.59_44134'
[2019-04-02 14:56:31.464][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.0.1_443'
[2019-04-02 14:56:31.465][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.0.31_443'
[2019-04-02 14:56:31.466][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.5.173_15011'
[2019-04-02 14:56:31.467][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.15.132_42422'
[2019-04-02 14:56:31.467][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_18091'
[2019-04-02 14:56:31.468][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.15.38_80'
[2019-04-02 14:56:31.469][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.15.92_443'
[2019-04-02 14:56:31.470][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.10.32_14268'
[2019-04-02 14:56:31.471][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.12.25_8091'
[2019-04-02 14:56:31.472][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.7.112_853'
[2019-04-02 14:56:31.473][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.7.112_15011'
[2019-04-02 14:56:31.473][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.3.231_443'
[2019-04-02 14:56:31.483][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '35.197.42.119_8082'
[2019-04-02 14:56:31.484][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_11207'
[2019-04-02 14:56:31.485][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.33.7.112_31400'
[2019-04-02 14:56:31.487][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_9091'
[2019-04-02 14:56:31.488][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_15030'
[2019-04-02 14:56:31.490][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_15031'
[2019-04-02 14:56:31.491][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_15010'
[2019-04-02 14:56:31.493][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_9411'
[2019-04-02 14:56:31.495][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_80'
[2019-04-02 14:56:31.497][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_8080'
[2019-04-02 14:56:31.499][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_8888'
[2019-04-02 14:56:31.501][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_8081'
[2019-04-02 14:56:31.502][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_15004'
[2019-04-02 14:56:31.504][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_8060'
[2019-04-02 14:56:31.505][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_9090'
[2019-04-02 14:56:31.507][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_8088'
[2019-04-02 14:56:31.508][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_3000'
[2019-04-02 14:56:31.510][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_9093'
[2019-04-02 14:56:31.510][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener 'virtual'
[2019-04-02 14:56:31.562][19][info][config] external/envoy/source/server/listener_manager_impl.cc:908] all dependencies initialized. starting workers
[2019-04-02T14:56:35.518Z] UF 0 0 2 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45146 10.36.9.15:8091 10.36.9.12:48804
[2019-04-02T14:56:40.518Z] UF 0 0 1 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45166 10.36.9.15:8091 10.36.9.12:48824
[2019-04-02T14:56:43.890Z] - 0 0 0 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45188 10.36.9.15:8091 10.36.9.1:57844
[libprotobuf INFO src/istio/mixerclient/check_cache.cc:160] Add a new Referenced for check cache: Absence-keys: destination.service, source.uid, Exact-keys: context.protocol, context.reporter.kind, destination.namespace, destination.port, destination.uid, source.ip,
[libprotobuf INFO src/istio/mixerclient/check_cache.cc:160] Add a new Referenced for check cache: Absence-keys: destination.service, source.uid, Exact-keys: context.protocol, context.reporter.kind, destination.namespace, destination.port, destination.uid, source.ip,
[libprotobuf INFO src/istio/mixerclient/check_cache.cc:160] Add a new Referenced for check cache: Absence-keys: destination.service, source.uid, Exact-keys: context.protocol, context.reporter.kind, destination.namespace, destination.port, destination.uid, source.ip,
[2019-04-02T14:56:45.718Z] - 2686 2752 71 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45218 10.36.9.15:8091 10.36.8.13:53984
[2019-04-02T14:56:50.707Z] - 2686 2752 23 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45238 10.36.9.15:8091 10.36.8.13:53994
[libprotobuf INFO src/istio/mixerclient/check_cache.cc:160] Add a new Referenced for check cache: Absence-keys: destination.service, source.uid, Exact-keys: context.protocol, context.reporter.kind, destination.namespace, destination.port, destination.uid, source.ip,
[2019-04-02T14:56:55.706Z] - 2685 2752 16 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45258 10.36.9.15:8091 10.36.8.13:54004
[2019-04-02T14:57:20.706Z] - 2687 2752 17 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45344 10.36.9.15:8091 10.36.8.13:54058
[2019-04-02T14:57:23.890Z] - 0 0 0 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45354 10.36.9.15:8091 10.36.9.1:58010
[2019-04-02 14:57:31.553][19][info][main] external/envoy/source/server/drain_manager_impl.cc:63] shutting down parent after drain
[2019-04-02T14:57:25.708Z] - 2687 2751 18 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45364 10.36.9.15:8091 10.36.8.13:54066
[2019-04-02T14:57:30.712Z] - 2686 2752 19 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45386 10.36.9.15:8091 10.36.8.13:54082
[2019-04-02T14:57:35.709Z] - 2686 2751 16 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45400 10.36.9.15:8091 10.36.8.13:54092
[2019-04-02T14:57:40.707Z] - 2686 2752 15 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45416 10.36.9.15:8091 10.36.8.13:54104
[2019-04-02T14:57:43.890Z] - 0 0 0 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45426 10.36.9.15:8091 10.36.9.1:58082
[2019-04-02 14:57:55.098][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_18091'
[2019-04-02 14:57:55.101][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_8091'
[2019-04-02 14:57:55.104][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_8091'
[2019-04-02 14:57:55.107][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_18091'
[2019-04-02T14:57:45.709Z] - 2687 2751 18 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45438 10.36.9.15:8091 10.36.8.13:54114
[2019-04-02T14:57:50.708Z] - 2686 2752 18 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45450 10.36.9.15:8091 10.36.8.13:54126
[2019-04-02T14:58:03.890Z] - 0 0 0 "127.0.0.1:8091" inbound|8091||cb-example-ui.couchbase.svc.cluster.local 127.0.0.1:45486 10.36.9.15:8091 10.36.9.1:58142
[2019-04-02T14:58:23.890Z] - 0 0 0 "127.0.0.1:8091" inbound|8091||cb-example-ui.couchbase.svc.cluster.local 127.0.0.1:45530 10.36.9.15:8091 10.36.9.1:58186
[2019-04-02T14:56:45.518Z] - 751 678 113207 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45204 10.36.9.15:8091 10.36.9.12:48862
[2019-04-02T14:58:43.890Z] - 0 0 0 "127.0.0.1:8091" inbound|8091||cb-example-ui.couchbase.svc.cluster.local 127.0.0.1:45586 10.36.9.15:8091 10.36.9.1:58242
[2019-04-02 14:58:48.764][19][info][upstream] external/envoy/source/server/lds_api.cc:72] lds: remove listener '10.33.12.25_18091'
[2019-04-02 14:58:48.764][19][info][upstream] external/envoy/source/server/lds_api.cc:72] lds: remove listener '10.33.12.25_8091'
[2019-04-02 14:58:48.770][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_8091'
[2019-04-02 14:58:48.773][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_18091'
[2019-04-02 14:59:17.066][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_8091'
[2019-04-02 14:59:17.070][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_18091'
[2019-04-02 21:27:39.221][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:553] removing cluster outbound|8080|ga|aes-pull-secret.retention-services.svc.cluster.local
[2019-04-02 21:27:39.221][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:553] removing cluster outbound|8080|canary|aes-pull-secret.retention-services.svc.cluster.local
[2019-04-02 21:27:39.222][19][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:553] removing cluster outbound|8080||aes-pull-secret.retention-services.svc.cluster.local
[2019-04-02 21:27:40.454][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_8091'
[2019-04-02 21:27:40.458][19][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.36.9.15_18091'

Thank you so much for your help in looking into this.

Will switching istio sidecar injection off for the couchbase cluster cause any issues with my apps that have the sidecar injected? Will they still be able to talk to the cluster?

Can’t make any promises as I didn’t write it!

We use endpoint DNS entries to address pods, not service DNS entries, which is my only concern. If the istio control plane is only concerned with services it probably won’t work. I can see from the logs that there is no mention of individual endpoints, just services. I’d check up on what istio actually does with this in mind.

I tried to exclude my couchbase namespace from istio (so no sidecar on couchbase pods in the couchbase namespace). Couchbase finished its startup, but now I am no longer able to access couchbase from my service running inside the istio mesh. I am seeing errors when it tries to connect, and the following errors in the istio-proxy for my service:

[2019-04-05T15:38:13.605Z] UF 0 0 5 "35.247.96.101:11210" outbound|11210||cb-example-srv.couchbase.svc.cluster.local 10.36.10.6:36062 35.247.96.101:11210 10.36.10.6:36060
[2019-04-05T15:38:16.486Z] UF 0 0 3 "35.247.96.101:11210" outbound|11210||cb-example-srv.couchbase.svc.cluster.local 10.36.10.6:36070 35.247.96.101:11210 10.36.10.6:36068
[2019-04-05T15:38:16.522Z] UF 0 0 4 "35.247.96.101:8091" outbound|8091||cb-example.couchbase.svc.cluster.local 10.36.10.6:38794 35.247.96.101:8091 10.36.10.6:38792

Does anyone know what I need to do to allow a service in the istio mesh to access my couchbase outside the istio mesh in the same kubernetes cluster?

I also tried creating a separate gke cluster for couchbase and installed the couchbase operator 1.1. I then tried to connect to that couchbase cluster from my service running in a different cluster and received many errors similar to those above.

Okay, back to networking basics, let’s see where the problem occurs. So you are going to “connect” from your client (in the service mesh) to Couchbase (outside the service mesh).

First up, on the client, check that DNS is working

dig +short cb-example-0000.cb-example.couchbase.svc.cluster.local. A

Does this resolve to the Pod IP address?

Next up basic TCP connectivity, on the client run

nc -vz cb-example-0000.cb-example.couchbase.svc 8091

Does that connection succeed?
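
If you want to check every node in one go, a small loop from inside the client pod does both tests (a sketch, assuming pods named cb-example-0000 through cb-example-0003 exist in the couchbase namespace):

for i in 0000 0001 0002 0003; do
  host="cb-example-${i}.cb-example.couchbase.svc.cluster.local"
  # DNS: should print the pod IP
  dig +short "${host}" A
  # TCP: should report the connection as open/succeeded
  nc -vz "${host}" 8091
done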

If it doesn’t, this is where things get interesting :slight_smile: We need to determine where the packets are going missing. Exec into the client in two different shells and monitor two different interfaces (do the same on cb-example-0000 too). In one shell run

tcpdump -i eth0 port 8091 -n -v -A

and in the other

tcpdump -i lo port 8091 -n -v -A

Where do the packets disappear when you run that netcat command again?
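
The same captures can also be started from outside the pods with kubectl exec, one terminal per capture (a sketch; <client-pod> and <app-namespace> are placeholders for your client pod, and tcpdump has to be available in the containers):

# terminal 1: client pod, pod network interface
kubectl exec -ti <client-pod> -n <app-namespace> -- tcpdump -i eth0 port 8091 -n -v -A
# terminal 2: client pod, loopback (traffic after Envoy's redirect)
kubectl exec -ti <client-pod> -n <app-namespace> -- tcpdump -i lo port 8091 -n -v -A
# terminal 3: the Couchbase pod on the receiving end
kubectl exec -ti cb-example-0000 -n couchbase -- tcpdump -i eth0 port 8091 -n -v -A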

I am trying to get back to working through my issues again.

I was able to resolve the dns for each of the 4 nodes in the couchbase cluster from my client app.

When I run nc against each of the 4 nodes I get no error; it just responds with the IP and ‘open’.

I added tcpdump to my client app pod and ran the first tcpdump.
I don’t see tcpdump installed on the couchbase pods, and I can’t figure out how to add tcpdump to that image. Would you be able to help me work out how to get it added to the couchbase pod?

Sorry for the delay, been out and about. You should be able to kubectl exec -ti <pod> bash and then apt-get update && apt-get -y install tcpdump. If you are using OpenShift then that’s impossible.
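
Put together as a single command it would look something like this (a sketch, assuming the image is Debian/Ubuntu based and the container is named couchbase-server as in your describe output):

kubectl exec -ti cb-example-0003 -n couchbase -c couchbase-server -- bash -c 'apt-get update && apt-get install -y tcpdump'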

I’ve actually got the next release working with Istio 1.1, but that’s a long way off. Seemingly you need to reference each port (and Couchbase Server has about 230 of them!!) in a Service and it’ll work. Basically, when you submit a CouchbaseCluster resource, also create a Service in the same namespace that selects every pod that will be created, and add a port entry for every one of the ports listed here: https://docs.couchbase.com/server/6.0/install/install-ports.html. A rough sketch is below.
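
Something like the following, applied with a heredoc, is roughly what that means. This is only a sketch: the Service name cb-example-all-ports is made up, the selector labels are taken from the pod labels in the describe output earlier in the thread, and only a handful of ports are shown, so you would extend the list with every port from the page above.

kubectl apply -n couchbase -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: cb-example-all-ports   # hypothetical name
spec:
  selector:
    app: couchbase
    couchbase_cluster: cb-example
  ports:
  - name: tcp-admin            # Istio 1.0/1.1 infer the protocol from the name prefix; tcp- is the safe default
    port: 8091
  - name: tcp-views
    port: 8092
  - name: tcp-query
    port: 8093
  - name: tcp-fts
    port: 8094
  - name: tcp-data
    port: 11210
  # ...add an entry for every remaining port from the install-ports page
EOF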