cbimport in a K8s node

Hi CB Team,
I did a single-node data load using cbimport against Couchbase running in a simple Docker container, and it was successful.
Now we have a 4-node cluster on a K8s platform; I can get the list of NodePorts with 'kubectl get svc' and the list of IPs from 'kubectl describe pods | grep IP'.
My problem is that the same cbimport doesn't work the way it did against the single node.
What is the recommended approach for using cbimport in K8s? Currently I am stuck on the nodeport IP definition (whether I should use only this IP or include the Kubernetes port as well). And how can I use multiple Couchbase nodes in the cluster to do the cbimport?
This is the simple cbimport command I am running:

> cbimport csv -c couchbase://<nodeportIP> -u Administrator -p password -b <bucket name> -d file://<csvname> -g #MONO_INCR# -t 10

Thanks

Mostly correct :smiley: We add an SRV record for you to do service discovery, so you just need to change your connection string to read couchbase://${clustername}-srv.${namespace}.svc. Full documentation is provided here: https://docs.couchbase.com/operator/1.2/couchbase-clients.html
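
For example, the cbimport invocation from your first post would become something like the below (same placeholders as your command; this assumes you run it from inside the same Kubernetes cluster, since the SRV record only resolves there):

cbimport csv -c couchbase://${clustername}-srv.${namespace}.svc -u Administrator -p password -b <bucket name> -d file://<csvname> -g #MONO_INCR# -t 10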

Thanks Simon. I still need a little clarity on the cluster connection definition.

This is the cluster I am in:
kubectl get cbc

NAME           AGE
tccb-cluster   28d

This is what shows in the Web Admin Console:
Name: tccb-cluster-0000.tccb-cluster.bi-cb.svc
Name: tccb-cluster-0001.tccb-cluster.bi-cb.svc
Name: tccb-cluster-0002.tccb-cluster.bi-cb.svc

These are the namespaces:

SELECT * FROM system:namespaces

id        name
default   default

I used the below and it doesn't work:
cluster = Cluster('couchbase://tccb-cluster-srv.bi-cb.svc.cluster.local')

I am not sure, from the documentation, what format(cluster_name) means:

Cluster('couchbase://{}-srv.default.svc.cluster.local'.format(cluster_name))

Can you help point out what I am doing wrong?
Also, can I use the same connection for Python SDK queries and N1QL as well?

thanks

I am finally able to connect using the below… YAY…

cluster = Cluster('couchbase://tccb-cluster-0000.tccb-cluster.bi-cb.svc')

The above works if my client/app pods are in the same K8s cluster…
But my question is: if this pod is killed, then 0000 will no longer be assigned by K8s and it will spin up something with a name like 0009; then my cluster connection will fail, correct?
How do I handle that situation?

Also, I am looking for a more generic cluster definition, for access from outside the pod and from a different K8s cluster as well. How can this be done?

With the above Cluster definition N1QL works, but for access from outside the K8s cluster I need to expose port 8093 for the query service, no?

@arungupta @geraldss can you please help with N1QL query access for the K8s infrastructure as well?

Thanks, and I appreciate it…
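
On the format(cluster_name) question: that is just Python string substitution, nothing Couchbase-specific; cluster_name is the NAME shown by 'kubectl get cbc'. Roughly, with the names from your own outputs (note your pods report the bi-cb namespace, not default):

cluster_name = 'tccb-cluster'  # NAME from 'kubectl get cbc'
# The documented example assumes the default namespace; substitute yours
connstr = 'couchbase://{}-srv.bi-cb.svc.cluster.local'.format(cluster_name)
# -> 'couchbase://tccb-cluster-srv.bi-cb.svc.cluster.local'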

Works fine for me, see the example below. First up, check that your DNS is actually working:

root@python-sdk:/# dig _couchbase._tcp.cb-example-srv.default.svc.cluster.local SRV

; <<>> DiG 9.10.3-P4-Ubuntu <<>> _couchbase._tcp.cb-example-srv.default.svc.cluster.local SRV
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40402
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;_couchbase._tcp.cb-example-srv.default.svc.cluster.local. IN SRV

;; ANSWER SECTION:
_couchbase._tcp.cb-example-srv.default.svc.cluster.local. 30 IN	SRV 0 33 11210 172-17-0-7.cb-example-srv.default.svc.cluster.local.
_couchbase._tcp.cb-example-srv.default.svc.cluster.local. 30 IN	SRV 0 33 11210 172-17-0-8.cb-example-srv.default.svc.cluster.local.
_couchbase._tcp.cb-example-srv.default.svc.cluster.local. 30 IN	SRV 0 33 11210 172-17-0-9.cb-example-srv.default.svc.cluster.local.

;; ADDITIONAL SECTION:
172-17-0-8.cb-example-srv.default.svc.cluster.local. 30	IN A 172.17.0.8
172-17-0-9.cb-example-srv.default.svc.cluster.local. 30	IN A 172.17.0.9
172-17-0-7.cb-example-srv.default.svc.cluster.local. 30	IN A 172.17.0.7

;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Mon Oct 14 11:52:35 UTC 2019
;; MSG SIZE  rcvd: 667

Then compare your client with my quick test:

root@python-sdk:/# cat test 
#!/usr/bin/env python

from couchbase.cluster import Cluster
from couchbase.cluster import PasswordAuthenticator

# Connect via the operator's SRV record rather than an individual pod name
cluster = Cluster('couchbase://cb-example-srv.default.svc.cluster.local')
authenticator = PasswordAuthenticator('Administrator', 'password')
cluster.authenticate(authenticator)

# Open the bucket and write a test document
cb = cluster.open_bucket('default')
cb.upsert('u:king_arthur', {'name': 'Arthur', 'email': 'kingarthur@couchbase.com', 'interests': ['Holy Grail', 'African Swallows']})
root@python-sdk:/# ./test
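
Since you also asked about N1QL: the same connection serves queries too, because the SDK discovers the query service (port 8093) from the cluster map. A minimal sketch with the 2.x SDK, appended to the script above (this assumes a primary index exists on the bucket):

from couchbase.n1ql import N1QLQuery

# The SDK routes this to whichever nodes run the query service
for row in cb.n1ql_query(N1QLQuery('SELECT name FROM `default` WHERE name = $1', 'Arthur')):
    print(row)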

Hi
I think it's not working for my client.
Do you mind sending me the details of how the container service looks for you?
Does it appear like the below as a headless service when you execute 'kubectl get svc'?
cb-example-srv.default.svc.cluster.local

In my case, this is what shows in 'kubectl get svc':
NAME                  TYPE        CLUSTER-IP   PORT(S)
tccbadc-cluster-srv   ClusterIP   None         11210/TCP,11207/TCP

And inside my client pod I am using dig as below, and I assume this was not successful. So what am I missing here?

root@client-middleware-9fb5c9657-nkrrc:/app# dig _couchbase._tcp.tccbadc-cluster-srv.default.svc.cluster.local SRV

; <<>> DiG 9.10.3-P4-Debian <<>> _couchbase._tcp.tccbadc-cluster-srv.default.svc.cluster.local SRV
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 40121
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;_couchbase._tcp.tccbadc-cluster-srv.default.svc.cluster.local. IN SRV

;; AUTHORITY SECTION:
cluster.local. 60 IN SOA ns.dns.cluster.local. hostmaster.cluster.local. 1571176800 28800 7200 604800 60

;; Query time: 0 msec
;; SERVER: 10.43.0.10#53(10.43.0.10)
;; WHEN: Tue Oct 15 22:29:56 UTC 2019
;; MSG SIZE rcvd: 172

All my cluster's servers appear as below in the web Admin Console:

Name: tccbadc-cluster-0000.tccbadc-cluster.bi-cb.svc
then 0001, 0002, etc.

Here's my service, nothing out of the ordinary:

kubectl get svc cb-example-srv
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)               AGE
cb-example-srv   ClusterIP   None         <none>        11210/TCP,11207/TCP   56s

Check your /etc/resolv.conf:

root@cb-example-0000:/# cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

Here you can see the base domain is definitely cluster.local for this particular cluster. If nothing seems out of the ordinary, I'd consult whoever provides your Kubernetes DNS to check that SRV records are in fact being created for you as they should be.
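
One other thing worth checking, given your outputs: your pods report names like tccbadc-cluster-0000.tccbadc-cluster.bi-cb.svc, which suggests the cluster is deployed in the bi-cb namespace rather than default, so (assuming that is the case) the SRV lookup and connection string would need to use that namespace:

dig _couchbase._tcp.tccbadc-cluster-srv.bi-cb.svc.cluster.local SRV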