Hi CB Team
I did a single-node data load using cbimport against Couchbase in a simple Docker container, and it was successful.
Now we have a 4-node cluster on a K8s platform. I can get the list of NodePorts with 'kubectl get svc' and the list of pod IPs with 'kubectl describe pods | grep IP'.
My problem is that the same cbimport does not work the way it did against the single node.
What is the recommended approach to using cbimport in K8s? Currently I am stuck on the NodePort IP definition (whether I need to use only this IP or include the Kubernetes port as well). And how can I use multiple Couchbase nodes in the cluster to do a cbimport?
This is the simple cbimport command I am running:
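For reference, a minimal sketch of the kind of invocation I mean. The host, credentials, bucket, and file names below are placeholders, not my real values:

```shell
# Hypothetical single-node style invocation; host, credentials, bucket,
# and data file are all placeholders.
cbimport json \
  --cluster couchbase://10.1.2.3:8091 \
  --username Administrator \
  --password password \
  --bucket travel-sample \
  --dataset file://data.json \
  --format lines \
  --generate-key key::%id%
```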
The above works if my client/app pods are in the same K8s cluster…
But my question is: if this pod is killed, then 0000 will no longer be assigned by K8s, and it will spin up something with the name 0009. Then my cluster connection will fail, correct?
How do I handle that situation?
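One approach I have been considering: point cbimport at the stable service DNS name instead of an individual pod IP, so the connection string survives pod restarts. This is a sketch assuming the headless service is named 'tccbadc-cluster-srv' in namespace 'default'; the other values are placeholders:

```shell
# Hypothetical: connect via the headless service DNS name, which stays stable
# across pod restarts, instead of a specific pod IP. Namespace 'default',
# credentials, bucket, and data file are assumptions/placeholders.
cbimport json \
  --cluster couchbase://tccbadc-cluster-srv.default.svc.cluster.local \
  --username Administrator \
  --password password \
  --bucket travel-sample \
  --dataset file://data.json \
  --format lines \
  --generate-key key::%id%
```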
Also, I am looking for a more generic cluster definition that works outside the pod, including access from a different K8s cluster. How can this be done?
With the above cluster definition N1QL works, but for access from outside the K8s cluster I need to expose port 8093 for the query service, no?
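Something like the following is what I have in mind for exposing the query service, sketched as a NodePort service. The service name and the selector label are assumptions and depend on how the Couchbase pods are actually labeled:

```shell
# Hypothetical: expose the query service (8093) outside the cluster via a
# NodePort service. The name 'cb-query-external' and the selector label
# 'app: couchbase' are assumptions, not taken from my real deployment.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: cb-query-external
spec:
  type: NodePort
  selector:
    app: couchbase
  ports:
  - name: n1ql
    port: 8093
    targetPort: 8093
EOF
```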
I don't think it is working for my client:
Do you mind sending me the details of how the container service looks for you?
Does it appear like the one below, as a headless service, when you execute 'kubectl get svc'?
For my case, this is what 'kubectl get svc' shows:
NAME                  TYPE        CLUSTER-IP   PORT(S)
tccbadc-cluster-srv   ClusterIP   None         11210/TCP,11207/TCP
And inside my client pod I am using dig as below, and I assume it is not succeeding. So what am I missing here?
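The lookup I am attempting looks like the following. The '_couchbase._tcp.' SRV prefix is what the Couchbase SDKs query for non-TLS connections; the namespace 'default' is an assumption:

```shell
# SRV lookup against the headless service from inside the client pod;
# namespace 'default' is an assumption.
dig +short SRV _couchbase._tcp.tccbadc-cluster-srv.default.svc.cluster.local
```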
Here you can see the base domain is definitely cluster.local for this particular cluster. If nothing seems out of the ordinary, I'd consult whoever provides your Kubernetes DNS to check that SRV records are in fact being created for you as they should be.