Hi,
cbbackup fails when run from a remote server, even though that server has full network connectivity to the Couchbase cluster running on Kubernetes; telnet to all the exposed ports succeeds from it.
- Couchbase Server version: community-6.0.0
- Couchbase running on Kubernetes 1.17.
The two scenarios are described below.
Exposed port details:
kubectl get svc |grep couchbase
10.96.255.151 8091:61006/TCP,8092:61070/TCP,8093:61046/TCP,8094:61107/TCP,11210:61059/TCP
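For context on why the NodePorts alone may not be enough: cbbackup does not keep using the host:port given on the command line; it asks the cluster's REST API for the per-node service map, and then dials the addresses the cluster advertises. A quick sketch (my own diagnostic, not part of cbbackup; it assumes the 6.0 response shape of /pools/default/nodeServices with a top-level "nodesExt" list) to see which data-service endpoints the cluster is advertising:

```python
# Hypothetical diagnostic: list the (hostname, kv port) pairs the cluster
# advertises via /pools/default/nodeServices. These are the addresses a
# client such as cbbackup will actually try to connect to for DCP/memcached.
import json
import urllib.request


def kv_endpoints(node_services: dict):
    """Extract (hostname, kv_port) pairs from a nodeServices response body.

    Assumes the shape {"nodesExt": [{"hostname": ..., "services":
    {"kv": 11210, ...}}, ...]}; "hostname" can be absent for the node
    that served the request, so a placeholder is used in that case.
    """
    return [
        (node.get("hostname", "<this node>"),
         node.get("services", {}).get("kv"))
        for node in node_services.get("nodesExt", [])
    ]


def fetch_node_services(base_url: str, user: str, password: str) -> dict:
    """Fetch /pools/default/nodeServices with HTTP basic auth."""
    pw_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    pw_mgr.add_password(None, base_url, user, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(pw_mgr))
    with opener.open(base_url + "/pools/default/nodeServices") as resp:
        return json.load(resp)


# Example usage against this cluster (placeholder address from the post):
# print(kv_endpoints(fetch_node_services("http://{IP}:61006",
#                                        "admin", "password")))
```

If this prints an in-cluster address (e.g. a pod IP with port 11210), the remote server has to be able to reach that address, regardless of which NodePorts are open.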
The cbbackup command used in both scenarios:
cbbackup -m full http://{IP}:61006 /backup-1 -u admin -p password -vvv
Scenario 1: cbbackup run inside the Couchbase pod - the backup succeeds.
mt cbbackup...
mt source : http://{IP}:61006
mt sink : /backup-1
mt source_class: <class 'pump_dcp.DCPStreamSource'>
mt Starting new HTTP connection (1): {IP}
mt "GET /pools/default/buckets HTTP/1.1" 200 19969
mt sink_class: <class 'pump_bfd.BFDSink'>
mt source_buckets: bucket-1
mt bucket: bucket-1
mt source_nodes: 192.168.72.10:8091
mt enqueueing node: 192.168.72.10:8091
mt rest_request: <ud>admin</ud>@{IP}:61006/pools/default/buckets/bucket-1/stats/curr_items; reason: total_msgs
w0 node: 192.168.72.10:8091
w0 sink_bucket: bucket-1
w0 DCPStreamSource connecting mc: 192.168.72.10:11210
mt rest_request: <ud>admin</ud>@{IP}:61006/pools/default/buckets/bucket-1/stats/vb_active_resident_items_ratio; reason: total_msgs
s0 create_db: /backup-1/2020-07-28T082630Z/2020-07-28T082630Z-full/bucket-bucket-1/node-192.168.72.10%3A8091/data-0000.cbb
s0 connect_db: /backup-1/2020-07-28T082630Z/2020-07-28T082630Z-full/bucket-bucket-1/node-192.168.72.10%3A8091/data-0000.cbb
bucket: bucket-1, msgs transferred...
mt rest_request: <ud>admin</ud>@{IP}:61006/pools/default/buckets/bucket-1/ddocs; reason: provide_design
mt Starting new HTTP connection (1): {IP}
mt "GET /pools/default/nodeServices HTTP/1.1" 200 268
mt Starting new HTTP connection (1): 127.0.0.1
mt "GET /getIndexMetadata?bucket=bucket-1 HTTP/1.1" 200 788
mt Starting new HTTP connection (1): {IP}
mt "GET /pools/default/nodeServices HTTP/1.1" 200 268
mt Starting new HTTP connection (1): 127.0.0.1
mt "GET /api/index HTTP/1.1" 200 None
The backup completes successfully here.
Scenario 2: cbbackup run from the remote server - the backup fails.
mt cbbackup...
mt source : http://{IP}:61006
mt sink : /backup-1
mt source_class: <class 'pump_dcp.DCPStreamSource'>
mt Starting new HTTP connection (1): {IP}
mt "GET /pools/default/buckets HTTP/1.1" 200 19966
mt sink_class: <class 'pump_bfd.BFDSink'>
mt source_buckets: bucket-1
mt bucket: bucket-1
mt source_nodes: 192.168.72.10:8091
mt enqueueing node: 192.168.72.10:8091
mt rest_request: <ud>admin</ud>@{IP}:61006/pools/default/buckets/bucket-1/stats/curr_items; reason: total_msgs
w0 node: 192.168.72.10:8091
w0 sink_bucket: bucket-1
w0 DCPStreamSource connecting mc: 192.168.72.10:11210
s0 create_db: /backup-1/2020-07-28T083342Z/2020-07-28T083342Z-full/bucket-bucket-1/node-192.168.72.10%3A8091/data-0000.cbb
s0 connect_db: /backup-1/2020-07-28T083342Z/2020-07-28T083342Z-full/bucket-bucket-1/node-192.168.72.10%3A8091/data-0000.cbb
mt rest_request: <ud>admin</ud>@{IP}:61006/pools/default/buckets/bucket-1/stats/vb_active_resident_items_ratio; reason: total_msgs
w0 pump (http://{IP}:61006(bucket-1@192.168.72.10:8091)->/backup-1(bucket-1@192.168.72.10:8091)) done.
w0 source : http://{IP}:61006(bucket-1@192.168.72.10:8091)
w0 sink : /backup-1(bucket-1@192.168.72.10:8091)
w0 : total | last | per sec
w0 node: 192.168.72.10:8091, done; rv: (u'error: SASL auth socket error: 192.168.72.10:11210, timed out', None)
(u'error: SASL auth socket error: 192.168.72.10:11210, timed out', None)
The backup fails with "SASL auth socket error: 192.168.72.10:11210, timed out".
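The timeout names 192.168.72.10:11210, i.e. the node's internal address rather than the NodePort, so the question is whether that address is reachable from the remote server at all. A minimal TCP reachability check (a hypothetical helper of mine, not part of cbbackup) to run from the remote backup server:

```python
# Minimal TCP reachability check. The worker in the log above dials
# 192.168.72.10:11210 (the address the cluster advertises), not the
# NodePort, so test that exact endpoint from the remote server.
import socket


def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unresolvable, network unreachable
        return False


# Run from the remote backup server; substitute the real node IP for {IP}:
# is_reachable("192.168.72.10", 11210)   # address cbbackup actually dials
# is_reachable("{IP}", 61059)            # NodePort mapped to 11210
```

If the first call returns False while the second returns True, the remote server can only reach the NodePorts, and cbbackup's DCP connection to the advertised internal address will time out exactly as in the log.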