cbbackupmgr restore - error found in backup.log

Dear Couchbase Team,

I am currently restoring a backup using cbbackupmgr. The backup completed successfully, but when I try to restore it, I find the following error in backup.log.

This is the command I use to run the restore:

cbbackupmgr restore -a /opt/couchbase/var/backup_bucket -r repo002 -c couchbase://10.171.212.167:8091 -u Administrator -p xxxxxxx -t 6 --disable-eventing

And below is the error found in the log:

2020-05-08T06:48:23.855+00:00 ERRO: Timeout occurred (180 sec) on operation -- couchbase.(*Callbacks).handleError() at callbacks.go:117
2020-05-08T06:48:23.855+00:00 ERRO: Timeout occurred (180 sec) on operation -- couchbase.(*Callbacks).handleError() at callbacks.go:117
2020-05-08T08:44:11.111+00:00 [INFO][FDB] Forestdb closed database file /opt/couchbase/var/backup_bucket/repo002/2020-04-28T07_33_49.123386326Z/Care-fa2c31a7359376fe370ab533ddb9484a/data/shard_0.fdb
2020-05-08T08:44:11.136+00:00 [INFO][FDB] Forestdb closed database file /opt/couchbase/var/backup_bucket/repo002/2020-04-28T07_33_49.123386326Z/Care-fa2c31a7359376fe370ab533ddb9484a/data/shard_1.fdb
2020-05-08T08:44:11.201+00:00 [INFO][FDB] Forestdb closed database file /opt/couchbase/var/backup_bucket/repo002/2020-04-28T07_33_49.123386326Z/Care-fa2c31a7359376fe370ab533ddb9484a/data/shard_2.fdb
2020-05-08T08:44:11.375+00:00 [INFO][FDB] Forestdb closed database file /opt/couchbase/var/backup_bucket/repo002/2020-04-28T07_33_49.123386326Z/Care-fa2c31a7359376fe370ab533ddb9484a/data/shard_3.fdb
2020-05-08T08:44:11.439+00:00 [INFO][FDB] Forestdb closed database file /opt/couchbase/var/backup_bucket/repo002/2020-04-28T07_33_49.123386326Z/Care-fa2c31a7359376fe370ab533ddb9484a/data/shard_4.fdb
2020-05-08T08:44:11.577+00:00 [INFO][FDB] Forestdb closed database file /opt/couchbase/var/backup_bucket/repo002/2020-04-28T07_33_49.123386326Z/Care-fa2c31a7359376fe370ab533ddb9484a/data/shard_5.fdb
2020-05-08T08:44:12.293+00:00 ERRO: Not all data was sent to Couchbase due to connectivity issues. Check to make sure there were no server side failures. See the logs for for more details on what wasn't sent. -- plan.(*data).execute() at data.go:89
2020-05-08T08:44:12.936+00:00 (Plan) Transfer plan failed due to error Not all data was sent to Couchbase due to connectivity issues. Check to make sure there were no server side failures. See the logs for for more details on what wasn't sent.
2020-05-08T08:44:12.936+00:00 (Cmd) Error restoring cluster: Not all data was sent to Couchbase due to connectivity issues. Check to make sure there were no server side failures. See the logs for for more details on what wasn't sent.

Couchbase is running as a container in OCP (OpenShift Container Platform), and I'm running the backup/restore from another container.
I've checked the connectivity as suggested in the log, but I don't see any connectivity issue within Couchbase or from the container I use for the restore.

Please help me resolve this issue.

Regards,
Hartono

Hi Hartono,
Thanks for using our product.
I have some questions about your issue:

  1. What is your Couchbase Server version?
  2. What is your container operating system?
  3. Is the number of containers the same in the backup and restore clusters?

Based on that information, I will try to reproduce the issue on my system and debug it.
Thanks

Hi Thuan,

Thanks for your response. Below are my answers:

  1. Couchbase version: 6.0.1 Build 2037
  2. Container operating system: Linux
  3. I have a Couchbase cluster with 7 servers/containers (2 data nodes, 2 index nodes, 2 query nodes, and 1 eventing node). I back up and restore to the same Couchbase cluster/servers, but I run the backup/restore from a separate container.

Thanks

Hi Couchbase Team / Thuan,

Any update on this?

Thanks

Hello @hartono.mochtar,

The error that caused the restore to fail was a timeout:

2020-05-08T06:48:23.855+00:00 ERRO: Timeout occurred (180 sec) on operation -- couchbase.(*Callbacks).handleError() at callbacks.go:117

There can be a number of different causes for timeouts, including:

  • network issues
  • undersized clusters
  • hardware issues
  • etc.
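
To rule out network issues specifically, it is worth confirming from the restore container that both the cluster's REST port and its data port are reachable. A minimal check, using the address from your restore command and assuming the default ports (8091 for REST, 11210 for data; substitute your real credentials):

curl -u Administrator:password http://10.171.212.167:8091/pools/default
nc -zv 10.171.212.167 11210

If either of these fails or hangs from inside the restore container, the timeout is most likely a networking problem between the containers.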

To debug this further, we would need to look at the cluster logs and the backup logs.
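
On each cluster node, the cbcollect_info tool that ships with Couchbase Server will gather the server-side logs into a single zip; assuming the default Linux install path inside the containers:

/opt/couchbase/bin/cbcollect_info /tmp/node-logs.zip

The backup-side log is the backup.log you already quoted, which normally lives under the logs directory of your archive (here, /opt/couchbase/var/backup_bucket/logs/backup.log). If you can attach both, we can narrow down which of the causes above applies.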