Thank you, thuan.
I got an error:
Usage: cbbackup [options] source backup_dir
Online backup of a couchbase cluster or server node.
Take a full backup of a cluster:
  cbbackup http://HOST:8091 /backup-42
Take a full backup for a single node:
  cbbackup couchbase://HOST:8091 /backup-43 --single-node
  -h, --help            show this help message and exit
  -b BUCKET_SOURCE, --bucket-source=BUCKET_SOURCE
                        single bucket from source to backup
  --single-node         use a single server node from the source only, not
                        all server nodes from the entire cluster; this single
                        server node is defined by the source URL
  -i ID, --id=ID        Transfer only items that match a vbucketID
  -k KEY, --key=KEY     Transfer only items with keys that match a regexp
  -n, --dry-run         No actual transfer; just validate parameters, files,
                        connectivity and configurations
  -u USERNAME, --username=USERNAME
                        REST username for source cluster or server node
  -p PASSWORD, --password=PASSWORD
                        REST password for source cluster or server node
  -s, --ssl             Transfer data with SSL enabled
  -t THREADS, --threads=THREADS
                        Number of concurrent worker threads performing the
                        transfer
  -v, --verbose         verbose logging; more -v's provide more verbosity.
                        Max is -vvv
  --silent              Reduce logging verbosity to only include errors
  -x EXTRA, --extra=EXTRA
                        Provide extra, uncommon config parameters;
                        comma-separated key=val(,key=val)* pairs
Available extra config parameters (-x):
  backoff_cap=10 (Max backoff time during rebalance period)
  batch_max_bytes=400000 (Transfer this # of bytes per batch)
  batch_max_size=1000 (Transfer this # of documents per batch)
  cbb_max_mb=100000 (Split backup file on destination cluster if it exceeds MB)
  conflict_resolve=1 (By default, enable conflict resolution.)
  data_only=0 (For value 1, only transfer data from a backup file or cluster)
  dcp_consumer_queue_length=1000 (A DCP client needs a queue for incoming
    documents/messages. A large length is more efficient, but memory
    proportional to length*avg. doc size. Below length 150, performance
    degrades significantly.)
  design_doc_only=0 (For value 1, transfer design documents only from a
    backup file or cluster)
  flow_control=1 (For value 0, disable flow control to improve throughput)
  max_retry=10 (Max number of sequential retries if transfer fails)
  mcd_compatible=1 (For value 0, display extended fields for stdout output.)
  nmv_retry=1 (0 or 1, where 1 retries transfer after a NOT_MY_VBUCKET message)
  recv_min_bytes=4096 (Amount of bytes for every TCP/IP call transferred)
  rehash=0 (For value 1, rehash the partition id's of each item; this is
    needed when transferring data between clusters with different number of
    partitions, such as when transferring data from an OSX server to a
    non-OSX cluster)
  report=5 (Number batches transferred before updating progress bar in console)
  report_full=2000 (Number batches transferred before emitting progress
    information in console)
  seqno=0 (By default, start seqno from beginning.)
  try_xwm=1 (Transfer documents with metadata. 0 should only be used if you
    transfer from 1.8.x to 1.8.x)
  uncompress=0 (For value 1, restore data in uncompressed mode)
Error: please provide both a source and a backup_dir
I am using CB 5.1.
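For what it's worth, that error usually means cbbackup was run without both positional arguments (the source URL and the backup directory), so it only printed the usage text. A minimal sketch of the invocation I believe it expects, with HOST, Administrator, and password as placeholders for your own cluster address and REST credentials:

```shell
# Hypothetical example; replace HOST and the credentials with your own values.
# Both the source and the backup directory must be given as positional
# arguments, before or after any flags.
cbbackup http://HOST:8091 /backup-42 -u Administrator -p password
```

Adding -n (--dry-run) first would validate the parameters and connectivity without transferring any data.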