How does the REST API deal with creating buckets for big data sets that can't fit in one bucket?

Hey … My question is: if we have a big data set that we need to insert into Couchbase and the bucket size is not enough … and I am using the REST API to make that happen … is there a way I can do something like …

if (bucket has reached capacity)
    createBucket()   // create another bucket dynamically

and then insert the data into this newly created bucket … What are the functions in the API for that?

Also, what is the parameter in cbstats that gives me reads/sec on a particular view?


Can you tell me which REST command you are using?
I guess this is the command to create/edit a bucket:
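As a sketch of what that call looks like from code: Couchbase Server's administration REST API creates a bucket with a POST to /pools/default/buckets on port 8091. The parameter names below (ramQuotaMB, authType, replicaNumber) are my recollection of that API, so verify them against the docs for your version; this snippet only builds the request rather than sending it.

```python
# Sketch: build a Couchbase "create bucket" REST request.
# Endpoint and parameter names are assumptions from the Couchbase
# Server docs -- double-check them for your server version.
from urllib.parse import urlencode

def build_create_bucket_request(host, name, ram_quota_mb):
    """Return (url, body) for POST /pools/default/buckets."""
    url = "http://%s:8091/pools/default/buckets" % host
    body = urlencode({
        "name": name,                # bucket name
        "ramQuotaMB": ram_quota_mb,  # per-node RAM quota in MB
        "authType": "sasl",
        "replicaNumber": 1,
    })
    return url, body

url, body = build_create_bucket_request("localhost", "mybucket", 200)
# To actually send it against a live cluster, POST `body` to `url`
# with HTTP Basic auth (e.g. via urllib.request or curl).
```

Sending the same request with curl would just be a POST of that form body with the administrator credentials.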

However, here is the answer regarding sizing.

So, first answer: do not use buckets to manage space. The data directories are the same for all buckets (you can only choose one directory for data and another for indexes).

So the limit is the size of your disk; remember that you distribute the data across many nodes, so we can say that the space for your database is the sum of the free space on all nodes. This means you should not create a new bucket to "get" more space.

It is also important to remember that Couchbase has two types of files: data files and index files (and some replicas).

All data in these files is managed using an append-only approach: the files grow, and then they are compacted. You can find more information about this here:
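The append-only idea can be illustrated with a toy sketch (this is not Couchbase's actual on-disk format): every write appends a new record, so the file grows even when a key is overwritten, and compaction rewrites it keeping only the latest version of each key.

```python
# Toy illustration of append-only storage plus compaction
# (NOT Couchbase's real file format -- just the concept).

log = []  # each write appends (key, value); nothing is updated in place

def write(key, value):
    log.append((key, value))

def compact(entries):
    """Rewrite the log, keeping only the latest version of each key."""
    latest = {}
    for key, value in entries:  # later entries win
        latest[key] = value
    return list(latest.items())

write("user:1", "alice")
write("user:1", "alicia")  # an overwrite still appends -> the file grows
write("user:2", "bob")

compacted = compact(log)  # shorter than the raw log, same live data
```

This is why disk usage fluctuates: the log grows between compactions and shrinks afterward.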
Also, when you are creating a bucket you have to set the RAM quota, to limit the size of the cache that is used to store all metadata and to cache the values. Once again, this is distributed over all nodes of the cluster: for example, if you have a 5-node cluster and you set a 2GB RAM quota for your bucket, you have 10GB of RAM available for this bucket. The space is managed automatically by Couchbase, which removes data from RAM when necessary (after the data has been persisted to disk).
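The arithmetic in that example, spelled out: the quota is applied per node, so the bucket's cluster-wide cache is the per-node quota times the number of nodes.

```python
# Cluster-wide RAM available to a bucket = per-node quota * node count
nodes = 5
ram_quota_gb_per_node = 2
cluster_ram_gb = ram_quota_gb_per_node * nodes  # 10 GB, as in the example above
```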