We are experiencing the same issue with index data. Here’s our cluster: 9 nodes with 256 GB of memory each, and each node has three 1 TB disks. My setup when configuring the server:
After the cluster was set up, we loaded some data, not much compared to the capacity of the cluster:
Then I created a primary index on metrics-metadata, which worked fine. But when I created a GSI on that bucket, right after the index reached 100% ready, I got a warning about disk space being full. One of the nodes (and only one) is 95% full on the index disk. Here’s the output of du:
/mnt/storage2/couchbase/data$ sudo du --max-depth=2 -h
4.0K ./.delete
932G ./@2i/metrics-metadata_metadata_type_5261382729763826900_0.index
932G ./@2i
2.3M ./@indexes/travel-sample
2.3M ./@indexes
932G .
While the data itself takes only 22 GB on the data disk, the index file is filling up the index disk. And I noticed it was still growing even after the index showed 100% ready. I had to drop the index; after a while, the index disk space was freed up.
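For what it’s worth, this is roughly how I was tracking the growth: sample the index directory size twice and print the delta. The path is from my cluster (shown in the du output above); adjust `INDEX_DIR` for yours.

```shell
#!/bin/sh
# INDEX_DIR is the @2i directory from my cluster; change it for your layout.
INDEX_DIR=${INDEX_DIR:-/mnt/storage2/couchbase/data/@2i}

dir_kb() {
  # du -sk prints "<kbytes><TAB><path>"; keep just the number
  du -sk "$1" 2>/dev/null | awk '{print $1}'
}

if [ -d "$INDEX_DIR" ]; then
  before=$(dir_kb "$INDEX_DIR")
  sleep 60
  after=$(dir_kb "$INDEX_DIR")
  echo "index dir grew by $((after - before)) KB in 60s"
else
  echo "no such directory: $INDEX_DIR" >&2
fi
```

Running this repeatedly after the build reported 100% is how I confirmed the file was still growing.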
Is it normal to have such a huge index file for such a small amount of data? And why is this happening on only one of the 9 nodes? Can we spread the index file across the cluster?
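To frame that last question: is the WITH clause on CREATE INDEX the intended way to control placement, e.g. pinning the index to specific nodes? Something like the following sketch (the node names and index/field names here are placeholders, not our actual statement):

```sql
-- Hypothetical example: pin the GSI to specific index-service nodes.
CREATE INDEX idx_metadata_type ON `metrics-metadata`(metadata_type)
WITH {"nodes": ["node1:8091", "node2:8091", "node3:8091"]};
```

If that is the right mechanism, would the index be partitioned across those nodes, or just replicated?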