I’m using Couchbase Server Enterprise Edition 7.6.2 and I have a problem with secondary indexes.
As I understand it, Couchbase cleans up indexes automatically, and I only have a maximum of 500 MB of data across all buckets.
Somehow my secondary index folder is 3.7 GB:
root@mycbserver:/opt/couchbase/var/lib/couchbase/data# du -cha --max-depth=1
3.7G    ./@2i
I need to clean this up because it takes up so much disk space. What should I do?
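In case it matters, here is a per-index breakdown I can run (I’m assuming each index keeps its files in its own subdirectory under @2i; directory names may vary by storage engine and version):

# list the biggest index storage directories under @2i
du -sh ./@2i/* | sort -hr | head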
AFAIK, indexes do not contain unnecessary data (such as index entries for documents that have been deleted). There is no “compact” operation or anything like that for indexes. If you want to test that theory, you could record the size of an index, then drop and recreate it, and check whether the new size is smaller.
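If you want to script that test, something like this should work through cbq (host, credentials, and the bucket, index, and field names are just placeholders for yours):

$ /opt/couchbase/bin/cbq -e http://localhost:8093 -u Administrator -p password
cbq> DROP INDEX `myBucket`.`idx_field`;
cbq> CREATE INDEX `idx_field` ON `myBucket`(`field`);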
The Couchbase documentation says:
“Flush buckets that have indexes: flushing a bucket deletes all data in a bucket; and even if there are pending updates not yet processed, flushing causes all indexes to drop their own data.”
But our developers clean data in the buckets with “DELETE FROM bucketName”. Maybe that is the reason the index data is not cleaned up?
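For reference, their cleanup looks roughly like this (the bucket name is a placeholder; as far as I know, a full-bucket N1QL DELETE like this also needs a primary index to scan the documents):

cbq> DELETE FROM `bucketName`;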
I was right: the problem was solved by using the Flush button on the bucket. Our developers were using DELETE FROM bucketName, which is why the bucket indexes weren’t clearing old data and kept growing.
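For anyone who wants to script it instead of using the UI button, the same flush should be possible with couchbase-cli (flush has to be enabled on the bucket first; host, credentials, and bucket name below are placeholders):

# flush all documents from the bucket; indexes drop their data too
/opt/couchbase/bin/couchbase-cli bucket-flush -c localhost:8091 -u Administrator -p password --bucket bucketName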