Index size on disk >> index data size

Hi, I just started playing with Couchbase Community 7.1.1 (Docker)

After I inserted just under 50M docs into a bucket, the disk consumption was 21 GB, which was fine.
I was surprised to find that one of the GSIs, on a text field shorter than 20 characters, consumed 260 GB on disk while its reported data size was just 11 GB. What can I do to use the disk more efficiently?

After I dropped the index and re-created it, the disk consumption dropped to 30 GB (with the same data size of 11 GB). It's hard to decide whether I can live with 3x higher disk usage than the actual data size, but it's an improvement over the >20x bloat I observed previously.
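For reference, the drop-and-recreate step described above can be scripted against the Query service REST endpoint (port 8093) instead of the UI. This is only a sketch: the bucket name (mybucket), index name (idx_text), and field name (textField) are placeholders, and credentials/host need adjusting:

```shell
# Hypothetical bucket/index/field names; adjust credentials and host.
# Drop the bloated index.
curl -s -u Administrator:password http://localhost:8093/query/service \
  --data-urlencode 'statement=DROP INDEX idx_text ON mybucket'

# Re-create it with a deferred build, then build it explicitly,
# so the index is repopulated in one bulk pass.
curl -s -u Administrator:password http://localhost:8093/query/service \
  --data-urlencode 'statement=CREATE INDEX idx_text ON mybucket(textField) WITH {"defer_build": true}'

curl -s -u Administrator:password http://localhost:8093/query/service \
  --data-urlencode 'statement=BUILD INDEX ON mybucket(idx_text)'
```

This still only works around the bloat rather than preventing it, which is why I'm asking about automatic compaction.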

Obviously I don't want to repeat this kind of manual compaction by hand. I'm not sure whether this REST API is relevant: Set GSI Settings | Couchbase Docs
I assume there's already sensible automation built into the indexing service; why wasn't it working for me?

Index storage in Enterprise Edition (which uses the Plasma storage engine) is much more efficient than in Community Edition (which uses the standard, ForestDB-based GSI storage).

Index size depends on the number of indexes, the size of the keys, and the distribution of the keys. Keep in mind that an index contains not only the keys, but also pointers to the documents. The Index Advisor (in the Query tab of the Web Console) can help determine which indexes are needed for your queries.

Settings for Auto Compaction for Community Edition can be found as described here:
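As a sketch of what adjusting those settings looks like over REST (the endpoint and parameter names are from the cluster-wide auto-compaction API, and the 30% threshold is just an example value; verify the field names against your server version first):

```shell
# Inspect the current auto-compaction settings to confirm field names.
curl -s -u Administrator:password \
  http://localhost:8091/settings/autoCompaction

# Example only: switch index compaction to full mode and trigger it
# once fragmentation exceeds 30%.
curl -s -u Administrator:password -X POST \
  http://localhost:8091/controller/setAutoCompaction \
  -d 'indexCompactionMode=full' \
  -d 'indexFragmentationThreshold[percentage]=30' \
  -d 'parallelDBAndViewCompaction=false'
```

On Community Edition the standard GSI storage relies on these fragmentation-threshold settings, so a lax threshold could explain the bloat you saw before the manual rebuild.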

