Index Fragmentation


After spending some time testing how to efficiently use primary and secondary indexes, repeatedly adding and removing documents, I can see the index fragmentation percentage in the UI growing quite high - and the more it grows, the more performance seems to degrade.

For example, at the moment, the UI reports:

  • 95.7% index fragmentation
  • 9.15MB index data size
  • 249MB index disk size
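
The relationship between these numbers seems consistent with fragmentation being the fraction of the on-disk file that is no longer live index data. This is my own inference from the reported figures, not an official formula - a minimal sketch:

```python
# Rough sketch (my assumption, not an official Couchbase formula):
# fragmentation = share of the on-disk index file that is stale garbage,
# i.e. everything beyond the live index data.

def fragmentation_pct(data_size_mb: float, disk_size_mb: float) -> float:
    """Percentage of the on-disk index file that is not live data."""
    return 100.0 * (disk_size_mb - data_size_mb) / disk_size_mb

# The numbers from the UI above:
print(round(fragmentation_pct(9.15, 249.0), 1))  # ~96.3, close to the reported 95.7%
```
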

My expectation was that index compaction would kick in and sort this out - I can see data compaction happening, but not index compaction.

In the settings, index compaction is set to 30% - there is no tick next to it.

And the buckets do not override the default settings.

Am I missing something somewhere to make sure compaction happens?

Many thanks,

@lbertrand, which version are you using, 4.1 or 4.0?

I am using 4.0 at the moment, but as 4.1 is GA I will do my testing with 4.1.

Using 4.1 I can see the same kind of behaviour…

My test consists of loading a lot of documents into a bucket, testing the execution time of queries and different access patterns to this data, and then removing all of these documents…

What I can see is that the index fragmentation goes up all the time - mainly when I remove all the documents, and a little when I add them.

So after running the same test a few times, the UI reports 86.3% index fragmentation, 725MB index disk size, and 95.8MB index data size.

What I cannot see is the fragmentation ever going down, which would indicate that compaction has run - it is set to trigger at 30% based on the UI (the defaults, as I have not changed anything explicitly when deploying). But I can see compaction happening on the data side.

There is a minimum size that you have to hit before compaction kicks in. This is a configuration we don't expose, but it is currently set to 500MB of disk size per index. So until you hit 500MB, we won't kick off compaction. Is this a problem?
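The rule described above can be sketched as a two-part condition: automatic index compaction runs only once both the internal size floor and the UI fragmentation threshold are met. The constant names here are my own; the 500MB floor and 30% default come from this thread and the UI:

```python
# Sketch of the trigger rule as described in the reply above (assumed logic,
# not actual Couchbase source code).

MIN_DISK_SIZE_MB = 500.0         # internal per-index floor, not exposed in the UI
FRAGMENTATION_THRESHOLD = 30.0   # UI auto-compaction setting, percent (default)

def should_compact(disk_size_mb: float, fragmentation_pct: float) -> bool:
    """Compaction is considered only when BOTH conditions hold."""
    return (disk_size_mb >= MIN_DISK_SIZE_MB
            and fragmentation_pct >= FRAGMENTATION_THRESHOLD)

# First example in the thread: 249MB on disk at 95.7% fragmentation.
# Despite the very high fragmentation, the 500MB floor is never reached.
print(should_compact(249.0, 95.7))   # False
```

Under this rule, the later 725MB / 86.3% case would satisfy both conditions, which is why the lack of compaction there looks surprising.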

500MB for the index size in memory, is that right? Because based on the numbers I gave, I have already reached that point for the disk size…

Will have to see in production if this becomes an issue.

To be clear, the kick-off point for index compaction is triggered based on index size on disk. So the 500MB applies to index size on disk.

In that case, based on the data reported in the UI, I should have seen some form of compaction, as the disk size was 725MB.


I am seeing the same behaviour on my Couchbase 4.1 test environment. Some of my indexes are 900MB+.

980M core_coreIndexOnKey
824M core_coreIndexOnLowerUserId
791M core_coreIndexOnRefreshTokenId
881M core_coreIndexOnUserId

Is there any way to trigger a manual compaction? I have already tried reducing the threshold to 5% in the UI, but it does not seem to change anything.