FTS Error(s) context deadline exceeded

We're testing with 20M records using FTS, and sometimes we get

“context deadline exceeded”. Does this mean the partition timeout was exceeded?

That is right. Your system is taking longer than the default of 10s to fetch results from each of the index partitions.
This value is configurable within the query, though. Here’s an example with the timeout set to 20s (the value is in milliseconds) …

  {
    "query": { ... },
    "ctl": {
      "timeout": 20000
    }
  }

Thanks. Another question: is the FTS index in RAM or on disk?

The index is taking 969 GB on disk, but RAM on the FTS node shows only 12 GB out of 96 GB used.


The FTS index resides on disk and is memory-mapped into RAM; FTS also uses RAM for its lifecycle operations.
969 GB for 20M documents looks like unacceptable index-size bloat.

Can you tell us which Couchbase Server version you are using?
What is the indexType property set to? (If it is not scorch, switch to scorch.)
Is there a custom type mapping defined? (If not, update your index definition with one.)

Most likely you need some deeper index-definition tuning here, as your index size is far larger than expected. If you are a licensed user, please go ahead and create a customer support ticket so that we can help you better.
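For illustration, the kind of custom type mapping mentioned above disables the default (index-everything) mapping and indexes only the fields you actually search, which usually shrinks the index dramatically. This is only a sketch: the type name (product) and field name (name) are hypothetical, and doc_config assumes your documents carry a type field.

  "params": {
    "doc_config": {
      "mode": "type_field",
      "type_field": "type"
    },
    "mapping": {
      "default_mapping": {
        "enabled": false
      },
      "types": {
        "product": {
          "enabled": true,
          "dynamic": false,
          "properties": {
            "name": {
              "fields": [
                {
                  "name": "name",
                  "type": "text",
                  "index": true,
                  "store": false,
                  "include_in_all": false
                }
              ]
            }
          }
        }
      }
    }
  }

Setting store and include_in_all to false keeps the indexed content out of the stored sections, which is often the largest contributor to on-disk size.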

We're using 6.5 with 2 FTS servers.
The indexType is scorch and the index has defined mappings.

If you have further mutations/updates happening in the system, you could try updating your index definition with the store options below to see whether the index size changes:

  "store": {
      "indexType": "scorch",
      "scorchMergePlanOptions": {
          "maxSegmentSize": 100000
      }
  }

You can use the “Show curl command to modify this index definition” and “Copy” options on the index listing page to get the current index definition and update it.

The assumption here is that you have the default index partition count of 6.
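For reference, the partition count (and replica count) live under planParams in the index definition. A sketch assuming the 6-partition default mentioned above:

  "planParams": {
      "indexPartitions": 6,
      "numReplicas": 0
  }

More partitions spread indexing and query work across nodes, at the cost of more per-partition overhead and more fan-out per query.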

Thanks, will try it.

Question on HA and Clustering FTS.

We have 3 FTS servers. How is the index deployed in an HA/clustering environment? With N1QL you can specify the servers you want.

The Search Service automatically assigns partitions and their replicas across all Search nodes in the cluster, and it ensures that during rebalance the distribution across all nodes remains balanced (except in failover cases).

ref - https://docs.couchbase.com/server/current/learn/services-and-indexes/services/search-service.html

One needs at least replicaCount + 1 nodes running the Search service in the cluster to meet the minimum HA requirement implied by the replica configuration.
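For example, enabling a single replica via planParams (a sketch) means the cluster must have at least 2 Search nodes, so that a partition and its replica never land on the same node:

  "planParams": {
      "numReplicas": 1
  }

With 3 FTS servers as described above, numReplicas of 1 or 2 is possible; 2 replicas would place a copy of every partition on every node.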