LCB_HTTP_ERROR memory optimization

Hi,

Using Full Text Search, we are currently running into an “LCB_HTTP_ERROR” with code 59.

Searching the logs makes us think it is a memory issue:

2019-03-04T10:06:06.501+01:00 [INFO] rest: error code: 429, msg: rest_index: Query, indexName: ibe, err: query request rejected
2019-03-04T10:06:06.792+01:00 [INFO] app_herder: querying over queryQuota: 21705523200, estimated size: 4320001008672, runningQueryUsed: 4320008565365, memUsed: 4320185675032
2019-03-04T10:06:06.792+01:00 [INFO] rest: error code: 429, msg: rest_index: Query, indexName: ibe, err: query request rejected
2019-03-04T10:06:06.983+01:00 [INFO] bleve: remoteClient: 10.20.27.171:8094 query reject, statusCode: 429, err:

Are there any ways we can optimize memory usage?
Reducing the amount of data delivered in a response, for example. There are multiple fields we don’t actually need, such as the index name, maxScore, etc.

We are currently searching 135 fields in our JSON documents.

Thanks in advance.

It seems like this is a server issue. @keshav_m, could you take a look?

The issue, as you noted, is the huge memory requirement for running the incoming queries. If you can afford more RAM / a larger FTS quota, that is one way to buy some short-term relief.
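Raising the quota can be done through the cluster settings REST endpoint on the admin port. A minimal sketch (the host is taken from your logs; the credentials and quota value are placeholders to adjust for your cluster):

```python
import requests

# A minimal sketch: raise the FTS memory quota via the cluster settings
# REST endpoint on the admin port (8091). Host is taken from the logs
# above; credentials and the quota value are placeholders.
resp = requests.post(
    "http://10.20.27.171:8091/pools/default",
    auth=("Administrator", "password"),  # placeholder admin credentials
    data={"ftsMemoryQuota": 4096},       # new FTS memory quota, in MB
)
resp.raise_for_status()
print("FTS quota updated")
```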

Reducing the amount of data delivered in a response, for example. There are multiple fields we don’t actually need, such as the index name, maxScore, etc.

Currently there is no way to do this. One option is to paginate the results, though higher page offsets could still consume significant memory.
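A rough sketch of what pagination against the FTS REST endpoint could look like (the index name “ibe” and the host are taken from your logs; the query string and credentials are placeholders):

```python
import requests

# A rough sketch of paging through results with "size"/"from" rather
# than fetching everything at once. Index name "ibe" and the host come
# from the logs above; the query string and credentials are placeholders.
page_size = 20
offset = 0
while True:
    body = {
        "query": {"query": "your search terms"},  # placeholder query
        "size": page_size,
        "from": offset,  # note: deep offsets still cost memory server-side
    }
    resp = requests.post(
        "http://10.20.27.171:8094/api/index/ibe/query",
        auth=("Administrator", "password"),
        json=body,
    )
    resp.raise_for_status()
    hits = resp.json().get("hits", [])
    if not hits:
        break  # no more results
    # ... process hits ...
    offset += page_size
```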

We are currently searching 135 fields in our JSON documents.

Does that mean you are trying to retrieve the stored values of 135 fields with the query?
Or do you have 135 conjunct/disjunct conditions in the query?
Or have you indexed 135 fields of the document?

Looking at the high memory estimate for the query (maybe too many concurrent/slow queries?), it’s worth checking whether you can rewrite the same query in a better way.
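For example, if the query is currently a single query-string search fanned out across everything, scoping match queries to just the fields you care about can shrink the work per request. A hypothetical request body (the field names here are made up for illustration):

```python
# A hypothetical request body; field names "title" and "description"
# are made up for illustration. Scoping matches to specific fields
# avoids fanning the search out across all 135 indexed fields.
body = {
    "query": {
        "disjuncts": [  # match either field instead of everything
            {"match": "your search terms", "field": "title"},
            {"match": "your search terms", "field": "description"},
        ]
    },
    "size": 10,  # keep pages small to bound per-query memory
}
```

The same body can be POSTed to the /api/index/{indexName}/query endpoint shown in the pagination sketch above.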

regards,
Sreekanth

Currently there is no way to do this.

Thanks, that is what I wanted to know.

Looking at the high memory estimate for the query (maybe too many concurrent/slow queries?)

We managed to improve the performance, and the queries are no longer running into the issue.

Thank you!