N1QL select limit?

I’m doing some more N1QL benchmarking. Using couchnode, a simple script that does roughly 7k lookups using get takes about half a second. Doing the same with use keys seems to be overwhelming the query engine. All the queries return null. Is this a known limitation in 4.6?

Subsequent queries return:
{
  "status": "Unexpected server error"
}

though the views / get engine seems to continue to work.
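
For what it’s worth, the get side that works is basically this shape (a minimal sketch, not my exact script; the bucket name and id file here are placeholders):

// Minimal sketch of the key-value side of the test (couchnode 2.x).
// 'master' and 'ids.txt' stand in for the real bucket and id list.
var fs = require('fs');
var couchbase = require('couchbase');

var cluster = new couchbase.Cluster('couchbase://localhost');
var bucket = cluster.openBucket('master');

var keys = fs.readFileSync('ids.txt', 'utf8').trim().split('\n');
var remaining = keys.length;
var start = Date.now();

keys.forEach(function (key) {
  bucket.get(key, function (err, res) {
    if (err) console.error(key, err);
    if (--remaining === 0) {
      console.log(keys.length + ' gets took ' + (Date.now() - start) + ' ms');
      bucket.disconnect();
    }
  });
});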

Hi @james, thanks for the benchmarking effort. Can you provide more details on your setup/query/index etc. so we can dig into this further? With our internal test setup, I saw almost 25K queries/sec. Your mileage may vary depending on hardware, but you may be hitting some other setup/config issue… Did you check the logs? Do basic queries work fine?

For this test, I’m only searching on known keys so the only index is the primary of the bucket.

This is my first time using the node SDK, so I’m still getting the hang of it, but querying works fine for small batches (though it’s slower than using get). The issue arises when I send more than roughly 3–4k requests at “once” (which node tends to do with its async architecture). That’s when I start getting null results from Couchbase. After that, the query engine seems to have crashed and I have to restart the server.

I’m testing this on a laptop with a single node, so we may just be running up against the hardware limitations, but it works fine using get.

As far as I can tell, the logs say nothing about this at all.

Hi @keshav_m, can you help here? The query engine seems to be crashing.

Hi @james, can you share the code, the query, and a sample doc?

thanks.

Hi @james, can you test this using curl or the cbq shell? We want to isolate server issues from client issues.

@james,

What’s your client API?
What’s your N1QL statement?
What’s the size of your document?
Are you using prepared statements? If not, please do.

Thanks for the input…

Here’s the info I have so far:

  • I’m reading a stream from a file and churning through a list of 7k ids to pull the doc for each one.
  • The query is "select * from master use keys '" + key + "';" (built per key; see the sketch after this list).
  • The only index is the primary key.
  • The bucket has roughly 1 million docs in it.
  • The docs are small. A couple of lines of json each.
  • I’m using the node couchbase sdk 2.3.0
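
The N1QL side is the same loop with a query per key instead of a get, roughly (sketch; it reuses the bucket and key list from the get sketch above):

// Rough shape of the N1QL version (couchnode 2.x): one ad hoc statement
// per key, built by string concatenation. bucket and keys are the same
// as in the get sketch above.
var N1qlQuery = require('couchbase').N1qlQuery;

keys.forEach(function (key) {
  var q = N1qlQuery.fromString(
    "select * from master use keys '" + key + "';");
  bucket.query(q, function (err, rows) {
    if (err) return console.error(key, err);
    // rows[0] should be the document; past roughly 3-4k in-flight
    // requests I start getting null back instead.
  });
});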

I’m not currently using a prepared statement (this is one of the things I want to test). Will try that. And I’ll try using curl to see if I get the same results.
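
If I’m reading the SDK docs right, the prepared route is just adhoc(false) plus a placeholder instead of string concatenation, something like this (untested sketch):

// Sketch: a single parameterized statement marked adhoc(false) so the
// client prepares it once and reuses the plan; the key is passed as a
// positional parameter instead of being concatenated in.
var N1qlQuery = require('couchbase').N1qlQuery;

var q = N1qlQuery
  .fromString('SELECT * FROM master USE KEYS $1')
  .adhoc(false);

keys.forEach(function (key) {
  bucket.query(q, [key], function (err, rows) {
    if (err) return console.error(key, err);
  });
});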

Thanks again,

jk

I ran the same script using http on /query/service with the same select statement (still no prepared statement) and got the same result. Crashed the query service. Will look into prepared statements now.
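
For reference, the REST version of the loop was roughly this shape (sketch; same 7k key list, statement still built by concatenation):

// Sketch: POSTing each statement straight to the query service REST
// endpoint on port 8093 instead of going through the SDK.
var http = require('http');

keys.forEach(function (key) {
  var body = JSON.stringify({
    statement: "select * from master use keys '" + key + "';"
  });

  var req = http.request({
    host: 'localhost',
    port: 8093,
    path: '/query/service',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body)
    }
  }, function (res) {
    var chunks = [];
    res.on('data', function (c) { chunks.push(c); });
    res.on('end', function () {
      var reply = JSON.parse(Buffer.concat(chunks).toString());
      if (reply.status !== 'success') console.error(key, reply);
      // otherwise reply.results[0] holds the document
    });
  });

  req.on('error', function (err) { console.error(key, err); });
  req.write(body);
  req.end();
});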

Could be running out of memory. Please keep us posted. If you can send a log, that would be great. We will also investigate here. Thanks.

Hi @james,

As @geraldss suggested you might be running out of memory due to https://issues.couchbase.com/browse/MB-21332. This has been fixed in 4.6.0.

In the current version, try passing USE KEYS as a query parameter, as follows, to avoid this:
curl -v http://localhost:8093/query/service -d 'statement=SELECT * FROM `beer-sample` USE KEYS $keys&$keys=["21st_amendment_brewery_cafe", "21st_amendment_brewery_cafe-21a_ipa", "21st_amendment_brewery_cafe-563_stout", "21st_amendment_brewery_cafe-amendment_pale_ale"]'
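
I am not a Node SDK expert, but with couchnode the equivalent should be to send the key array as the named parameter $keys rather than building it into the statement text, something like this (sketch):

// Sketch: USE KEYS supplied via the named parameter $keys, so the key
// array travels as a parameter instead of being parsed out of the statement.
var couchbase = require('couchbase');
var N1qlQuery = couchbase.N1qlQuery;

var cluster = new couchbase.Cluster('couchbase://localhost');
var bucket = cluster.openBucket('beer-sample');

var q = N1qlQuery.fromString('SELECT * FROM `beer-sample` USE KEYS $keys');

bucket.query(q, { keys: [
  '21st_amendment_brewery_cafe',
  '21st_amendment_brewery_cafe-21a_ipa',
  '21st_amendment_brewery_cafe-563_stout',
  '21st_amendment_brewery_cafe-amendment_pale_ale'
] }, function (err, rows) {
  if (err) return console.error(err);
  console.log(rows);
});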

@james - When you start getting the “Unexpected server error”, take a look at the query.log file in the logs directory. That error indicates some kind of failure in cbq-engine, and the query.log file should provide some indication for exactly what the error is.

Hi @eben,

No error shows up in the log. When I tail the log as I hit the server with the queries, it outputs nothing. About a minute later I get this:

2017-01-06T10:07:21.909-08:00 [Info] connected with 1 indexers
2017-01-06T10:07:21.909-08:00 [Info] index 888953509990670674 has 1 replicas
2017-01-06T10:07:21.909-08:00 [Info] index 4587708231930850925 has 1 replicas
2017-01-06T10:07:21.909-08:00 [Info] index 3690421396432686491 has 1 replicas
2017-01-06T10:07:21.909-08:00 [Info] index 16058547888580282740 has 1 replicas
2017-01-06T10:07:21.909-08:00 [Info] index 13657486897771911805 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 5537269090228572604 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 11935510860176870538 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 2788569104019062836 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 5403007295249949810 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 827455538094078486 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 404323776046443831 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 5680121944644456978 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 15284640728131106329 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 1817303213165886115 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 17714864654974316501 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 16405832858653931768 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 5259384830661023708 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] index 1413667164524792368 has 1 replicas
2017-01-06T10:07:21.910-08:00 [Info] client load stats {}

And this keeps repeating every couple of minutes (at this point the query engine is no longer responding).

@vsr1

This is 4.6.0 Dev Preview (community edition).

  • When I do an explain, maxParallelism is set to 1.
  • When I pass "max_parallelism=8" (or 4 or 0), I still get the same result.
  • When I try with $key&$key=…, I also get the same result.
  • When I try with prepared statements (or adhoc(false)), again the same result.

@james,

The fix should already be in the 4.6.0 Dev Preview.
Can you post the EXPLAIN output?

I have posted a sample EXPLAIN from a 4.6.0 Dev Preview build below.
There is no maxParallelism set in the plan when USE KEYS is an array, a positional query parameter, or a named parameter; in that case the max_parallelism setting supplied at execution time is used.
In all other cases the plan forces the value to 1 (and the same value is used at execution time).

 explain SELECT * FROM `beer-sample` USE KEYS ["21st_amendment_brewery_cafe", "21st_amendment_brewery_cafe-21a_ipa", "21st_amendment_brewery_cafe-563_stout", "21st_amendment_brewery_cafe-amendment_pale_ale"];
    {
    "requestID": "23cd61bc-900c-4175-8ba6-af601b682563",
    "signature": "json",
    "results": [
        {
            "plan": {
                "#operator": "Sequence",
                "~children": [
                    {
                        "#operator": "KeyScan",
                        "keys": "[\"21st_amendment_brewery_cafe\", \"21st_amendment_brewery_cafe-21a_ipa\", \"21st_amendment_brewery_cafe-563_stout\", \"21st_amendment_brewery_cafe-amendment_pale_ale\"]"
                    },
                    {
                        "#operator": "Fetch",
                        "keyspace": "beer-sample",
                        "namespace": "default"
                    },
                    {
                        "#operator": "Parallel",
                        "~child": {
                            "#operator": "Sequence",
                            "~children": [
                                {
                                    "#operator": "InitialProject",
                                    "result_terms": [
                                        {
                                            "expr": "self",
                                            "star": true
                                        }
                                    ]
                                },
                                {
                                    "#operator": "FinalProject"
                                }
                            ]
                        }
                    }
                ]
            },
            "text": "SELECT * FROM `beer-sample` USE KEYS [\"21st_amendment_brewery_cafe\", \"21st_amendment_brewery_cafe-21a_ipa\", \"21st_amendment_brewery_cafe-563_stout\", \"21st_amendment_brewery_cafe-amendment_pale_ale\"];"
        }
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "2.367343ms",
        "executionTime": "2.32614ms",
        "resultCount": 1,
        "resultSize": 1696
    }
    }

Here’s the explain:

[
{
    "plan": {
        "#operator": "Sequence",
        "~children": [
            {
                "#operator": "KeyScan",
                "keys": "$key"
            },
            {
                "#operator": "Fetch",
                "keyspace": "master",
                "namespace": "default"
            },
            {
                "#operator": "Parallel",
                "~child": {
                    "#operator": "Sequence",
                    "~children": [
                        {
                            "#operator": "InitialProject",
                            "result_terms": [
                                {
                                    "expr": "self",
                                    "star": true
                                }
                            ]
                        },
                        {
                            "#operator": "FinalProject"
                        }
                    ]
                }
            }
        ]
    },
    "text": "select * from master use keys $key"
}

]

Plan looks good. There is no maxParallelism.

Can you run your query in the cbq shell and see if you still have the issue?

There’s never a problem running the query once (or up to about 3–4k times). It’s when the requests pile up that the query engine crashes (whereas .get() and the views engine have no problem with this at all). I tried running my script with cbq, but node itself crashed at about 500 requests.

In that case it is running out of memory. The query engine needs to fetch the data, process it, and send it to the client, whereas .get() is a direct fetch of one document at a time.

Turn off pretty printing and change the query as follows, and see if that helps.

curl -u Administrator:passwd http://localhost:8093/admin/settings -XPOST -d '{"pretty":false}'

select m from master m use keys $key
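
In the Node SDK, the revised query would look something like this (sketch only, I am not a Node expert; the pretty setting is the server-side admin setting from the curl above):

// Sketch: project the keyspace alias m instead of *, pass the key as the
// named parameter $key, and keep adhoc(false) so the plan is reused.
// bucket and someKey come from the existing script.
var N1qlQuery = require('couchbase').N1qlQuery;

var q = N1qlQuery
  .fromString('SELECT m FROM master m USE KEYS $key')
  .adhoc(false);

bucket.query(q, { key: someKey }, function (err, rows) {
  if (err) return console.error(err);
  // each row has the form { m: <document> }
});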