BULK N1QL FETCH via PYTHON

Is it possible to bulk-fetch the results rather than iterate row by row? For example:

The docs [https://docs.couchbase.com/python-sdk/2.0/n1ql-queries.html] show:

```python
for row in bkt.n1ql_query('SELECT * FROM default'):
    print(row)
```

Is there a way to populate a Python list, for example, in one fetch rather than iteratively?

https://docs.couchbase.com/python-sdk/2.5/start-using-sdk.html

@vsr1
Yes, but that's the same row-by-row iterative approach in a loop. I don't see anything that suggests a bulk fetch is possible via N1QL, only via KV (see the sketch below).
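
For reference, this is the KV-style bulk fetch I mean, as a minimal sketch against the 2.x SDK; the connection string, credentials, and document keys are placeholders:

```python
from couchbase.cluster import Cluster, PasswordAuthenticator

# Placeholder connection details; substitute your own cluster and credentials.
cluster = Cluster('couchbase://localhost')
cluster.authenticate(PasswordAuthenticator('user', 'password'))
bkt = cluster.open_bucket('default')

# get_multi() fetches a batch of documents by key in a single call.
results = bkt.get_multi(['doc::1', 'doc::2', 'doc::3'])
for key, res in results.items():
    print(key, res.value)
```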

I believe calling execute() on the returned N1QLRequest object should do what you are asking for (see the sketch below). Practically speaking, this is no more or less efficient than iterating through the request, but it should do what you want.
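
As a rough sketch, assuming `bkt` is an open Bucket as in the earlier snippet; note that passing the request to plain `list()` is another way to materialize every row in one call:

```python
# Drive the request to completion in one call (SDK 2.x).
req = bkt.n1ql_query('SELECT * FROM default')
req.execute()

# Alternatively, collect every row into a Python list at once.
rows = list(bkt.n1ql_query('SELECT * FROM default'))
print(len(rows))
```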

Thanks…
I was going to test it if it existed, but if it's no more efficient… then :slight_smile:

Well, feel free to benchmark, but behind the scenes the HTTP fetching code is the same (it fetches the results in chunks into a list of decoded JSON rows); the iterator just wraps this. You might theoretically see a small burst of speed, though I haven't benchmarked it. Bear in mind that any gain will disappear if 'slurping' all the results into one list, rather than streaming them, starts to create memory pressure on your OS, encouraging it to swap. Generally, most of our APIs are designed to stream large result sets for this reason.

Thanks!
Yes, the idea was to populate, say, 50k small JSON chunks into a dict/list and process them on the Python side, then fetch the next 50k, and so on.

But if it's only a slight increase, we will look at parallelization at the Python layer to chunk it up.
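
If it helps, the 50k-at-a-time pattern can be done over a single streaming query using itertools.islice. This is only a sketch: the batch size is illustrative, `process()` is a hypothetical placeholder for your Python-side work, and `bkt` is assumed to be an open Bucket (SDK 2.x):

```python
from itertools import islice

BATCH_SIZE = 50000  # illustrative; tune to your memory budget

def process(batch):
    # Placeholder for the Python-side processing described above.
    print('processing', len(batch), 'rows')

# The query streams results in chunks; islice() pulls BATCH_SIZE
# rows at a time off the iterator without holding the full result set.
rows = iter(bkt.n1ql_query('SELECT * FROM default'))
while True:
    batch = list(islice(rows, BATCH_SIZE))
    if not batch:
        break
    process(batch)
```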