Error: An unknown N1QL error occured. This is usually related to an out-of-memory condition

I am getting this error from Couchbase Server. How would I investigate the issue? What should I look for in the logs, and which metrics should I check to verify a memory problem?


The errCode is 21, coming from node_modules/couchbase/lib/bucket.js:680:15. How do I debug this? Please help.

Queries via direct HTTP requests (http://*:8093/query?statement) work without an issue.

Does anyone have an answer for this? I'm still facing the same problem.

Hey @gcosta1992,
Could you provide some example code of the query you are running and how you are running it?
Cheers, Brett

I am facing the same issue; how did you resolve it?

Hey @ewandrego,

Can you provide some more details on the environment and application that you are using this for?

Cheers, Brett

I'm running it through Node.js; there are more than 40k records.

node --max_old_space_size=4096 flight.js
events.js:154
throw er; // Unhandled 'error' event
^

Error: An unknown N1QL error occured. This is usually related to an out-of-memory condition.
at /Users/ewan/node_modules/couchbase/lib/bucket.js:680:15

Hey @ewandrego,

Can you post the section of code responsible for querying and handling the results? It seems likely that you have a large number of rows and are processing the rows as a single callback, which means the entire result set must be cached in memory (which is failing). Try using the ‘row’ event rather than the callback.

Cheers, Brett

Hey @brett19

Code for your reference:

var couchbase = require('couchbase');
var json2csv = require('json2csv');
var fs = require('fs');

var cluster = new couchbase.Cluster('couchbase://XXXXXXXXX/');
var bucket = cluster.openBucket('data');
var N1qlQuery = couchbase.N1qlQuery;

var wstream = fs.createWriteStream('xxxxxx.csv');
var FlightFields = ['Flight', 'Class'];

var req = bucket.query(
    N1qlQuery.fromString('SELECT * FROM data WHERE Type = $1 LIMIT 10'),
    ['Flight']);

req.on('row', function (row) {
    console.log(row.data.Type);
    //console.log(row);
    wstream.write(json2csv({ data: row.data, fields: FlightFields, hasCSVColumnTitle: false }) + '\n');
});

req.on('end', function (meta) {
    console.log('Query Result Flight Info');
    wstream.end(function () { console.log('done'); });
});

The above code works fine on the staging environment, where the bucket holds around 500,000 documents.

Regards,
Ewan

Hey @ewandrego,

It’s crashing with LIMIT 10 in the query?

Cheers, Brett

Hey @brett19, Yes it does with LIMIT 10.

it works fine through Query Workbench…

Regards,
Ewan

Hey @brett19, the issue still persists in Node.js…

I’m dealing with the same issue. Did anyone solve it?

FYI, we had a similar issue where we were looping over a few thousand rows to update a load of records.
We resolved it by breaking the updates into pages.
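A minimal sketch of that paging idea (hypothetical helper; `runQuery` is assumed to be a promise-returning query function, and `cb` is the bucket name used elsewhere in this thread):

```javascript
// Hypothetical sketch: page through a large result set with
// LIMIT/OFFSET so only one page of rows is in memory at a time.
async function forEachPage(runQuery, pageSize, handlePage) {
  for (let offset = 0; ; offset += pageSize) {
    const rows = await runQuery(
      `SELECT * FROM cb LIMIT ${pageSize} OFFSET ${offset}`);
    if (rows.length === 0) break;       // past the last page
    await handlePage(rows);             // process one page fully
    if (rows.length < pageSize) break;  // short page = last page
  }
}
```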

I thought we’d resolved this issue. It appears that it has reared it’s ugly head again. We have implemented paging in our updates / migrations, but the system falls over no matter how we run the queries - sync or async.
My guess is a memory leak in the SDK. I’ve upgraded to the latest - 2.4.3 and the error is still occurring. It has improved slightly in that I can run more queries before it fails, but not dramatically, e.g. 400 instead of 200 out of 4000+.

Current code is below (commented lines show different async approaches):

const N1qlQuery = couchbase.N1qlQuery;

const asyncBucketQuery = async (query, values = [], _bucket = bucket) =>
    new Promise((resolve, reject) => {
        _bucket.query(query, values, (err, result) => {
            if (err) reject(err);
            else resolve(result);
        });
    });

export default async () => {
    const countQuery = 'select COUNT(*) as total from cb where _type!="" and updatedAt is missing';
    const countResult = await asyncBucketQuery(N1qlQuery.fromString(countQuery), []);
    const pageSize = 200;
    const pages = Math.ceil(countResult[0].total / pageSize);
    logger.debug(`Need to migrate ${countResult[0].total} records to add createdAt / updatedAt`);

    for (let page = 0; page < pages; page += 1) {
        const skip = pageSize * page;
        const getQuery = `select *, _type, meta().cas/1000000000 as cas from cb where _type!="" and updatedAt is missing LIMIT ${pageSize} OFFSET ${skip}`;
        const rows = await asyncBucketQuery(N1qlQuery.fromString(getQuery), []);

        logger.debug(`page ${page}, skip ${skip}, results = ${rows.length}, total ${countResult[0].total}`);
        const promises = [];

        rows.forEach(async (row) => {
            const UTCCreatedAtTimestamp = Math.ceil(row.cas);
            const UTCUpdatedAtTimestamp = Math.ceil(new Date().getTime() / 1000);
            const { _type: type, cb: { _id: id } } = row;
            const updateQuery = `UPDATE cb SET createdAt=${UTCCreatedAtTimestamp}, updatedAt=${UTCUpdatedAtTimestamp} WHERE updatedAt is missing AND _id='${id}'`;
            // logger.debug(`Migrating ${type}.${id}: adding createdAt:${UTCCreatedAtTimestamp} AND updatedAt:${UTCUpdatedAtTimestamp}`);
            // promises.push(asyncBucketQuery(N1qlQuery.fromString(updateQuery), []));
            await asyncBucketQuery(N1qlQuery.fromString(updateQuery), [])
        });
        // await Promise.all(promises);
    }
    return Promise.resolve();// Promise.all(promises);
};
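As a side note, the interpolated UPDATE statement above could use positional N1QL parameters instead, so the values travel separately from the statement text. A sketch (`buildUpdate` is a hypothetical helper; it would be used with the `asyncBucketQuery(query, values)` wrapper from the post above):

```javascript
// Hypothetical sketch: build the same UPDATE with positional N1QL
// parameters ($1..$3) rather than string interpolation, so an id
// containing a quote cannot break the statement.
function buildUpdate(createdAt, updatedAt, id) {
  return {
    statement:
      'UPDATE cb SET createdAt=$1, updatedAt=$2 ' +
      'WHERE updatedAt IS MISSING AND _id=$3',
    values: [createdAt, updatedAt, id],
  };
}
// Usage with the helper from the earlier post:
//   const { statement, values } = buildUpdate(cas, now, id);
//   await asyncBucketQuery(N1qlQuery.fromString(statement), values);
```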

We have similar issue reported. We are on [libcouchbase/2.7.5-njs, couchnode/2.3.7]
The N1QL query is an update query.

Final working workaround for this problem is below. There is an issue with using async in a forEach loop - it shouldn’t be done - see reasons / solutions here:
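The forEach pitfall is easy to demonstrate in isolation (plain Node.js, no Couchbase; `fakeUpdate` is a hypothetical stand-in for a query): forEach ignores the promises its async callback returns, so every iteration fires at once, while for...of genuinely waits between iterations.

```javascript
// Demonstration: async callbacks inside forEach are not awaited.
const order = [];
const fakeUpdate = (id) =>
  new Promise((resolve) => setTimeout(() => {
    order.push(id);
    resolve(id);
  }, 5));

function withForEach(ids) {
  ids.forEach(async (id) => {
    await fakeUpdate(id); // awaited inside the callback only;
  });                     // forEach itself returns immediately,
                          // so all updates are now in flight at once
}

async function withForOf(ids) {
  for (const id of ids) {
    await fakeUpdate(id); // next iteration starts only after this one
  }
}
```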


The following code is what is working, asyncBucketQuery is as previous comment:

export default async () => {
    const countQuery = 'select COUNT(*) as total from cb where _type!="" and updatedAt is missing';
    const countResult = await asyncBucketQuery(N1qlQuery.fromString(countQuery), []);
    const pageSize = 400;
    const pages = Math.ceil(countResult[0].total / pageSize);
    logger.debug(`Need to migrate ${countResult[0].total} records to add createdAt / updatedAt...`);

    let remain = countResult[0].total;
    for (let page = 0; page < pages; page += 1) {
        // if we await each record we don't need an offset because we modify whole page at a time rather than async!
        const getQuery = `select *, _type, meta().cas/1000000000 as cas from cb where _type!="" and updatedAt is missing LIMIT ${pageSize}`;
        const rows = await asyncBucketQuery(N1qlQuery.fromString(getQuery), []);

        logger.debug(`page ${page}: updating ${rows.length} rows, (remaining ${remain} of total ${countResult[0].total})`);

        // don't use forEach - messes with async
        for (const row of rows) {
            const UTCCreatedAtTimestamp = Math.ceil(row.cas);
            const UTCUpdatedAtTimestamp = Math.ceil(new Date().getTime() / 1000);
            const { cb: { _id: id }/* , _type: type */ } = row;
            const updateQuery = `UPDATE cb SET createdAt=${UTCCreatedAtTimestamp}, updatedAt=${UTCUpdatedAtTimestamp} WHERE updatedAt is missing AND _id='${id}'`;
            await asyncBucketQuery(N1qlQuery.fromString(updateQuery), []);
        }
        remain -= rows.length;

    }
    return Promise.resolve();
};

Same here on 2.4.3 with page size = 200: it works in the admin console, but the SDK throws 'An unknown N1QL error occured. This is usually related to an out-of-memory condition' no matter what.

@brett19, any hint? Is it a known bug? Any workaround?

Please help

Update: it happens only when enforcing consistency levels using REQUEST_PLUS or STATEMENT_PLUS.