This means we get keys in bulk, but we are still deleting the entities one-by-one in a loop. If I have 10000+ entities, this for-loop will bombard Couchbase Server with that many parallel requests, instead of sending all the keys in one list and letting Couchbase delete them internally. Won't all of these requests affect other requests being fired by the application for other use cases? Can't I DELETE using views?
You have a valid concern; however, with Couchbase Server operating as efficiently as it does and Node.js being asynchronous, this isn't really an issue.
Every delete you issue in Node.js is non-blocking, so the application layer won't lock up. When a delete request hits Couchbase Server, the document is marked for deletion and is actually removed later, when compaction runs.
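If you still want to avoid firing 10000+ deletes at once, a common pattern is to cap how many removals are in flight at a time. The thread's application is Node.js, but here is a minimal sketch of the same throttling pattern in Python (the language of the snippet later in this thread), using asyncio with a stand-in `remove()` coroutine — in real code that call would be your Couchbase SDK's remove operation, not this stub:

```python
import asyncio

deleted = []

async def remove(key):
    # Stand-in for the SDK's per-key remove call (assumed, not the real API).
    await asyncio.sleep(0)
    deleted.append(key)

async def delete_keys(keys, max_in_flight=100):
    # The semaphore caps concurrent deletes so other requests still get served.
    sem = asyncio.Semaphore(max_in_flight)

    async def delete_one(key):
        async with sem:
            await remove(key)

    await asyncio.gather(*(delete_one(k) for k in keys))

keys = ["layer|%d" % i for i in range(10000)]
asyncio.run(delete_keys(keys))
print(len(deleted))  # 10000
```

The same idea in Node.js would use a promise-pool style helper instead of a semaphore, but the effect is identical: the whole key list is processed, with only a bounded number of requests outstanding at any moment.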
You are not able to delete data via a view directly.
I wouldn't be too worried about this, but let me know if you have further questions.
I believe a delete on the server should be a single operation on the server, rather than requiring three steps: get the data from the server, iterate over it on the client side, and finally fire a delete back to the server for each record.
In this regard, I think the old-fashioned RDBMS approach was better: all you need to do is `DELETE FROM table WHERE something = something`.
Query to be used: DELETE FROM `<bucket_name>` b WHERE META(b).id LIKE "%"
layer_name_prefix = cb_layer_key + "|" + "%"
try:
    # `bucket` is an already-opened Bucket handle from the Python SDK
    query = N1QLQuery('DELETE FROM `test-feature` b WHERE META(b).id LIKE $1', layer_name_prefix)
    bucket.n1ql_query(query).execute()
except CouchbaseError as e:
    # handle/log the failure as appropriate
    raise
To achieve the same thing, an alternate query could be the one below, if you are storing a 'type' and/or other metadata such as 'parent_id'.
DELETE FROM <bucket_name> WHERE type='Feature' AND parent_id=8;
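One caveat worth adding (my note, not from the thread): N1QL can only run either of these DELETE statements if the bucket has an index to scan — at minimum a primary index. A sketch, using the same `<bucket_name>` placeholder as above:

```sql
CREATE PRIMARY INDEX ON `<bucket_name>`;
```

For the type/parent_id variant, a secondary index on those fields (for example `CREATE INDEX idx_feature ON \`<bucket_name>\`(type, parent_id);`) would let the query avoid a full primary scan.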
But I prefer the first version of the query, as it operates on the document key, and I believe Couchbase must have internal structures that make operating/querying on keys (and other metadata) faster.