Eventing function runs quite late after the document expires

Hello

The Eventing function is triggered too late. A case like this: at 13:00 a hundred thousand documents expire, but the Eventing function's OnDelete handler only starts firing at 13:30. Why does such a delay occur? It is much longer than expected.

The Eventing function code looks like this:

function OnDelete(meta) {
    try {
        // sample start id = "InventoryStart:" + listingId
        // sample end id = "InventoryEnd:" + listingId
        log("meta", meta);

        const documentId = meta.id;
        log("documentId", documentId);
        const splitDocumentId = documentId.split(":");

        // Need both the prefix and the listingId parts.
        if (splitDocumentId.length > 1 && (splitDocumentId[0] === "InventoryStart" || splitDocumentId[0] === "InventoryEnd")) {
            let type = splitDocumentId[0];
            let listingId = splitDocumentId[1];
            log("listingId, type", listingId, type);

            let request = {
                path: "/api/" + listingId + "/call",
                headers: {
                    "X-AgentName": "couchbase-eventing-functions"
                }
            };

            // apiUrl is a URL alias binding configured on the Eventing function
            let response = curl("GET", apiUrl, request);
            if (response.status === 200) {
                log("request is successfully processed:", request);
            }
        }
    } catch (e) {
        log("OnDelete Exception:", e);
    }
}

This line of the code only starts to appear in the application log window at 13:30:

log("meta", meta)

Rohat, the key point is that the documents you are asking about were *expired* (a true delete would have been a mutation and fed immediately to Eventing). Think of an “expired document” as something that is unchanged; it merely has a TTL past the current wall clock, e.g. a unix epoch timestamp. There is no mutation until one of three things happens: (1) direct access, (2) the “tombstone scanner”, or (3) compaction.

Case 1: When you directly access an expired document it will then be deleted, but if you don’t access an expired document you have to wait for the “tombstone scanner” or the compaction process to actually delete it and reclaim space.
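The lazy-expiry behavior above can be illustrated with a toy model (this is an assumption-level sketch of the idea, not Couchbase internals): a document carries a TTL as a unix epoch, and nothing happens at that instant; only an access (Case 1) or a background sweep (Cases 2/3) turns the expired-but-still-present document into an actual deletion event.

```javascript
// Toy bucket: expiry is a unix epoch in seconds (0 = no TTL).
// `deletions` stands in for the feed Eventing's OnDelete would see.
class ToyBucket {
    constructor() {
        this.docs = new Map(); // id -> { value, expiry }
        this.deletions = [];   // ids removed, in order
    }

    upsert(id, value, ttlSeconds) {
        const expiry = ttlSeconds ? Math.floor(Date.now() / 1000) + ttlSeconds : 0;
        this.docs.set(id, { value, expiry });
    }

    // Case 1: direct access of an expired document deletes it on the spot.
    get(id, nowEpoch = Math.floor(Date.now() / 1000)) {
        const doc = this.docs.get(id);
        if (!doc) return null;
        if (doc.expiry !== 0 && doc.expiry <= nowEpoch) {
            this.docs.delete(id);
            this.deletions.push(id); // only NOW does a delete event exist
            return null;
        }
        return doc.value;
    }

    // Cases 2/3: a periodic sweep (scanner / compaction) finds the rest.
    sweep(nowEpoch = Math.floor(Date.now() / 1000)) {
        for (const [id, doc] of this.docs) {
            if (doc.expiry !== 0 && doc.expiry <= nowEpoch) {
                this.docs.delete(id);
                this.deletions.push(id);
            }
        }
    }
}

// A document whose TTL is already past produces no deletion until touched.
const bucket = new ToyBucket();
bucket.upsert("InventoryStart:42", { qty: 1 }, 0);
bucket.docs.get("InventoryStart:42").expiry = 100; // force an already-past expiry
console.log(bucket.deletions.length);              // 0: expired, but nothing fired yet
bucket.get("InventoryStart:42", 200);              // direct access triggers the delete
console.log(bucket.deletions);                     // [ 'InventoryStart:42' ]
```

This is exactly why 100,000 documents expiring at 13:00 can produce no OnDelete calls until 13:30: the deletion event does not exist until something visits the document.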

Case 2: For each bucket there is a scanner that finds expired tombstones, controlled by an option under:

  • Buckets/Select a Bucket/Edit
    “Advanced Bucket Settings”
  • “Override the default auto-compaction settings?”
  • Metadata Purge Interval - this dial is at the bottom

(i) from the help
Sets the time period during which metadata on deleted items may remain on disk. The acceptable range of values is 0.04 – 60, where 0.04 = 1 hour and 1 = 1 day. Setting this value too high may significantly delay reclaiming of disk space; setting it too low may cause data inconsistencies in XDCR.

https://docs.couchbase.com/server/current/learn/buckets-memory-and-storage/expiration.html#post-expiration-purging
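Since the Metadata Purge Interval is expressed in days (0.04 ≈ 1 hour, 1 = 1 day), a tiny helper makes it easy to sanity-check what a given setting means in hours:

```javascript
// Convert a Metadata Purge Interval setting (in days) to hours.
// The UI accepts values between 0.04 and 60 days.
function purgeIntervalToHours(days) {
    if (days < 0.04 || days > 60) {
        throw new RangeError("Metadata Purge Interval must be between 0.04 and 60 days");
    }
    return days * 24;
}

console.log(purgeIntervalToHours(0.04)); // roughly 0.96, i.e. about an hour
console.log(purgeIntervalToHours(1));    // 24
```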

Case 3: Compaction is typically triggered by a fragmentation level, but can also be set to a size threshold in MB. This is controlled by options under:

  • Buckets/Select a Bucket/Edit
    “Advanced Bucket Settings”
  • “Override the default auto-compaction settings?”
  • there are several controls to choose from.

Compaction https://docs.couchbase.com/server/5.5/settings/configure-compact-settings.html#accessing-the-auto-compaction-settings
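To see how a fragmentation-based trigger works, here is a rough model (the formula and names are illustrative, not the server's exact bookkeeping): fragmentation is the share of the on-disk file occupied by stale, overwritten, or deleted data, and when it crosses the configured threshold auto-compaction rewrites the file, which also purges eligible tombstones.

```javascript
// Rough model of the fragmentation percentage that drives auto-compaction:
// the fraction of the on-disk file that is no longer live data.
function fragmentationPercent(diskSizeBytes, dataSizeBytes) {
    if (diskSizeBytes <= 0) return 0;
    return ((diskSizeBytes - dataSizeBytes) / diskSizeBytes) * 100;
}

function shouldCompact(diskSizeBytes, dataSizeBytes, thresholdPercent) {
    return fragmentationPercent(diskSizeBytes, dataSizeBytes) >= thresholdPercent;
}

// A 100 MB file holding only 60 MB of live data is 40% fragmented,
// so it crosses a 30% threshold and compaction would kick in.
console.log(fragmentationPercent(100e6, 60e6)); // 40
console.log(shouldCompact(100e6, 60e6, 30));    // true
```

An absolute MB threshold (as used in the follow-up below) simply swaps the percentage comparison for a comparison of `diskSizeBytes - dataSizeBytes` against a fixed byte count.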

Thanks @jon.strabala, case 3 worked for me and I set the compaction threshold to 10 MB. However, I am afraid that this configuration will cause performance issues on the server. Can it create any problems?

@Rohat_Sahin I’m no expert on the bucket parameters, however if your dataset isn’t very massive … back to “Case 1” … what if you ran a (non-covered index) query in N1QL every 5 minutes (or whatever interval you want) to force the deletion of the tombstones?

SELECT meta(t).id FROM `travel-sample` AS t WHERE meta(t).expiration > 0 AND never_matched IS NOT MISSING

You could read https://blog.couchbase.com/how-to-manage-ttl-with-couchbase-n1ql/ for more information about working with TTL from N1QL
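A sketch of wiring that periodic scan up from a client (the bucket name, credentials, and the 5-minute period are assumptions; the commented-out part shows roughly what the Couchbase Node.js SDK call would look like, not a tested deployment):

```javascript
// Build the non-covered scan statement. The predicate on a field that is
// never indexed (never_matched) keeps the query from being covered by an
// index, so the data service must fetch each candidate document — and
// fetching an expired document is exactly what deletes it (Case 1).
const BUCKET_NAME = "travel-sample";        // assumption: your bucket name here
const SCAN_PERIOD_MS = 5 * 60 * 1000;       // every 5 minutes

function buildExpiryScanQuery(bucketName) {
    return "SELECT META(t).id FROM `" + bucketName + "` AS t " +
           "WHERE META(t).expiration > 0 AND never_matched IS NOT MISSING";
}

// Running it on a timer with the Node.js SDK would look roughly like:
//
//   const couchbase = require("couchbase");
//   const cluster = await couchbase.connect("couchbase://localhost",
//       { username: "user", password: "pass" });
//   setInterval(() => {
//       cluster.query(buildExpiryScanQuery(BUCKET_NAME)).catch(console.error);
//   }, SCAN_PERIOD_MS);

console.log(buildExpiryScanQuery(BUCKET_NAME));
```

Note the backticks around the bucket name: hyphenated keyspaces like `travel-sample` must be quoted in N1QL.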