Resolved: see the update at the end.
I have an issue similar to some issues reported in the past, but the root cause does not seem to be the same.
After a first successful get-with-lock (30-second lock time), an immediately following concurrent get-with-lock fails with “Temporary failure received from server. Try again later”.
I do expect the second get to fail, but not with that particular error.
My buckets are each allocated ample memory (2 GB), and I am only manipulating a few documents, so it is an utterly simple configuration (lots of views and indexes, though).
I occasionally get an “Internal server error” in the Console UI, and I found an erl_crash.dump file in the /Users/me/Library/Application Support/Couchbase directory. I am not sure whether either of these is related, although it looks concerning; it might date from before I increased the buckets’ memory allocation.
I have a fresh install of Couchbase Server 5.5.2 on macOS Mojave 10.14 (40 GB RAM), with Erlang 21.1.1 and libcouchbase 2.10.0 (if they matter).
Any idea what’s going on?
Thanks a lot!
Update: I read error.h in libcouchbase, and found the answer. The header contains:
/**
 * This error is received in callbacks from the server itself to indicate
 * that it could not perform the requested operation. This is usually due to
 * memory and/or resource constraints on the server. This error may also be
 * returned if a key has been locked (see lcb_get()) and an operation has been
 * performed on it without unlocking the item (see lcb_unlock(), or pass the
 * correct CAS value to a mutation function).
 */
X(LCB_ETMPFAIL, 0x0B, LCB_ERRTYPE_TRANSIENT|LCB_ERRTYPE_SRVLOAD|LCB_ERRTYPE_SRVGEN,
  "Temporary failure received from server. Try again later")
So it was just the string returned by lcb_strerror() that misled me. The comment in error.h could also note: “This error may also be returned if a key has been locked and a subsequent concurrent lock attempt was unsuccessful”. The server is not under resource pressure in that case; it simply cannot lock the item.
In the hope that this report helps other users…