How to handle Uncaught Couchbase\TimeoutException

I am wondering whether there is any sample or documentation on how to handle connection issues with the PHP SDK 3.2. I have scripts that pull records from remote systems; each batch can contain 10K to 500K records, and each record involves at least one sub-query to check whether it is new or an update. I have been seeing the errors below, which cause my script to halt and fail in the middle of an import.

[cb,EROR] (cccp L:187 I:3731719334) NOHOST:NOPORT (CTX=(nil),) Could not get configuration: LCB_ERR_TIMEOUT (201)
[cb,WARN] (server L:821 I:3731719334) Failing command with error LCB_ERR_TIMEOUT (201): {"i":"d2ae74ebde6d84a6/7719ce3de8a584ab/3cb","l":"","r":"","s":"kv:get_cluster_config","t":2500000}
[cb,WARN] (server L:821 I:1569695126) Failing command with error LCB_ERR_TIMEOUT (201): {"b":"rets","i":"ad48a7665d8fa596/5ce0591e853e23fd/1320c","l":"","r":"","s":"kv:set","t":1500000}
PHP Fatal error: Uncaught Couchbase\TimeoutException: LCB_ERR_TIMEOUT (201) in /var/www/html/couchbase.php:57
Stack trace:
#0 /var/www/html/couchbase.php(57): Couchbase\Collection->upsert()
#1 /var/www/html/retsphp.php(345): Couchbase->upsert()
#2 /var/www/html/retsphp.php(261): dbInsert()
#3 {main}
thrown in /var/www/html/couchbase.php on line 57

Line 57 is the actual upsert of a document:

  public function upsert($id = '', $doc)
  {
      $opts = new UpsertOptions();
      $opts->timeout(1500000 /* milliseconds */);
      return $this->collection->upsert($id, $doc, $opts); // line 57
  }

So the question is: how can I catch these errors and force a retry instead of just failing?

I'm not a PHP programmer, but I just tried to find "any sample". I check the Couchbase Labs repositories when I need to do that.

Here is an example of catching a Couchbase exception so you can do something when any exception occurs:
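A minimal sketch of that idea, assuming PHP SDK 3.x (where the SDK exceptions share the `Couchbase\BaseException` base class; verify the class name against your installed SDK version). `$collection`, `$id`, and `$doc` come from your existing connection code:

```php
<?php
// Catch any Couchbase exception via the common base class (assumed to be
// Couchbase\BaseException in SDK 3.x) rather than letting it become fatal.
try {
    $res = $collection->upsert($id, $doc);
} catch (\Couchbase\BaseException $e) {
    // Decide here: log and skip this record, retry, or re-throw.
    error_log(sprintf("upsert(%s) failed: %s", $id, $e->getMessage()));
}
```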

Here is an example of handling TimeoutException specifically:
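As a sketch of the retry idea: the `withRetry` helper below and its backoff values are illustrative, not part of the SDK; only the `Couchbase\TimeoutException` class name comes from the stack trace above.

```php
<?php
// Generic retry wrapper: re-runs $op when it throws an instance of $retryOn,
// backs off linearly between attempts, and gives up after $maxAttempts tries.
function withRetry(callable $op, string $retryOn, int $maxAttempts = 3, int $backoffUsec = 100000)
{
    for ($attempt = 1; ; $attempt++) {
        try {
            return $op();
        } catch (\Throwable $e) {
            if (!($e instanceof $retryOn) || $attempt >= $maxAttempts) {
                throw $e; // not retryable, or out of attempts
            }
            usleep($backoffUsec * $attempt); // back off before retrying
        }
    }
}

// Usage against the upsert from the question:
// withRetry(fn () => $collection->upsert($id, $doc, $opts),
//           \Couchbase\TimeoutException::class);
```

Capping the attempts matters here: if the cluster is genuinely unreachable, an unbounded retry loop would hang the import instead of failing it.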

I suspected you might expect a more sophisticated sample, but since I could not see any exception handling in your code snippet, I hope this helps you get started. The actual implementation will depend on your requirements.

Hi YoshiyukiKono,

thanks for going through the GitHub repository. For now I was able to isolate when this occurs: it is when my backup software takes a snapshot of the VMs running the Couchbase cluster. Even though I have a 4-node cluster, it seems to fail if the node we try to access is the one taking the snapshot. I wish there were some built-in failover procedure so that, if a member of the cluster connection times out, the call would be redirected / retried against another member of the cluster.
But the PHP SDK seems to be the ugly stepchild here, as it has little traction in this support forum. I only use it because I have third-party code that I have to use to ingest data from an API, and it is written in PHP. Otherwise I use Node.js.

Thanks anyway

Thanks for letting us know the situation.

When using the Couchbase SDK, a client directly accesses the node that holds the active copy of the targeted document, i.e. the active vBucket. This is made possible by the cluster map, which is provided to the client during the bootstrap phase and then updated and redistributed as the topology changes. Until the cluster map updated after a failover has been distributed, access to documents stored on the node that went down will fail. The Couchbase SDK provides a "getFromReplica" method to access a replica vBucket, but that is for read-only access.
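A hedged sketch of that read-only fallback, assuming PHP SDK 3.x, where I believe the replica read is exposed as `getAnyReplica()` ("getFromReplica" was the SDK 2 name) — check the method name for your SDK version before relying on it:

```php
<?php
// Fall back to a replica read when the active node times out.
// Read-only: the replica may be slightly stale, and this does not help writes.
use Couchbase\TimeoutException;

function getWithReplicaFallback($collection, string $id)
{
    try {
        return $collection->get($id);           // read from the active vBucket
    } catch (TimeoutException $e) {
        return $collection->getAnyReplica($id); // read from any replica vBucket
    }
}
```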

In my view, this is a trade-off between convenience and performance.
Couchbase's architecture is designed to achieve sub-millisecond latency while still providing the flexibility to read from a replica whenever needed.
The automatic redirection you describe would require a central controller to keep write access consistent; even for read-only access, some overhead would be inevitable.

Just FYI. Hope this helps.