Python program showing timeout error

This is my Python program:

import time
from couchbase.cluster import Cluster
from couchbase.cluster import PasswordAuthenticator


def writehandler(event, context):
    cluster = Cluster('couchbase://h1,h2')
    authenticator = PasswordAuthenticator('abc', 'abc')
    cluster.authenticate(authenticator)
    cb = cluster.open_bucket('firstbucket')
    a = 1
    start_time = time.time()
    list1 = {}
    while a <= 50000:
        list1['doc_id_' + str(a)] = {'airlines': a}
        a = a + 1
    # print(list1)
    cb.upsert_multi(list1)
    print("Write time = %s seconds " % (time.time() - start_time))


def readhandler(event, context):
    cluster = Cluster('couchbase://h1,h2')
    authenticator = PasswordAuthenticator('abc', 'abc')
    cluster.authenticate(authenticator)
    cb = cluster.open_bucket('firstbucket')
    a = 1
    start_time = time.time()
    list1 = []
    while a <= 50000:
        list1.append('doc_id_' + str(a))
        a = a + 1
    cb.get_multi(list1)
    print("Read time = %s seconds " % (time.time() - start_time))


if __name__ == '__main__':
    writehandler(None, None)
    readhandler(None, None)

It is showing the following error:

Traceback (most recent call last):
  File "multiupsert_couchbase.py", line 38, in <module>
    writehandler(None, None)
  File "multiupsert_couchbase.py", line 18, in writehandler
    cb.upsert_multi(list1)
  File "/usr/local/lib/python3.5/dist-packages/couchbase/bucket.py", line 1143, in upsert_multi
    replicate_to=replicate_to)
couchbase.exceptions._TimeoutError_0x17 (generated, catch TimeoutError): <Key='doc_id_29901', RC=0x17[Client-Side timeout exceeded for operation. Inspect network conditions or increase the timeout], Operational Error, Results=50000, C Source=(src/multiresult.c,316)>

The main error is a timeout error.

Hello @ImPurshu
The reason a timeout is being generated is that the program is trying to write 50,000 documents at once, while the Python SDK has a default operation timeout of 2.5 seconds.

There are a few options here:

  • Increase the timeout
  • Reduce the batch size of the multi operations.

The batching-operations chapter has some good guidelines around batch size and has sample code in Python on how to handle errors with large batch sizes.
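To illustrate the second option, here is a minimal sketch of splitting the 50,000 documents into smaller batches so each multi operation stays within the timeout. The `chunked` helper is my own, not part of the SDK, and the `cb.upsert_multi` call is shown in a comment since it needs a live cluster:

```python
def chunked(mapping, size):
    """Yield successive sub-dicts of at most `size` items (hypothetical helper)."""
    items = list(mapping.items())
    for i in range(0, len(items), size):
        yield dict(items[i:i + size])

# In writehandler you would then upsert one batch at a time, e.g.:
#   for batch in chunked(list1, 1000):
#       cb.upsert_multi(batch)

# Quick demonstration of the chunking itself, without a cluster:
docs = {'doc_id_' + str(a): {'airlines': a} for a in range(1, 10001)}
batches = list(chunked(docs, 1000))
print(len(batches))  # 10 batches of 1000 documents each
```

Smaller batches also make error handling easier, since a failed batch can be retried on its own instead of re-sending all 50,000 documents.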


Looking at the program, it appears to be benchmarking the system, as it's timing how long the operations take. If that is the case, then an async approach would give better results; this is also covered in the batching-operations chapter.


Hello @pvarley

Thank you so much for your reply.

I just fixed this by setting the timeout value in the connection string, as follows:

from couchbase.bucket import Bucket

def writehandler(event, context):
    # cluster = Cluster('couchbase://172.31.39.193,172.31.46.79')
    # authenticator = PasswordAuthenticator('Proxima', 'proxima')
    # cluster.authenticate(authenticator)
    cb = Bucket('couchbase://172.31.39.193/firstbucket?operation_timeout=40&http_poolsize=0&fetch_mutation_tokens=1',
                username='Proxima', password='proxima')
    a = 1
    start_time = time.time()
    list1 = {}
    while a <= 100000:
        list1['doc_id_' + str(a)] = {'airlines': a}
        a = a + 1
    # print(list1)
    cb.upsert_multi(list1)
    print("Write time = %s seconds " % (time.time() - start_time))

Maybe it will help someone.
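As a side note, a long connection string like that is easy to mistype. Here is a small hypothetical helper (not part of the SDK) that builds the query-string options programmatically:

```python
from urllib.parse import urlencode

def with_options(base, **options):
    """Append query-string options (e.g. operation_timeout) to a
    Couchbase connection string. Hypothetical convenience helper."""
    sep = '&' if '?' in base else '?'
    return base + sep + urlencode(options)

conn = with_options('couchbase://172.31.39.193/firstbucket',
                    operation_timeout=40, http_poolsize=0,
                    fetch_mutation_tokens=1)
print(conn)
# couchbase://172.31.39.193/firstbucket?operation_timeout=40&http_poolsize=0&fetch_mutation_tokens=1
```

The resulting string can then be passed straight to `Bucket(...)` as in the snippet above.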

And again, thank you, Patrick. I will take a look at that.
