Couchbase client fails to cache objects smaller than 20 MB (timeout error)

I am caching my serialized POJOs (4 MB to 8 MB each) concurrently into a Couchbase server using the Java Couchbase client (couchbase-client-1.4.3).

for (int i = 0; i < 20; i++) {
    new Thread(…).start(); // this thread caches the objects
    Thread.sleep(500); // the less sleep time, the more cache failures :frowning:
}
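To show the shape of the problem more concretely, here is roughly what that loop does with a bounded thread pool instead of raw threads. `cacheObject()` is only a placeholder for the real `client.set(...)` call, not actual Couchbase code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedCacher {

    // Hypothetical stand-in for the real cache write, e.g.
    // client.set("key" + id, 0, serializedPojo)
    static void cacheObject(int id) {
        // real Couchbase set() would go here
    }

    // Submit `iterations` cache writes, at most `parallelism` in flight at once.
    public static int cacheAll(int iterations, int parallelism) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < iterations; i++) {
            final int id = i;
            pool.submit(() -> {
                cacheObject(id);
                done.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return done.get();
    }
}
```

Capping the number of concurrent writers this way is one thing I could try, since each 4 MB to 8 MB write takes a while and unbounded threads pile up requests on the client.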

I have 2 replicated servers. The client can cache small objects, but as the object size grows it starts throwing exceptions.

Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node:
at net.spy.memcached.internal.OperationFuture.get(
at net.spy.memcached.internal.OperationFuture.get(

I found similar questions and answers on Stack Overflow. However, I am not in a position to add more memory, because the applications that use the Couchbase client have their own memory constraints. I did try adding JVM arguments such as -XX:+UseConcMarkSweepGC -XX:MaxGCPauseMillis=500.

This is how I create the Couchbase cache client:

CouchbaseConnectionFactoryBuilder cfb = new CouchbaseConnectionFactoryBuilder();
client = new CouchbaseClient(cfb.buildCouchbaseConnection(
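One knob I know of on that builder is the per-operation timeout (the default is 2.5 s, which a multi-megabyte write can easily exceed). A sketch of raising it, where the node URI and the "default" bucket with an empty password are placeholders for my real settings:

```java
import com.couchbase.client.CouchbaseClient;
import com.couchbase.client.CouchbaseConnectionFactoryBuilder;
import java.io.IOException;
import java.net.URI;
import java.util.Arrays;

public class ClientSetup {
    public static CouchbaseClient build() throws IOException {
        CouchbaseConnectionFactoryBuilder cfb = new CouchbaseConnectionFactoryBuilder();
        cfb.setOpTimeout(10000); // 10 s instead of the 2.5 s default
        return new CouchbaseClient(cfb.buildCouchbaseConnection(
            Arrays.asList(URI.create("http://127.0.0.1:8091/pools")), // placeholder node
            "default", "")); // placeholder bucket name and password
    }
}
```

A longer timeout obviously doesn't make the writes faster, but it would at least tell me whether the operations eventually succeed or genuinely fail.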

I tried using the maximum possible time gaps between writes to make caching succeed and avoid timeouts, but that doesn't work either. In our live applications, 7 or 8 cache writes can happen within a second, and the applications cannot block until a write completes. (If they had to wait, caching would be useless because of the time it consumes; going straight to the database would be cheaper!)

Please, can anyone tell me how to improve my Couchbase client to avoid these timeouts and improve performance? (Given my hardware and JVM limitations, I am looking for client-side improvements.) Can't I do the serialization and compression outside the Couchbase client, i.e. do it myself?
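On that last point, what I have in mind is serializing and GZIP-compressing the POJO myself and handing the client a plain byte[] to store, so smaller payloads go over the wire. These helper methods are my own sketch, not part of the Couchbase API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class Gzip {

    // Serialize a POJO and GZIP it, ready to store as a raw byte[].
    public static byte[] compress(Serializable pojo) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(new GZIPOutputStream(bytes))) {
            out.writeObject(pojo);
        } // closing the ObjectOutputStream finishes the GZIP stream
        return bytes.toByteArray();
    }

    // Reverse: gunzip and deserialize on read.
    public static Object decompress(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(
                new GZIPInputStream(new ByteArrayInputStream(data)))) {
            return in.readObject();
        }
    }
}
```

I would then call something like client.set(key, 0, Gzip.compress(pojo)); as far as I understand, the default transcoder stores a byte[] as-is without re-serializing it, but I'd appreciate confirmation.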