Needing to set very long timeouts

I have a Java program that does simple key/value upserts and gets. It works well, but for reasons I can't explain I have to set very long timeouts, as shown below. The effect does seem somewhat related to how far away the server is: for servers across the ocean I need longer timeouts. But even in those cases, the timeout I need to use is far longer than any connection actually takes. I have seen the same thing with US servers (about 200 miles away) where the true round-trip latency is only around 50 ms, yet I still need to set much larger timeouts.

Once I set the big timeouts, everything works very fast.

Has anyone else bumped into this?

import java.util.concurrent.TimeUnit;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

// All timeout values below are in milliseconds.
final CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
    .socketConnectTimeout((int) TimeUnit.SECONDS.toMillis(30))
    .connectTimeout(TimeUnit.SECONDS.toMillis(30))
    .kvTimeout(TimeUnit.SECONDS.toMillis(5)).build();

// cluster_nodes: list of node hostnames/IPs to bootstrap from.
final Cluster cluster = CouchbaseCluster.create(env, cluster_nodes);

This feels very environmental to me, because the timeouts here are just timers on your side that fire if an operation takes longer than expected. The timer you set has no influence on the operation itself once it goes over the wire.

I would recommend using Wireshark or other monitoring tools to verify that your network latencies are stable over longer periods of time. It may well be that you are getting occasional spikes, and that is what surfaces as a timeout (a timeout is always just a symptom, never the cause).
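In addition to Wireshark, you can gather the same evidence from the application side by timing each operation and looking at the latency distribution over a long run. Here is a minimal, self-contained sketch; the no-op `Runnable` in `main` is a stand-in for your real call (e.g. a `bucket.get(key)`):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LatencyProbe {
    private final List<Long> samplesMicros = new ArrayList<>();

    // Times a single operation and records its latency in microseconds.
    public long time(Runnable op) {
        long start = System.nanoTime();
        op.run();
        long micros = (System.nanoTime() - start) / 1_000L;
        samplesMicros.add(micros);
        return micros;
    }

    // Returns the given percentile (0-100) of recorded latencies.
    public long percentile(double p) {
        List<Long> sorted = new ArrayList<>(samplesMicros);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        LatencyProbe probe = new LatencyProbe();
        for (int i = 0; i < 1000; i++) {
            // Replace this no-op with your real operation, e.g. bucket.get(key).
            probe.time(() -> { });
        }
        System.out.println("p50=" + probe.percentile(50) + "us"
            + " p99=" + probe.percentile(99) + "us"
            + " max=" + probe.percentile(100) + "us");
    }
}
```

If p50 is ~50 ms but p99/max occasionally jump into the seconds, spikes rather than a misconfigured SDK are the likely explanation.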

Thanks, I will look into that. The odd thing is that I need to set the timeouts to much larger values than the times actually observed. Meaning, I am setting some of the values to 30 seconds, but nothing ever takes anywhere close to that. I am wondering if the SDK is misinterpreting the numbers, making them seem smaller than they are…??
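A unit mix-up is easy to rule out: the 2.x environment builder methods in your snippet take milliseconds (assuming you are on the 2.x Java SDK), so you can print the raw values you are actually handing to the builder before creating the environment. A quick sanity check:

```java
import java.util.concurrent.TimeUnit;

public class TimeoutUnits {
    public static void main(String[] args) {
        // These are the exact values passed to the builder in the snippet above.
        long connectMs = TimeUnit.SECONDS.toMillis(30);
        long kvMs = TimeUnit.SECONDS.toMillis(5);
        System.out.println("connectTimeout=" + connectMs + "ms, kvTimeout=" + kvMs + "ms");
        // prints: connectTimeout=30000ms, kvTimeout=5000ms
    }
}
```

If that prints 30000 and 5000, the numbers reaching the SDK are what you intended, and the cause is more likely intermittent latency spikes than unit conversion.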