DurabilityLevel - acknowledgment, performance

I have upgraded to the latest server, 6.6, and wish to verify server-side persistence of my upserts.
I am now setting durabilityLevel to MAJORITY_AND_PERSIST_TO_ACTIVE.
I get no errors, but with this setting the call is much faster than when it was not set, which seems incorrect.
The upsert is done on an AsyncCollection, and timing stops once I have received the mutationToken from the MutationResult future.
What could be wrong? When is the server persistence verified?

@malin did you use PersistTo and/or ReplicateTo before you switched to DurabilityLevel? Can you show us the before and after code so we can compare?


Actually no, we did not use persistTo/replicateTo. We tried it briefly but ran into a problem (I think we got reactor.core.Exceptions$OverflowException: Could not emit value due to lack of requests), so we decided to upgrade the server instead.
Our current production code is

final CompletableFuture<MutationResult> observable = bucket.upsert(id.toString(), serialized, UpsertOptions.upsertOptions().expiry(Duration.ofSeconds(timeToLiveSeconds)));

and what we are trying now is

final CompletableFuture<MutationResult> observable = bucket.upsert(id.toString(), serialized, UpsertOptions.upsertOptions().expiry(Duration.ofSeconds(timeToLiveSeconds)).durability(durabilityLevel));

In both cases we finish with
final MutationToken single = resultFuture.get(timeout, timeUnit).mutationToken().get();
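For reference, this is roughly how we measure the latency. The sketch below is stdlib-only: simulatedUpsert is a hypothetical stand-in for the SDK call (not Couchbase code), completing after an artificial delay the way a durable write takes longer to acknowledge. The clock starts before the operation is issued and stops only once the future has completed:

```java
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class TimingSketch {

    // Hypothetical stand-in for the async upsert: completes after a simulated
    // server-side delay, standing in for the time a durable write needs.
    static CompletableFuture<String> simulatedUpsert(Duration serverDelay) {
        return CompletableFuture.supplyAsync(
                () -> "mutation-token",
                CompletableFuture.delayedExecutor(serverDelay.toMillis(), TimeUnit.MILLISECONDS));
    }

    // Start the clock BEFORE issuing the operation and stop it only once the
    // future has completed, i.e. after the (simulated) durability requirement
    // has been met. Returns the elapsed wall-clock time in milliseconds.
    static long timedUpsertMillis(Duration serverDelay) {
        long start = System.nanoTime();
        String token = simulatedUpsert(serverDelay).orTimeout(5, TimeUnit.SECONDS).join();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("token=" + token + ", elapsedMillis=" + elapsedMillis);
        return elapsedMillis;
    }

    public static void main(String[] args) {
        timedUpsertMillis(Duration.ofMillis(50));
    }
}
```

The key point is that the measured span covers the whole round trip up to future completion; starting the clock any later (or reading it before the get/join returns) would make a durable write look artificially fast.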

@malin could it be that the .get() throws an exception which gets swallowed, so you think it is completing when it is actually failing? There are two more ways to double-check:

  • run with and without the durability level, capture a tcpdump, and compare the latencies in Wireshark
  • enable trace-level logging on your logger and, on the IoConfig, enable captureTraffic() either for all services or just for the KV service. Then look at the logs and compare request-to-response times of the payload.

You can also post the logs here and we can take a look (ideally: same environment, same setup, once with the durability level and once without).
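The captureTraffic() setup might look like the following sketch, assuming the Java SDK 3.x configuration API (the hostname and credentials are placeholders):

```java
import com.couchbase.client.core.env.IoConfig;
import com.couchbase.client.core.service.ServiceType;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.ClusterOptions;
import com.couchbase.client.java.env.ClusterEnvironment;

public class CaptureTrafficSketch {
    public static void main(String[] args) {
        // Capture traffic for just the KV service so that, with trace-level
        // logging enabled, the logs include the raw request/response payloads.
        ClusterEnvironment env = ClusterEnvironment.builder()
                .ioConfig(IoConfig.captureTraffic(ServiceType.KV))
                .build();

        Cluster cluster = Cluster.connect("127.0.0.1",
                ClusterOptions.clusterOptions("user", "pass").environment(env));
    }
}
```

Remember that trace logging must also be enabled in your logging framework for the captured traffic to actually appear in the logs.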


Thanks for your help @daschl !

No, there are no exceptions; logging the token works fine. I tried the IoConfig, but the hex flood is not manageable …
I wrote a small test client and it seems to work. My current theory is that our timing is incorrect … will investigate that.

OK, I think the problem was our timing!
The time now increases when using MAJORITY_AND_PERSIST_TO_ACTIVE instead of NONE, as it should.

(Still a little confused, though, since it seems that not setting the durability at all made the call slower, as if it were synchronous? The .durability(NONE) call seemed to return faster than not setting the durability at all … I thought those should be the same, right?)

@malin yes, it should be the same; it doesn't matter whether the durability is NONE or not set at all.
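In other words, assuming the SDK 3.x options API, these two option sets should behave identically on the wire (the 60-second expiry here is just an illustrative value):

```java
import java.time.Duration;
import com.couchbase.client.core.msg.kv.DurabilityLevel;
import com.couchbase.client.java.kv.UpsertOptions;

public class DurabilityDefaultSketch {
    public static void main(String[] args) {
        // Durability left unset: the SDK applies no durability requirement.
        UpsertOptions implicitNone = UpsertOptions.upsertOptions()
                .expiry(Duration.ofSeconds(60));

        // Durability set explicitly to NONE: should be equivalent to the above.
        UpsertOptions explicitNone = UpsertOptions.upsertOptions()
                .expiry(Duration.ofSeconds(60))
                .durability(DurabilityLevel.NONE);
    }
}
```

So any measured difference between the two should come down to measurement noise rather than a real behavioral difference.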