The challenge in answering this one is that it depends on the actions you take operationally. The system is flexible here, so how you handle this situation is partially in your control.
One scenario is that the node fails after the write has been acknowledged as persisted, but before it has been replicated. In that case, if you do not fail over the node, the item will be replicated when the node comes back online.
Another scenario is that you have autofailover enabled, and the node fails after the write is received by the primary but before it is replicated or persisted; autofailover then promotes a replica to primary. In this case, your application will have seen the failure to achieve the requested durability level. If the previous primary does come back online, it will resync with the state of the current cluster before rejoining, meaning the state of the newly promoted primary wins and the unreplicated write is discarded.
There isn’t a single best practice here, arguably because there isn’t a single “what do I do in this situation?” answer. Many of our users prioritize availability over higher levels of durability in the face of failure: they turn on autofailover and let their applications, which have more context, decide what to do for particular data mutations. Other users prefer not to use autofailover and have a human decide what to do next. Still others accept a longer period of unavailability while they attempt to recover the node.
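As a rough illustration of the “application has more context” approach: when a durable write fails ambiguously, the application can read the document back to see whether the mutation actually survived, and retry if it did not. This is only a sketch with a simulated store and a hypothetical `DurabilityFailed` exception, not real SDK code; it assumes the mutation is idempotent so retrying is safe.

```python
class DurabilityFailed(Exception):
    """Hypothetical error: write outcome is ambiguous (durability not confirmed)."""

class SimulatedStore:
    """Toy stand-in for a cluster: the first durable write fails ambiguously."""
    def __init__(self):
        self.data = {}
        self._fail_next = True

    def durable_upsert(self, key, value):
        if self._fail_next:
            self._fail_next = False
            # Simulate a failover racing the write: the mutation is lost.
            raise DurabilityFailed(key)
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

def upsert_with_retry(store, key, value, max_attempts=3):
    """Retry an idempotent upsert until durability is confirmed or attempts run out."""
    for _ in range(max_attempts):
        try:
            store.durable_upsert(key, value)
            return True
        except DurabilityFailed:
            # Ambiguous outcome: check whether the write actually survived.
            if store.get(key) == value:
                return True
            # It did not survive; safe to retry because the upsert is idempotent.
    return False

store = SimulatedStore()
assert upsert_with_retry(store, "order::42", {"status": "paid"})
assert store.get("order::42") == {"status": "paid"}
```

The key design point is that the read-back step distinguishes “the write was lost” from “the write landed but the acknowledgment didn’t”; for non-idempotent mutations the application would need a different recovery strategy.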
I hope that helps. Is there a specific goal you have with this update? If so, we can suggest some options.