Before I attempt a response to this I’d like to clear a few things up. There is no such thing as web socket P2P replication (the P2P ‘server’ side does not support it). Are you perhaps talking about continuous replication vs one-shot? I have a large data set that I test with, a DB containing about 1,000,000 revisions (roughly 700 MB in size), and I’ve never observed out-of-memory issues. However, that is for operations against Sync Gateway, which leads me to my next question: which side is the one crashing? The ‘client’ side of the P2P (receiver of the pull) or the ‘server’ side (sender of the pull)?
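For reference, the two modes differ only by one flag at the API level. Here is a rough sketch against the Couchbase Lite 2.x Java API (the class names and endpoint URL are assumptions on my part, exact names vary by version and platform, and a true P2P target would use a MessageEndpoint rather than a URL):

```java
import java.net.URI;
import com.couchbase.lite.*;

public class PullExample {
    public static void main(String[] args) throws Exception {
        CouchbaseLite.init(); // required before any other call on the Java/Android SDK

        Database db = new Database("mydb", new DatabaseConfiguration());

        // Hypothetical Sync Gateway endpoint for illustration only
        Endpoint target = new URLEndpoint(new URI("wss://sync.example.com:4984/mydb"));

        ReplicatorConfiguration config = new ReplicatorConfiguration(db, target);
        config.setReplicatorType(ReplicatorConfiguration.ReplicatorType.PULL);

        // false = one-shot (stops once it has caught up), true = continuous (stays open)
        config.setContinuous(false);

        Replicator replicator = new Replicator(config);
        replicator.addChangeListener(change ->
            System.out.println("Status: " + change.getStatus().getActivityLevel()));
        replicator.start();
    }
}
```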
Also, you are correct that continuous and non-continuous replication store their checkpoints differently. However, it’s not as bad as pushing up revisions that are not needed: part of the algorithm involves agreeing on which revisions are needed before actually sending them, and that negotiation is a lot faster than the actual upload or download.
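To illustrate why a lost or mismatched checkpoint is cheap: the sender first proposes the revisions it has, the receiver answers with the subset it is actually missing, and only those bodies get transferred. A toy sketch of that shape (not the real replication protocol, just the idea):

```java
import java.util.*;

class ChangesNegotiation {
    // Revisions the receiver already has (e.g. everything synced before the checkpoint was lost)
    static Set<String> receiverKnownRevs = new HashSet<>(Arrays.asList("doc1#2-a", "doc2#5-c"));

    // Receiver's reply to a "changes" proposal: only the revisions it still needs
    static List<String> revsNeeded(List<String> proposedRevs) {
        List<String> needed = new ArrayList<>();
        for (String rev : proposedRevs) {
            if (!receiverKnownRevs.contains(rev)) needed.add(rev);
        }
        return needed;
    }

    public static void main(String[] args) {
        List<String> proposed = Arrays.asList("doc1#2-a", "doc2#5-c", "doc3#1-f");
        // Only "doc3#1-f" gets sent; the rest is skipped after a cheap comparison
        System.out.println("Bodies to send: " + revsNeeded(proposed));
    }
}
```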
One suggestion would be to preseed the DBs on the client. That means you bundle a snapshot of your data along with the app, and the replication can start from that point instead of from zero (I assume the ‘10 years of data’ is not going to change anymore?). That will also save you a lot of time, since you won’t have to go through the motions of safely syncing data you already know you need from the beginning.
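Concretely, preseeding means shipping a prebuilt .cblite2 database inside the app package and copying it into place on first launch, so the replicator only has to catch up from that snapshot. A rough sketch with the Couchbase Lite Java API (the file paths are assumptions; on Android you would typically extract the bundle from assets first):

```java
import java.io.File;
import com.couchbase.lite.*;

public class PreseedExample {
    public static void main(String[] args) throws CouchbaseLiteException {
        CouchbaseLite.init();

        DatabaseConfiguration config = new DatabaseConfiguration();

        // Only copy the bundled snapshot if the database doesn't exist yet;
        // otherwise we'd overwrite data (and checkpoints) from previous runs.
        if (!Database.exists("mydb", new File(config.getDirectory()))) {
            File prebuilt = new File("prebuilt/mydb.cblite2"); // hypothetical bundled path
            Database.copy(prebuilt, "mydb", config);
        }

        Database db = new Database("mydb", config);
        // ...start the replicator here; it only needs to pull changes made after the snapshot
    }
}
```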