This doesn’t seem to work “off the shelf”, so I’m wondering what the best practice is:
Our Sync Gateway sync function authorises push replication with requireAccess, to prevent users from pushing unauthorised documents. If a user’s authorised channels change, however, the client replicator does not seem to be notified, nor is it triggered to re-push any documents that previously failed replication (because they were unauthorised at the time). The upshot is that it does not seem possible to simply modify a user’s authorised channels on the server and then expect a continuously replicating client to retrospectively push any existing, newly authorised documents.
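For reference, a minimal sketch of the kind of sync function we’re describing (the channel field on the doc is illustrative, not our actual schema; requireAccess and channel are the standard sync-function calls):

```javascript
// Sync Gateway sync function (sketch). A push is rejected unless the
// user already has access to the channel the doc claims; otherwise the
// doc is routed into that channel.
function syncFn(doc, oldDoc) {
  // Throws (rejecting the write) if the pushing user lacks access.
  requireAccess(doc.channel);
  // Route the accepted doc into its channel.
  channel(doc.channel);
}
```

With a function like this, a doc pushed before the user was granted doc.channel is rejected, and nothing tells the client to retry it after the grant changes.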
So is it correct to assume that we need to implement this behaviour ourselves? If so, what’s the best practice or recommended approach? It feels like this is a fairly common use case, so I’m hoping that there is a simple solution (or that we missed something somewhere along the way!)
As it happens, we already have a mechanism to notify the client of the changed user permissions (those permissions are themselves stored in a doc that is replicated to the client), so the bigger part of the problem for us is how to initiate a re-push of newly authorised documents. The simplest way would seem to be to instruct the client to restart replication from scratch, but it’s not even clear how to do that using the iOS client.
So Couchbase support have told me that I need to clear the server checkpoint _local doc and restart replication. The checkpoint doc ID is not publicly available, though, so any suggestions?
As a related question: how are _local doc operations authorised on the public Sync Gateway port? (They clearly don’t go through the sync function, or they’d be blocked by our current configuration.) At first glance, it looks like all clients are authorised to upsert all _local docs! Is that correct? It seems like a DoS/security concern if it is.
If you are mining your Sync Gateway logs, you’ll be able to track users and the checkpoints that get created/updated. From there you can just do a simple GET on the checkpoint doc and update its value to "last_seq": 0, so the next push from Couchbase Lite will start from zero.
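Roughly, as a sketch (the last_seq field is as described above; SG_URL, DB, CHECKPOINT_ID and any auth are placeholders you’d fill in from your logs):

```javascript
// Sketch: reset a replication checkpoint so the next push starts from zero.
// Keeps _rev so the subsequent PUT doesn't conflict; zeroes the sequence.
function resetCheckpoint(checkpointDoc) {
  return Object.assign({}, checkpointDoc, { last_seq: 0 });
}

// Usage against Sync Gateway's REST API (not executed here):
//   const res = await fetch(`${SG_URL}/${DB}/_local/${CHECKPOINT_ID}`);
//   const doc = await res.json();
//   await fetch(`${SG_URL}/${DB}/_local/${CHECKPOINT_ID}`, {
//     method: 'PUT',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(resetCheckpoint(doc)),
//   });
```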
Possibly. Are you using session-based auth or basic? Also, are you tracking your CRUD operations in SG to monitor replication behavior? Most people use the logs to audit who, when, and which docs/versions were pulled and pushed by users, including local docs.
Are the user’s authorized channels changing via a doc update, or a REST admin API call?
then it does not seem that the client replicator is notified
The client replicator generally isn’t aware of what docs sync gateway might reject due to write permissions.
The upshot of this is that it does not seem to be possible to simply modify a user’s authorised channels on the server, and then expect a continuously replicating client to now retrospectively push any existing, newly authorised documents.
Yeah, that sounds correct. I don’t know of any built-in way to solve that problem. I’m asking around with the mobile team.
If so, what’s the best practice or recommended approach? It feels like this is a fairly common use case, so I’m hoping that there is a simple solution (or that we missed something somewhere along the way!)
I would say that if you keep the assignment of users to channels more static, and instead have docs move in and out of channels as needed for your security model (as described in SG 264), it will avoid the problem altogether.
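As a sketch of that model (doc.status and doc.owner are made-up fields, purely to illustrate routing docs between channels while the user-to-channel grants stay fixed):

```javascript
// Sync function sketch for the "static grants" model: users keep a fixed
// channel grant, and documents move between channels based on doc state.
function syncFnStatic(doc, oldDoc) {
  if (doc.status === "shared") {
    // Visible to everyone who holds the (fixed) shared channel grant.
    channel("shared-" + doc.owner);
  } else {
    // Otherwise only visible in the owner's private channel.
    channel("private-" + doc.owner);
  }
}
```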
I wouldn’t recommend fiddling with the checkpoints stored in _local docs. Those are an internal mechanism and I think resetting those should be considered an absolute last resort.
One correction. After chatting w/ colleagues, it looks like the changes from SG 264 only really apply to pull replications, but not to push replications.
There’s no way for Lite to know which documents it should retry (i.e. it doesn’t know that a previous PUT was rejected due to missing access to channel X)
You are left with two choices from what I can tell:
Trigger a full re-push by resetting the local checkpoint on the Couchbase Lite instance, or the corresponding _local doc ID on the Sync Gateway. As you mentioned, there isn’t a public API for either of those at the moment.
Come up with a signaling mechanism where you can signal down to the Couchbase Lite instance to touch each affected doc (add a new revision), which will trigger that doc to get re-pushed.
I’d recommend the latter, since it seems more sustainable and doesn’t touch internal checkpoints. If it could be made “generic” enough, then maybe it could be turned into an “out-of-band” signaling mechanism where couchbase lite would handle the work for you. This would be a product enhancement, but I agree with you that it seems like a common pattern that users might run into.
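A rough sketch of that touch idea (touchedAt is a made-up property; in a real client you’d save each returned doc back to the local database via the Couchbase Lite SDK so it gains a new revision the push replicator picks up):

```javascript
// Bump a benign property so the doc gets a new revision when saved,
// which makes the push replicator send it again.
function touch(doc) {
  return Object.assign({}, doc, { touchedAt: Date.now() });
}

// Apply the touch to every doc the signal identified as newly authorised.
// Each returned doc would then be persisted locally to create the revision.
function retouchDocs(docs) {
  return docs.map(touch);
}
```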
Well, it’s “all clients with access to the database”, so unless you leave guest access enabled, someone has to have a user account to be able to do it.
Also, replication checkpoints are designed to have unguessable IDs, so there’s no way for someone to read or write another client’s checkpoint.
But yes, it’s possible for a user to spam the server with enormous numbers of (or enormously large) local documents. It’s auditable, though. The replication protocol comes from CouchDB and we needed to keep this behavior for compatibility. The upcoming 2.0 replication protocol is entirely different and doesn’t have this “feature”, though.
Thanks for the comments. I wouldn’t be confident scraping the logs to monitor CRUD operations, though. We need guaranteed detection of these events, and since our Sync Gateway instances are dynamically scaled and load-balanced, there would definitely be occasions where an instance could be terminated suddenly without having had the opportunity to detect a log event. I’d also prefer to manage this from the client end rather than have the server maintain it: the implementation is much simpler that way.
That wouldn’t work for us in this case, Traun: this is a push replication from the client to the server, and the root of the problem is in having the client initiate a re-push of previously rejected documents. We already have a solution for notifying the client that authorisations have changed (the authorised channels are themselves contained in a document that replicates to the client, so the client just watches for change events on that doc).
Thanks Jens, that’s exactly what we were looking for. (It’s not very discoverable in the documentation BTW, we did spend quite a while searching for it. Also, Couchbase support told us that there was no such capability, so somebody might want to point them towards that doco too!)
You mention the upcoming 2.0 BLIP replicator. Can you confirm that the ability to reset replication in a similar fashion will be publicly available on the client API for this protocol too? Where are the replication checkpoints stored in this new protocol? From my rudimentary understanding, I thought it was essentially the same CouchDB replication “protocol” just implemented over a more performant binary transport, is that not the case?
BTW, feedback from the iOS developers on that API, just for your information:
The constants that define those strings are in a private header - not accessible (or even included) in the library. You only see it if you download the source, and then still can’t use the constants from within an app.
Also, it only checks whether that key is present. You could set it to @NO and it would still have the same effect that the docs say setting it to @YES should have.
Following that error, saveLastSequence recovers by replacing the locally modified value with the value on the server, which is precisely what we don’t want to do!
Presumably the server document should be altered directly or deleted? This would ideally be an async method that returns an error/success, since there isn’t enough richness in the delegate status callback to determine what’s really going on.
Well crap, that’s definitely a bug. If there’s no local checkpoint, then when saving the remote checkpoint and hitting a conflict (expected) it should overwrite it instead of restoring the remote value.