It’s actually very difficult to report accurate progress, especially for pull replications, without causing serious performance problems. The replicator streams data in both directions, including the “changes feed” from the server: the list of documents that need to be updated. The total size of the replication isn’t known until that entire feed has been received. If CBL waited to download the whole feed before pulling any documents, replication would slow down significantly; and in some cases the list is large enough to cause memory issues (yes, we had a crashing bug related to this years ago). Moreover, if SG had to compute the data size of every document in the changes feed, generating that feed would become more expensive and reduce scalability.
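To illustrate why streaming makes progress reporting awkward, here’s a minimal sketch (not CBL’s actual API; the batch sizes and pull rate are made up). Because the changes feed arrives in batches while documents are already being pulled, the “total” keeps growing, so a naive percentage can move backwards even though work is steadily completing:

```python
# Illustrative sketch, not Couchbase Lite's real implementation: while the
# changes feed is still streaming, the known total keeps growing, so a
# naive completed/total percentage can decrease over time.

def progress_over_time(feed_batches, pull_rate):
    """Yield (completed, known_total, percent) as feed batches arrive
    and documents are pulled concurrently."""
    known_total = 0
    completed = 0
    history = []
    for batch in feed_batches:
        known_total += len(batch)                            # more changes discovered
        completed = min(completed + pull_rate, known_total)  # docs pulled meanwhile
        history.append((completed, known_total,
                        round(100 * completed / known_total)))
    return history

if __name__ == "__main__":
    # Three changes-feed batches of 10, 50, and 200 doc IDs,
    # pulling 20 docs per tick.
    for completed, total, pct in progress_over_time(
            [[0] * 10, [0] * 50, [0] * 200], 20):
        print(f"{completed}/{total} = {pct}%")
```

Running this prints a progress figure that jumps from 100% back down as later batches arrive, which is exactly the misleading behavior a real progress API has to avoid.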