Push replication failing with large data sets

Hey,

I’m developing an Android app in Kenya with a user base that is largely rural and doesn’t have consistent access to the internet. We need to back up our users’ data, and have just started using Couchbase Lite for Android. Because of their poor connectivity, users often go long periods of time without uploading any data to the server, so push replications are frequently quite large. We need push to work with any reasonably large set of documents that takes up a reasonably large amount of space. Luckily we don’t need to sync any images or large files, just documents that are generally pretty small.

I haven’t plugged in a real dataset to test yet; I’m just testing with one type of document filled with randomly generated data to make sure I can roll out a scalable solution before we drop the app into real-world conditions. I’m currently testing on an Alcatel One Touch (a pretty sub-standard tablet, but it’s the kind of device our users are likely to have, so it’s a good test device) with ~65 MB worth of documents (25,000 docs at ~2.5 kB each). I flush the server before running the tests, so it’s purely a push replication.
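
In case the shape of the test data matters, this is roughly how I generate it (a minimal sketch; `makeRandomPayload()` is just a stand-in for whatever produces the ~2.5 kB of random fields):

```java
import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Database;
import com.couchbase.lite.Document;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class TestDataSeeder {

    // Creates `count` small documents with randomly generated properties.
    public static void seed(Database database, int count) throws CouchbaseLiteException {
        for (int i = 0; i < count; i++) {
            Document doc = database.createDocument();
            Map<String, Object> props = new HashMap<>();
            props.put("type", "test-record");
            props.put("index", i);
            props.put("payload", makeRandomPayload()); // stand-in for the ~2.5 kB of random data
            doc.putProperties(props);
        }
    }

    // Builds a random string of roughly 2.5 kB.
    private static String makeRandomPayload() {
        Random random = new Random();
        StringBuilder sb = new StringBuilder(2500);
        for (int i = 0; i < 2500; i++) {
            sb.append((char) ('a' + random.nextInt(26)));
        }
        return sb.toString();
    }
}
```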

By the time the ChangeEvent has registered about 24,000 changes (it hasn’t started uploading at this point), it starts generating OutOfMemory errors.

First of all, I’m not sure why the upload isn’t incremental in this case. When I test with smaller data sets, the upload starts before Couchbase Lite has registered all of the changed items. For example, when I run the push replication with 750 items, the upload starts after getChangesCount() returns 500 items, and that threshold is proportionally smaller with smaller data sets.
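
For reference, the counts I’m quoting come from the replication object in the change event, something like this (a trimmed sketch of the logging, not my full listener):

```java
push.addChangeListener(new Replication.ChangeListener() {
    @Override
    public void changed(Replication.ChangeEvent event) {
        Replication source = event.getSource();
        // Logs how far the push has progressed relative to the total registered changes.
        Log.d("Sync", "pushed " + source.getCompletedChangesCount()
                + " of " + source.getChangesCount()
                + " (status: " + source.getStatus() + ")");
    }
});
```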

Second, is there anything I can do to change this behaviour? I have some hacky solutions in mind to break up the upload (sketched below), but I don’t want to implement them if I don’t have to.
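
To be concrete, the hackiest idea is to slice the database into batches of document IDs and run a one-shot push per batch via setDocIds(). A rough sketch only, assuming the 1.x API (the batch size and the wait-for-completion logic would still need work):

```java
import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Database;
import com.couchbase.lite.QueryEnumerator;
import com.couchbase.lite.replicator.Replication;

import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class BatchedPush {

    // Pushes the database in slices of `batchSize` document IDs,
    // each slice as its own one-shot push replication.
    public static void pushInBatches(Database database, URL syncUrl, int batchSize)
            throws CouchbaseLiteException {
        List<String> allIds = new ArrayList<>();
        QueryEnumerator rows = database.createAllDocumentsQuery().run();
        while (rows.hasNext()) {
            allIds.add(rows.next().getDocumentId());
        }

        for (int start = 0; start < allIds.size(); start += batchSize) {
            List<String> batch =
                    new ArrayList<>(allIds.subList(start, Math.min(start + batchSize, allIds.size())));
            Replication batchPush = database.createPushReplication(syncUrl);
            batchPush.setContinuous(false);
            batchPush.setDocIds(batch);
            batchPush.start();
            // In a real version I'd wait for this replication to become idle/stopped
            // (via a change listener) before starting the next batch.
        }
    }
}
```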

Here are the OOM errors I’m getting:
```
06-09 19:31:20.321 6978-7368/org.mycompany.mycompany E/Sync: RemoteRequest.run() exception: %s
java.lang.OutOfMemoryError
    at com.fasterxml.jackson.core.util.BufferRecycler.calloc(BufferRecycler.java:156)
    at com.fasterxml.jackson.core.util.BufferRecycler.allocCharBuffer(BufferRecycler.java:124)
    at com.fasterxml.jackson.core.util.BufferRecycler.allocCharBuffer(BufferRecycler.java:114)
    at com.fasterxml.jackson.core.io.IOContext.allocConcatBuffer(IOContext.java:186)
    at com.fasterxml.jackson.core.json.UTF8JsonGenerator.<init>(UTF8JsonGenerator.java:126)
    at com.fasterxml.jackson.core.JsonFactory._createUTF8Generator(JsonFactory.java:1284)
    at com.fasterxml.jackson.core.JsonFactory.createGenerator(JsonFactory.java:1016)
    at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsBytes(ObjectMapper.java:2801)
    at com.couchbase.lite.support.RemoteRequest.setBody(RemoteRequest.java:326)
    at com.couchbase.lite.support.RemoteRequest.run(RemoteRequest.java:96)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:422)
    at java.util.concurrent.FutureTask.run(FutureTask.java:237)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:152)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:265)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
    at java.lang.Thread.run(Thread.java:841)

06-09 19:31:23.645 6978-7371/org.mc.mc E/dalvikvm-heap: Out of memory on a 7817-byte allocation.

06-09 19:31:23.652 6978-7371/org.mc.mc E/Sync: RemoteRequest.run() exception: %s
java.lang.OutOfMemoryError
    at com.fasterxml.jackson.core.util.ByteArrayBuilder.toByteArray(ByteArrayBuilder.java:118)
    at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsBytes(ObjectMapper.java:2807)
    at com.couchbase.lite.support.RemoteRequest.setBody(RemoteRequest.java:326)
    at com.couchbase.lite.support.RemoteRequest.run(RemoteRequest.java:96)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:422)
    at java.util.concurrent.FutureTask.run(FutureTask.java:237)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:152)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:265)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
    at java.lang.Thread.run(Thread.java:841)
```

And here’s my sync function:

```java
final Replication pull = database.createPullReplication(createSyncURL(false));
final Replication push = database.createPushReplication(createSyncURL(false));

final NotificationCompat.Builder builder = new NotificationCompat.Builder(appContext);
final NotificationManager notifyManager = (NotificationManager) context.getSystemService(
        Context.NOTIFICATION_SERVICE);

// One-shot pull: update the progress notification on each change event.
pull.setContinuous(false);
pull.addChangeListener(new Replication.ChangeListener() {
    @Override
    public void changed(Replication.ChangeEvent event) {
        processSyncChangeEvent(event, pull, push, notifyManager, builder,
                appContext.getString(R.string.downloaded_items),
                appContext.getString(R.string.prep_downloaded_items), appContext);
    }
});
if (pullListener != null) {
    pull.addChangeListener(pullListener);
}

// One-shot push: same pattern, with the upload strings for the notification.
push.setContinuous(false);
push.addChangeListener(new Replication.ChangeListener() {
    @Override
    public void changed(Replication.ChangeEvent event) {
        processSyncChangeEvent(event, push, pull, notifyManager, builder,
                appContext.getString(R.string.uploaded_items),
                appContext.getString(R.string.prep_uploaded_items), appContext);
    }
});
if (pushListener != null) {
    push.addChangeListener(pushListener);
}

pull.start();
push.start();
```

How did you finally solve this issue?

Hi, I posted on the GitHub page and they’ve fixed it in newer versions. If you’re using 1.3.0+, you should be fine.
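
For anyone else hitting this, upgrading should just mean bumping the dependency version, roughly like this (assuming you pull Couchbase Lite Android from Maven; double-check the coordinates and the latest version against the official docs):

```groovy
dependencies {
    compile 'com.couchbase.lite:couchbase-lite-android:1.3.0'
}
```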