CouchbaseLite Swift (iOS) upsert is too slow

I’m adding approximately 1 million records to the database via upsert (code snippet below).
It processes 80-100 records per second, and that rate drops further as the data in the DB grows.

SDK version

pod 'CouchbaseLite-Swift', '3.1.1'

Please let me know if I’m doing anything wrong here.

private func upsertObjects(_ objects: [[String: Any]], inTable table: String, skipDirty: Bool = true, replace: Bool = true) -> Bool {
    do {
        ADPLog.Debug("start batch")
        // Look up the collection once instead of twice per record.
        let collection = try database.defaultCollection()
        try database.inBatch {
            for object in objects {
                guard let objectId = object["id"] as? String else {
                    continue
                }
                // Read the existing revision (if any) so we can skip or merge.
                let existingDoc = try collection.document(id: objectId)?.toDictionary()
                if skipDirty, let existingDoc, isDirty(json: existingDoc) {
                    continue
                }
                var updatedJSON = object
                if replace, let existingDoc {
                    updatedJSON = mergeJSON(existingDoc, updatedJSON)
                }
                let mutableDoc = MutableDocument(id: objectId, data: updatedJSON)
                mutableDoc.setValue(table, forKey: DOCTYPE)
                try collection.save(document: mutableDoc)
            }
        }
        ADPLog.Debug("end batch")
        return true
    }
    catch {
        ADPLog.Error(error.localizedDescription)
        return false
    }
}

private func isDirty(json: [String: Any]) -> Bool {
    return json[IS_DIRTY] as? Bool == true
}
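
The `mergeJSON` helper isn’t shown in the snippet; a minimal sketch of what it might do, assuming a shallow merge where the incoming record’s keys override the stored ones, could look like this (the function name and merge policy are assumptions, not the poster’s actual code):

```swift
// Hypothetical shallow merge: keys in `update` override keys in `existing`.
// The real mergeJSON in the post is not shown; this is only a sketch.
func mergeJSON(_ existing: [String: Any], _ update: [String: Any]) -> [String: Any] {
    var merged = existing
    for (key, value) in update {
        merged[key] = value   // the incoming record wins on conflicts
    }
    return merged
}
```

For example, `mergeJSON(["a": 1, "b": 2], ["b": 3])` keeps `"a"` from the stored document and takes `"b"` from the update.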

I’m not a CBLite guy, but look at the server dashboard to see what the server is doing during the inserts.
Also, you might get better performance by using smaller batches.

Thanks for the input. These upserts are on a local copy of the DB, and I’m adding data in chunks of 20k records.
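
Splitting the million records into fixed-size chunks before calling the upsert function can be sketched like this (the helper name and chunk size are illustrative, not from the original code):

```swift
// Sketch: split a large array into fixed-size chunks (e.g. 20k records),
// so each chunk can be passed to upsertObjects as one batch.
func chunked<T>(_ items: [T], size: Int) -> [[T]] {
    guard size > 0 else { return [] }
    return stride(from: 0, to: items.count, by: size).map {
        Array(items[$0 ..< min($0 + size, items.count)])
    }
}
```

Each chunk then maps to one `inBatch` transaction, which keeps individual transactions bounded while still amortizing the commit cost.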

Can you try time profiling with Xcode and share the result?

Hi Pasin, attached is a screenshot from the Time Profiler. The upsert operation for 20k records took ~18 sec when I have only 20k existing records in the DB.

It’s not clear where most of the time is spent in the save method. If you click the > arrow on [CBLCollection saveDocument:concurrencyControl:error:], will it show more information?

Also, are you testing on an actual iPhone or the simulator? If it’s an actual device, can you provide info about the device, including the iOS version?

What is the general size of the documents (and blobs, if any)? If you can provide an example doc, that would be very helpful so we can test with similar docs.
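
When reporting document sizes, a quick way to measure the serialized size of a dictionary is plain Foundation JSON serialization; the sample document here is a made-up example, not one from the app:

```swift
import Foundation

// Rough serialized size (in bytes) of a document dictionary.
// Useful for answering "what is the general size of the documents".
func approximateJSONSize(of doc: [String: Any]) -> Int {
    (try? JSONSerialization.data(withJSONObject: doc))?.count ?? 0
}
```

For a single-key document like `["id": "doc-1"]`, this counts the bytes of the compact JSON `{"id":"doc-1"}`.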

Hi Pasin,

I have created a sample app to run this test. The transaction speed degrades as the record count in the DB grows.
Please get the POC from this link:

Hi @pasin, please check the sample app; here is a time-profile screenshot with more information.

Thanks for the new profile result and the test app. I have created CBL-5241 for an engineer to look at.

Hi @pasin, were you able to reproduce the issue, and do you have any solution for it?

Yes, we were able to reproduce the issue. It’s caused by a missing database index which is created lazily by the replicator. The fix is in progress. I will report here when the fix is merged in and about the version the fix will be included. Thanks for reporting this issue.


Hi @pasin, is there any update on this issue?
This is a big concern for us as it is degrading the app performance. I’ll highly appreciate it if this issue can be prioritized.

Thanks for checking back. The issue has been fixed, and the fix will be available in the next release, 3.1.6. We are still working on the release. I don’t have a specific timeline for it, but it could be in about a month or so.


Hi! I think I ran into this same issue and was wondering if there are any workarounds before the fix is released?

@kptp If you perform one replication, the issue disappears. The issue was because the index was created lazily on the first replication, so performing one will solve it permanently (for that database).
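
For anyone with a reachable Sync Gateway, triggering that one replication could be sketched roughly as below with the CouchbaseLite-Swift 3.x API; the endpoint URL, database name, and credentials are placeholders, and this is a configuration sketch rather than a tested workaround:

```swift
import CouchbaseLiteSwift

// Sketch only: one one-shot replication against any reachable Sync Gateway
// endpoint, which (per the reply above) creates the missing index lazily.
// "wss://example.com:4984/mydb" and `database` are placeholders.
let target = URLEndpoint(url: URL(string: "wss://example.com:4984/mydb")!)
var config = ReplicatorConfiguration(target: target)
config.addCollection(try database.defaultCollection())
config.replicatorType = .pushAndPull
config.continuous = false   // a single one-shot run is enough

let replicator = Replicator(config: config)
replicator.start()
```

After the replication finishes once, subsequent batched saves on that database should no longer pay the missing-index penalty.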


@callum.birks Is there any workaround that doesn’t use the replicator? I’m using Couchbase for local storage only and am not using any Sync Gateway.

In the Android 3.1.0 SDK, I also update documents with inBatch, and it runs slowly too, sometimes even slower: about one item per second! Please fix this in the Android SDK as well.
