There are some requirements you need to handle when doing this kind of parallelism, to make sure operations are rolled back cleanly if anything fails. Please see Using Couchbase Transactions | Couchbase Docs for the details.
There are a few things that can impact performance:
- Key-value latency. Your code wants to be running as close to the cluster as possible, IMO. Though you can stage the inserts in parallel, as you’re doing there, they will be committed serially. (We may consider adding a parallel-commit feature in the future.) If your code has a 150 millisecond round-trip latency to the cluster, that’s 10,000 docs * 150 ms = 1,500 seconds (25 minutes) minimum to commit them. The durability level you’ve specified, and the performance of your cluster hardware, will also impact key-value latency.
- Any write-write conflict on the same document with another transaction, and some transient failures (e.g. if the server reports that it is currently overloaded, things along those lines), will cause the transaction to roll back everything so far and retry. This could be happening in your case; you’d need to check either TransactionResult.logs or TransactionResult.attempts to know.
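To make the serial-commit cost concrete, here’s a back-of-envelope model (a sketch only; the function name and the assumption of one round trip per committed document are mine, and real timings will also include durability and server-side work):

```python
def commit_time_seconds(num_docs: int, rtt_ms: float, attempts: int = 1) -> float:
    """Rough lower bound on serial commit time.

    Assumes each staged document costs at least one network round trip
    to commit, and that a retried transaction repeats that work once
    per attempt. Illustrative arithmetic, not a measurement.
    """
    return num_docs * (rtt_ms / 1000.0) * attempts

# 10,000 docs at a 150 ms round trip: 1,500 seconds (25 minutes) minimum.
print(commit_time_seconds(10_000, 150))  # 1500.0

# A single full retry doubles the bound.
print(commit_time_seconds(10_000, 150, attempts=2))  # 3000.0
```

This is why reducing the round-trip latency (running close to the cluster) moves the needle far more than anything else here: the per-document cost is multiplied by the document count and again by any retries.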
If you do want to drill into performance, the TransactionResult.logs are a good starting point. They’ll tell you how long each underlying key-value operation is taking, and whether the transaction is retrying. Hopefully they’re somewhat human readable. You might want to read Using Couchbase Transactions | Couchbase Docs first (this section was added today) to get a grounding in what’s going on under the hood.
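As a sketch of what that drill-down might look like once you’ve pulled timings out of the logs: the helper and the (message, duration_ms) pairs below are hypothetical, not the real log format — the actual TransactionResult.logs entries are formatted strings you’d parse into something similar yourself.

```python
def summarize(entries, slow_ms=100):
    """Aggregate hypothetical (message, duration_ms) pairs extracted
    from transaction logs: flag slow key-value ops and count retries."""
    slow_ops = [(msg, ms) for msg, ms in entries if ms > slow_ms]
    retries = sum(1 for msg, _ in entries if "retry" in msg.lower())
    slowest = max(entries, key=lambda e: e[1])
    return {"slowest": slowest, "slow_ops": slow_ops, "retries": retries}

# Illustrative entries only -- invented for this example.
example = [
    ("staged insert doc-1", 12),
    ("staged insert doc-2", 180),
    ("transient error, retrying attempt", 0),
    ("commit doc-1", 150),
]
print(summarize(example))
```

The point is just that two signals matter: individual key-value operations that are slow (latency/durability/hardware), and the transaction retrying (conflicts or transient failures) — the logs let you separate the two.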
But basically you are doing the right thing in that code for performance. (It does need some tweaks as mentioned in the docs, but those are for rollback safety, not performance.)