Subdocument mutation appears to be available at the bucket level in the SDK. Does the Couchbase Spark Connector provide a capability similar to `RDD.saveToCouchbase()` for `SubdocLookupRDD`?
Basically, I would like to look up subdocuments for a set of keys (potentially millions of records) and save the updated subdocuments back.
Is this feasible?
Using the Subdoc API to write to Couchbase from Spark isn’t implemented yet, but it’s mentioned as being on the roadmap for a future release in the documentation:
Note that the Couchbase SDK also supports subdocument mutations which will be added to the Spark Connector in a future release.
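In the meantime, the read half of this is already possible, and the write half can be worked around by dropping down to the Java SDK. Below is a minimal sketch assuming Spark Connector 2.x and Java SDK 2.x; the host (`127.0.0.1`), bucket (`travel-sample`), document IDs, and the `country` path are placeholder assumptions, not values from your setup:

```scala
import com.couchbase.client.java.CouchbaseCluster
import com.couchbase.spark._
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("subdocSketch")
  .set("com.couchbase.bucket.travel-sample", "") // open bucket with empty password
val sc = new SparkContext(conf)

// Document IDs to look up (placeholders).
val keys = Seq("airline_10", "airline_10123")

// Subdocument *reads* are supported today via couchbaseSubdocLookup,
// which fetches only the requested paths rather than whole documents.
val fragments = sc
  .couchbaseSubdocLookup(keys, Seq("country"))
  .collect()

// Subdocument *writes* are not yet supported by the connector, so one
// workaround is to call the Java SDK's mutateIn directly, once per
// partition to amortize the connection cost.
sc.parallelize(keys).foreachPartition { ids =>
  val cluster = CouchbaseCluster.create("127.0.0.1")
  val bucket  = cluster.openBucket("travel-sample")
  ids.foreach { id =>
    // Upsert a single path inside the document without rewriting it.
    bucket.mutateIn(id).upsert("country", "France").execute()
  }
  cluster.disconnect()
}
```

Opening the SDK connection inside `foreachPartition` (rather than per record) keeps the number of cluster connections bounded by the partition count, which matters at the millions-of-records scale you mention.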
Thanks for pointing that out.
Is there any time frame for when it will be released?
It hasn’t been committed yet, but it’s under consideration for a future release.
@daschl may have more information on this item. Although there is not much description attached, I believe it is tracked in this JIRA ticket: