Spark Connector 2.1 in Databricks


I was wondering how, in Scala, to set up my Databricks notebook so that I can access multiple buckets for my DataFrames. A sample workbook I've been following is here:


a) Databricks doesn't allow a config key to have an empty value, which the sample below uses


b) this would limit me to one bucket per cluster start-up; I'd rather do this in code, if possible, and be able to access multiple buckets within the same notebook.

Is this possible?



I think the connector documentation has everything needed to answer your questions. On connect you can open more buckets like:

  .config("com.couchbase.bucket.beer-sample", "") // open the beer-sample bucket with an empty password
  .config("com.couchbase.bucket.foo", "bar")      // open the foo bucket with password "bar"

You can try setting the value of the config to "" in the Databricks cluster config; this might work as well. Also, we always recommend setting passwords on buckets anyway.
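For context, a complete `SparkSession` setup opening two buckets might look like the sketch below. The node address and app name are placeholders, and the `com.couchbase.bucket.<name>` key convention is the connector 2.x style shown above:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: open two Couchbase buckets from code rather than cluster config.
// "10.0.0.1" is a placeholder node address; replace with your cluster's.
val spark = SparkSession.builder()
  .appName("multi-bucket-example") // hypothetical app name
  .config("spark.couchbase.nodes", "10.0.0.1")
  .config("com.couchbase.bucket.beer-sample", "") // beer-sample bucket, empty password
  .config("com.couchbase.bucket.foo", "bar")      // foo bucket, password "bar"
  .getOrCreate()
```

Doing it in the session builder, rather than the Databricks cluster config, also sidesteps the empty-value restriction you mentioned.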

Keep in mind that if you open more than one bucket, then on each operation against Couchbase you need to specify the bucket name, since the connector doesn't know which one you want. See our docs for each method on how to do that.
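As an illustration, with the DataFrame API the target bucket can be passed through the options map. This is a sketch assuming the connector 2.x `spark.read.couchbase(...)` reader and its `bucket` option; the filter field and value are made up:

```scala
import com.couchbase.spark.sql._
import org.apache.spark.sql.sources.EqualTo

// Read from a specific bucket when more than one is open.
// The "bucket" option tells the connector which bucket to use;
// the type filter ("beer") is a hypothetical document type.
val beers = spark.read.couchbase(
  schemaFilter = EqualTo("type", "beer"),
  options = Map("bucket" -> "beer-sample")
)
```

The same `bucket` option (or the equivalent bucket-name parameter on the RDD methods) is what disambiguates each operation once multiple buckets are registered.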
