Regarding the Map types used in the Spark Connector

I'm posting this in the forum because I could not understand why the Spark connector uses both the Java and the Scala Map. I have put sample code below; please look at the two map declarations. The first variable, options, is a Java Map, while the second one, options1, is a Scala Map. Could we not have used the same Map for both? In my Java code I cannot import both the Scala and the Java Map, hence I had to use HashMap.

HashMap<String, String> options = new HashMap<String, String>();
options.put("bucket", "travel-sample");
Dataset airlines = couchbaseReader(spark.sqlContext().read()).couchbase(type, options);

airlines.printSchema();
DataFrameWriterFunctions dataFrameWriterFunctions = new DataFrameWriterFunctions(
        airlines.write().mode(SaveMode.Overwrite));
// Print the number of airlines
System.out.println("Number of Airlines: " + airlines.count());
Map<String, String> options1 = new Map.Map1<String, String>("bucket", "travel-sample-final");
dataFrameWriterFunctions.couchbase(options1);
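
The import clash itself can be worked around by importing only one Map by its short name and spelling the other out with its fully qualified name. Below is a plain-Java sketch against the Scala runtime only (the class and variable names are made up for illustration, not connector code), though it still leaves the question of why both types are needed in the first place.

import java.util.HashMap;
import java.util.Map; // java.util.Map gets the short name "Map"

public class TwoMapsExample {
    public static void main(String[] args) {
        // Java side: an ordinary java.util.Map, as a Java-flavoured API expects.
        Map<String, String> javaOptions = new HashMap<String, String>();
        javaOptions.put("bucket", "travel-sample");

        // Scala side: the immutable Scala Map is written out in full, so no
        // second "Map" import is needed in the same file.
        scala.collection.immutable.Map<String, String> scalaOptions =
                new scala.collection.immutable.Map.Map1<String, String>("bucket", "travel-sample-final");

        System.out.println(javaOptions + " / " + scalaOptions);
    }
}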

Hello @graham.pople / @daschl, can you please help answer this?

Hi @rajib761
Only the Java API for the Spark Connector (anything under the com.couchbase.spark.japi package) will use Java types. If you stick to the Scala API, you can use Scala types exclusively.
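
To stay entirely on the Java side, the write can be wrapped the same way the read already is, so a java.util.Map is used for both calls. The sketch below assumes a CouchbaseDataFrameWriter.couchbaseWriter helper in the japi package, mirrored from the reader-side pattern; please check the exact helper and overloads against the connector version you are running.

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

// Reader-side helper as in the snippet above; the writer-side helper is an assumption.
import static com.couchbase.spark.japi.CouchbaseDataFrameReader.couchbaseReader;
import static com.couchbase.spark.japi.CouchbaseDataFrameWriter.couchbaseWriter;

public class JavaOnlyExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("java-only-maps")
                .getOrCreate();

        // One java.util.Map for reading...
        Map<String, String> readOptions = new HashMap<String, String>();
        readOptions.put("bucket", "travel-sample");
        Dataset<Row> airlines = couchbaseReader(spark.sqlContext().read()).couchbase(readOptions);

        // ...and another java.util.Map for writing; no Scala Map needed anywhere.
        Map<String, String> writeOptions = new HashMap<String, String>();
        writeOptions.put("bucket", "travel-sample-final");
        couchbaseWriter(airlines.write().mode(SaveMode.Overwrite)).couchbase(writeOptions);
    }
}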