I thought I would post this in the forum because I could not understand why the Spark connector uses both a Java and a Scala Map. I have put sample code below; please look at the two Map declarations. The first variable, options, is a Java Map, while the second one, options1, is a Scala Map. Could we not have used the same Map type for both? In my Java code I cannot import both the Scala and the Java Map, so I had to use HashMap for one of them.
import java.util.HashMap;
import scala.collection.immutable.Map;

// Read options go into a Java map (couchbaseReader accepts one here)
HashMap<String, String> options = new HashMap<String, String>();
options.put("bucket", "travel-sample");
Dataset<Row> airlines = couchbaseReader(spark.sqlContext().read()).couchbase(type, options);
airlines.printSchema();

DataFrameWriterFunctions dataFrameWriterFunctions =
    new DataFrameWriterFunctions(airlines.write().mode(SaveMode.Overwrite));

// Print the number of airlines
System.out.println("Number of Airlines: " + airlines.count());

// Write options must be a Scala map, so a scala.collection.immutable.Map is built directly
Map<String, String> options1 = new Map.Map1<String, String>("bucket", "travel-sample-final");
dataFrameWriterFunctions.couchbase(options1);
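One way to avoid maintaining two different map types is to build the Scala immutable Map the writer wants from the Java map used for reading. Below is a minimal sketch; it assumes scala-library is on the classpath (it is for any Spark project) and that the writer accepts a scala.collection.immutable.Map, which the Map.Map1 in the sample suggests. The MapBridge class and toScalaMap helper are names I made up for illustration, not part of the connector.

```java
import java.util.HashMap;

public class MapBridge {

    // Copy a java.util.Map into a scala.collection.immutable.Map by folding
    // over the entries; each updated(...) call returns a new immutable map.
    public static scala.collection.immutable.Map<String, String> toScalaMap(
            java.util.Map<String, String> javaMap) {
        scala.collection.immutable.Map<String, String> result =
            scala.collection.immutable.Map$.MODULE$.empty();
        for (java.util.Map.Entry<String, String> e : javaMap.entrySet()) {
            result = result.updated(e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        // The same Java map used for the reader can now feed the writer
        HashMap<String, String> options = new HashMap<String, String>();
        options.put("bucket", "travel-sample-final");
        scala.collection.immutable.Map<String, String> options1 = toScalaMap(options);
        System.out.println(options1.apply("bucket"));
    }
}
```

With this helper, only one set of options needs to be declared, and the conversion happens just before the write call.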