Connect to Couchbase on pySpark using JDBC driver

I would like to be able to connect to Couchbase through pySpark in a Jupyter Notebook, the way it is already possible for PostgreSQL, for example, using:

.option("url", "jdbc:postgresql:XXXXXXXXXXXX")
.option("dbtable", "XXXXX")
.option("user", "XXX")
.option("password", "XXX")
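For context, those options sit inside a full spark.read call. A minimal sketch of the PostgreSQL case (the jar path, host, database, table name, and credentials below are placeholders, not values from this post):

```python
from pyspark.sql import SparkSession

# Placeholder jar path and connection details -- replace with your own.
spark = SparkSession.builder \
    .config("spark.jars", "./postgresql-42.7.3.jar") \
    .getOrCreate()

# Read a table over JDBC into a Spark DataFrame.
df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql://localhost:5432/mydb") \
    .option("dbtable", "my_table") \
    .option("user", "XXX") \
    .option("password", "XXX") \
    .load()
```

The question is whether the same .format("jdbc") pattern can be pointed at Couchbase instead.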

I have only seen Scala examples of this, but I would like to be able to interact with Couchbase from pySpark.
For example, one goal is to insert a Spark DataFrame that I have in my Jupyter Notebook into my local Couchbase Server.

Thanks!

After creating a SparkSession like the following:

sparkSession = pyspark.sql.SparkSession.builder \
    .config("spark.jars", "./cdata.jdbc.couchbase.jar") \
    .config("spark.sql.caseSensitive", True) \
    .config("spark.couchbase.nodes", "") \
    .config("spark.couchbase.username", "XXX") \
    .config("spark.couchbase.password", "XXX") \
    .config("spark.driver.extraClassPath", "./cdata.jdbc.couchbase.jar") \
    .getOrCreate()
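With that session in place, the DataFrame insert itself can go through the generic JDBC writer. This is only a sketch: the JDBC URL format and driver class name below are assumptions about the CData Couchbase driver, and the bucket name is a placeholder, so check the driver's documentation for the exact connection-string properties:

```python
# Sketch of writing an existing DataFrame `df` through the CData
# Couchbase JDBC driver. The URL format and driver class name are
# assumptions -- verify them against the driver's documentation.
jdbc_url = "jdbc:couchbase:Server=127.0.0.1;User=XXX;Password=XXX"

df.write \
    .format("jdbc") \
    .option("url", jdbc_url) \
    .option("dbtable", "my_bucket") \
    .option("driver", "cdata.jdbc.couchbase.CouchbaseDriver") \
    .mode("append") \
    .save()
```

Since the jar is already on spark.jars and spark.driver.extraClassPath in the session above, no extra classpath setup should be needed for the write.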