
The following example from the Azure team uses the Apache Spark connector for SQL Server to write data to a table.

Question: How can we execute a stored procedure from Azure Databricks when using the Apache Spark connector?

    # Build the JDBC URL for the target SQL Server database
    server_name = "jdbc:sqlserver://{SERVER_ADDR}"
    database_name = "database_name"
    url = server_name + ";" + "databaseName=" + database_name + ";"

    table_name = "table_name"
    username = "username"
    password = "password123!#"  # Please specify password here

    try:
        # Write the DataFrame with the Spark connector, replacing the
        # target table if it already exists
        df.write \
            .format("com.microsoft.sqlserver.jdbc.spark") \
            .mode("overwrite") \
            .option("url", url) \
            .option("dbtable", table_name) \
            .option("user", username) \
            .option("password", password) \
            .save()
    except Exception as error:
        print("Connector write failed", error)