I have a huge table (147 columns) and I would like to know if it is possible to create the table in SQL Server from my pandas dataframe, so I don't have to write a CREATE TABLE statement for 147 columns.
I am trying to follow what is answered in most of the related questions:
import urllib.parse
import pandas as pd
from sqlalchemy import create_engine

params = urllib.parse.quote_plus("DRIVER={ODBC Driver 17 for SQL Server};SERVER=DESKTOP-LFOSSEF;DATABASE=test;UID=xxxx;PWD=xxx")
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
connection = engine.raw_connection()
df.to_sql("table_name", connection, index=False)
The user and password work because they are what I use to sign in to SQL Server.
When I run this, I get:
Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)")
If I remove the connection = engine.raw_connection() line and use engine directly, I get:
AttributeError: 'Engine' object has no attribute 'cursor'
Any idea what is wrong? Do I need to create an empty table first? I've been troubleshooting for hours and can't find any reason why this isn't working.
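For reference, the pattern the related answers use is to pass the Engine itself (not a raw DBAPI connection) as the second argument, which lets pandas detect the SQL Server dialect and generate the CREATE TABLE from the dataframe's dtypes. A minimal sketch reusing the question's connection string (server, database, and credentials are the question's placeholders, and the small dataframe stands in for the real 147-column one):

import urllib.parse
import pandas as pd
from sqlalchemy import create_engine

params = urllib.parse.quote_plus("DRIVER={ODBC Driver 17 for SQL Server};SERVER=DESKTOP-LFOSSEF;DATABASE=test;UID=xxxx;PWD=xxx")
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})  # stand-in for the real dataframe
df.to_sql("table_name", engine, index=False)  # pandas issues the CREATE TABLE itself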
Comments:
- Passing engine as the second parameter to to_sql() should work. What versions of SQLAlchemy and pandas are you using? What does type(engine) return?
- sqlalchemy.engine.base.Engine, don't know how it got to sqlalchemy though.
- Try pip uninstall sqlalchemy followed by pip install sqlalchemy and see if that helps.
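Since the comment thread hinges on the installed versions, one quick way to check them (nothing here is specific to this setup):

import pandas
import sqlalchemy

print(pandas.__version__)
print(sqlalchemy.__version__)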