I'm currently dealing with a 4 GB dump.sql file, so I tried to create a database from it using the MySQL console.
These are the commands I've used in the terminal:
mysql -u username -ppassword
mysql> create database test;
mysql> use test;
mysql> source dump.sql
The whole process took about 3 hours to complete. After that I was able to access the created database without any problems.
Specs: 16-core Intel processor, 60 GB RAM, 120 GB SSD.
The thing is, I now have a dump file of 8 GB or more, so I'm looking for a faster way to execute the .sql script; I'm not sure the first method is well optimized.
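One commonly recommended alternative (a sketch, not something I've benchmarked on this dump) is to skip the interactive console entirely and pipe the file into the client in batch mode, wrapping it in session settings that disable per-row checks during the load. The username and database name below are the ones from the commands above; the settings are illustrative:

```shell
# Non-interactive load: avoids the interactive `source` step and lets us
# prepend session settings that skip foreign-key and uniqueness checks
# while the dump runs, which can speed up large InnoDB imports.
{ echo "SET foreign_key_checks=0; SET unique_checks=0; SET autocommit=0;"
  cat dump.sql
  echo "COMMIT;"
} | mysql -u username -p test
```

Note this only helps if the dump itself doesn't already re-enable those checks per table; many mysqldump files do, in which case the gain is smaller.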
I've also tried to do it in Python:
import mysql.connector
conn = mysql.connector.connect(user='root', password='root')
cursor = conn.cursor()
cursor.execute(open('dump.sql').read(), multi=True)
conn.commit()
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-7-b5009cf1d04b> in <module>
----> 1 cursor.execute(open('dump.sql').read(), multi=True)
~/miniconda3/lib/python3.7/site-packages/mysql/connector/cursor_cext.py in execute(self, operation, params, multi)
264 result = self._cnx.cmd_query(stmt, raw=self._raw,
265 buffered=self._buffered,
--> 266 raw_as_string=self._raw_as_string)
267 except MySQLInterfaceError as exc:
268 raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno,
~/miniconda3/lib/python3.7/site-packages/mysql/connector/connection_cext.py in cmd_query(self, query, raw, buffered, raw_as_string)
487 self._cmysql.query(query,
488 raw=raw, buffered=buffered,
--> 489 raw_as_string=raw_as_string)
490 except MySQLInterfaceError as exc:
491 raise errors.get_mysql_exception(exc.errno, msg=exc.msg,
OverflowError: size does not fit in an int
This returned an overflow error for an int. I couldn't find any help online for overcoming this error.
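The error appears to come from reading the entire 4 GB file into one string and handing it to `cursor.execute` as a single query, whose length no longer fits in the C extension's int-sized length field. A workaround (a sketch, with a hypothetical `iter_statements` helper) is to stream the file and execute one statement at a time. The naive splitter below assumes statements end with `;` at end of line and that the dump contains no stored routines with custom `DELIMITER`s:

```python
def iter_statements(file_obj):
    """Yield one SQL statement at a time from an open dump file.

    Naive splitter: treats a line ending in ';' as the end of a
    statement, and skips blank lines and '--' comments. Good enough
    for a plain mysqldump of table data, not for stored procedures.
    """
    buf = []
    for line in file_obj:
        stripped = line.strip()
        if not stripped or stripped.startswith('--'):
            continue  # skip blank lines and SQL comments
        buf.append(line)
        if stripped.endswith(';'):
            yield ''.join(buf)
            buf = []
    if buf:  # trailing statement without a final ';'
        yield ''.join(buf)

# Usage sketch (assumes a running server and mysql-connector-python,
# as in the snippet above):
# import mysql.connector
# conn = mysql.connector.connect(user='root', password='root', database='test')
# cursor = conn.cursor()
# with open('dump.sql') as f:
#     for stmt in iter_statements(f):
#         cursor.execute(stmt)
# conn.commit()
```

This keeps memory usage flat regardless of dump size, since only one statement is in memory at a time.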
COPY (or MySQL's equivalent, LOAD DATA INFILE) can be a lot faster than executing SQL statements. Just wondering if you need to start from the file you're given, or if you have the option to change its format.
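For MySQL, that bulk path looks like the fragment below. This is illustrative only: the table, CSV file, and delimiters are hypothetical, and it assumes you can export each table's data as delimited text instead of INSERT statements:

```sql
-- Bulk-load a CSV export of one table (MySQL's analogue of Postgres COPY).
-- LOCAL reads the file from the client machine; requires local_infile
-- to be enabled on both client and server.
LOAD DATA LOCAL INFILE '/tmp/users.csv'
INTO TABLE users
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
```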