
We have recently been running into an error that now occurs every time we try to create a dump of a database running on mysql Ver 8.0.40-0ubuntu0.20.04.1 for Linux on x86_64 (Ubuntu 20.04). The database is used by Matomo. The error always occurs, sometimes for the same table and sometimes for different tables and rows:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `[TABLE]` at row: [ROW-number]

Mysqldump Command:

mysqldump --compression-algorithms=zlib --quick --single-transaction db_name > db_name.sql

We've tried different settings:

[mysqld]
max_allowed_packet = 2048M

wait_timeout = 86400
interactive_timeout = 86400

net_read_timeout = 4800
net_write_timeout = 4800

[mysqldump]
max_allowed_packet = 2048M
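
For reference, the values the running server actually uses can be checked like this (a quick sanity check, assuming shell access and an administrative login on the server):

mysql -e "SHOW VARIABLES WHERE Variable_name IN
  ('max_allowed_packet','wait_timeout','interactive_timeout',
   'net_read_timeout','net_write_timeout');"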

We also tried the suggestions from these sites:

mysqldump-Error-2013-Lost-connection-to-MySQL-server

and

mysql-dump scaler

None of this helped. Does anybody have other solutions or workarounds?

  • (1) Across what type of connection (local, LAN, WAN) are you connecting to the database server? (2) Approximately what is the size of the database you're trying to dump? (3) Has this worked at all? What about with another (smaller) database? Commented Nov 20, 2024 at 14:03
  • Are the tables InnoDB or MyISAM? Do the mysql log files show anything unusual? And, although this is not good for production, does the dump run to completion if you remove --single-transaction? Have you tried adding --skip-lock-tables? Commented Nov 20, 2024 at 14:12
  • What's in your logs? Commented Nov 21, 2024 at 15:57
  • I rolled back your edit, adding "SOLVED" to the title. Whenever you accept a given answer (possibly your own), the issue is marked as resolved. See also unix.stackexchange.com/help/accepted-answer Commented Nov 21, 2024 at 16:19

2 Answers


Thanks for your replies. Yes, it worked before, and the tables are InnoDB. What worked was the following:

Set innodb_force_recovery to 1. The variable is not dynamic, so SET GLOBAL rejects it as read-only; set it in your custom *.cnf under /etc/mysql/ (or as --innodb-force-recovery=1 on the mysqld command line) and restart the server:

[mysqld]
innodb_force_recovery = 1

This helped in recovering from the InnoDB corruption and allowed the dump to run to completion.
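
For example (assuming a systemd-based Ubuntu install where the service is named mysql), restart and confirm the value before starting the dump:

sudo systemctl restart mysql
mysql -e "SHOW VARIABLES LIKE 'innodb_force_recovery';"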

After the dump is complete, reset innodb_force_recovery to 0 (or remove the line again) in the same *.cnf and restart the server once more:

[mysqld]
innodb_force_recovery = 0

If you don't reset it, you could run into other errors later, e.g. not being able to log in.

Or automate it in your dump script (bash etc.) by switching the setting and restarting the server around the mysqldump call; a sketch follows below.
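
A minimal sketch of that idea, assuming a systemd-managed server and a throw-away drop-in config file (the drop-in path and credentials file are placeholders, adjust to your setup):

#!/bin/bash
# Hypothetical paths - adjust to your setup.
RECOVERY_CNF=/etc/mysql/mysql.conf.d/zz-force-recovery.cnf
CREDENTIALS="$HOME/.my-backup.cnf"   # [client] section holding user/password

restore_normal_mode() {
    # Remove the temporary setting and restart, even if the dump failed.
    sudo rm -f "$RECOVERY_CNF"
    sudo systemctl restart mysql
}
trap restore_normal_mode EXIT

# Enable recovery mode for the duration of the dump.
printf '[mysqld]\ninnodb_force_recovery = 1\n' | sudo tee "$RECOVERY_CNF" >/dev/null
sudo systemctl restart mysql

mysqldump --defaults-extra-file="$CREDENTIALS" --quick --single-transaction \
    db_name > db_name.sql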

Other values which might improve performance and dump-related tuning, as advised on other sites:

[mysqld]
innodb_buffer_pool_size = 45G    # roughly 70-80% of total RAM
max_allowed_packet = 1G          # 1GB is the largest value the server accepts
wait_timeout = 28800
interactive_timeout = 28800
net_read_timeout = 1200          # 20 minutes
net_write_timeout = 1200         # 20 minutes
net_buffer_length = 64K

[mysqldump]
max_allowed_packet = 1G          # largest allowed value
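
As a rough way to derive the 70-80% figure on the machine itself (a sketch using 75%; it reads total RAM from /proc/meminfo):

awk '/MemTotal/ { printf "innodb_buffer_pool_size = %dG\n", $2 * 0.75 / 1024 / 1024 }' /proc/meminfo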

It seems like you're encountering a common issue with mysqldump, often caused by large packet sizes when dumping large tables. You have already adjusted some key parameters, which is good.

A few additional things you can try:

Increase max_allowed_packet: You've set it to 2048M; note that the server caps this variable at 1G, so values such as 4096M are truncated to that maximum and won't buy you anything extra.

[mysqld]
max_allowed_packet = 1G

Use --net-buffer-length: Adding this option can help if you're working with large result sets. Try setting it to 16K or higher:

mysqldump --net-buffer-length=16384 --quick --single-transaction --compression-algorithms=zlib db_name > dump.sql

Check for long-running queries: You might want to review your MySQL server for long-running queries or blocking processes that could be affecting the dump. Use SHOW PROCESSLIST; to check active processes.
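
For instance (a sketch; the 60-second threshold is arbitrary), the same information can be filtered for long runners via information_schema:

SELECT id, user, time, state, LEFT(info, 80) AS query_start
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 60
ORDER BY time DESC;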

Try without connection compression: --compression-algorithms=zlib compresses the client/server traffic and might add extra load on large dumps. Try without it to see if that helps; if you need smaller files, compress the output on disk instead (a sketch follows).
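
A sketch of dropping protocol compression and compressing the resulting file instead (gzip as an example):

mysqldump --quick --single-transaction db_name | gzip > db_name.sql.gz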

Optimize the database: It’s also worth checking for performance issues. Running OPTIMIZE TABLE on large tables before the dump could improve performance.
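
A sketch of that: first find the biggest tables, then rebuild them (the table name below is only an example of a typically large Matomo table):

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'db_name'
ORDER BY size_mb DESC
LIMIT 10;

OPTIMIZE TABLE matomo_log_link_visit_action;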

If these steps don't work, consider using a tool like mydumper for more reliable dumping, especially if your database is large and complex.
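
A sketch of a mydumper invocation (adjust the output directory, chunk size, and thread count to your setup):

mydumper --database db_name --outputdir /backup/db_name \
         --rows 500000 --compress --threads 4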
