Mysqldump remains one of the most popular backup solutions. If at this stage of the project you are not yet using more advanced backup tools, it is worth running mysqldump with the correct parameters.
The most critical impact mysqldump has on production is table locking:
root@webminer-manager-1:~# mysql -e "show processlist"
+-----+------+-----------------+----+---------+------+---------------------------------+-------------------------------------+
| Id  | User | Host            | db | Command | Time | State                           | Info                                |
+-----+------+-----------------+----+---------+------+---------------------------------+-------------------------------------+
| 416 | user | localhost:52102 | db | Query   |  632 | Waiting for table metadata lock | INSERT INTO products SET url = '... |
| 417 | user | localhost:52104 | db | Query   |  633 | Waiting for table metadata lock | INSERT INTO products SET url = '... |
| 418 | user | localhost:52106 | db | Query   |  633 | Waiting for table metadata lock | INSERT INTO products SET url = '... |
...
# table locking during dump execution
In this case, any data-modifying queries will wait until the dump completes, which means your application will effectively stop working for users. You can disable table locking using the following options:
-l, --lock-tables   Lock all tables for read.
                    (Defaults to on; use --skip-lock-tables to disable.)
--single-transaction
                    Creates a consistent snapshot by dumping all tables in a
                    single transaction. Works ONLY for tables stored in
                    storage engines which support multiversioning (currently
                    only InnoDB does); the dump is NOT guaranteed to be
                    consistent for other storage engines. While a
                    --single-transaction dump is in process, to ensure a
                    valid dump file (correct table contents and binary log
                    position), no other connection should use the following
                    statements: ALTER TABLE, DROP TABLE, RENAME TABLE,
                    TRUNCATE TABLE, as consistent snapshot is not isolated
                    from them. Option automatically turns off --lock-tables.
# disabling locks during the dump
The second option automatically turns off the first for engines that support multiversioning (for example, InnoDB). But if your database mixes different engines, you should pass both options explicitly, as in the final command below.
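To decide whether --single-transaction alone is enough, you can check which storage engines your tables actually use. A minimal sketch, assuming the database is called db as in the examples above:

mysql -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'db';"
# any table listed with an engine other than InnoDB is not covered by the consistent snapshot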
In addition, it is useful to enable the --insert-ignore option to avoid errors during a later restore (for example, if keys have changed).
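With this flag mysqldump writes INSERT IGNORE statements instead of plain INSERTs, so rows that collide with existing keys are silently skipped on restore. An illustrative line from such a dump (the column values here are made up):

INSERT IGNORE INTO `products` VALUES (1,'https://example.com/item-1', ...);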
And of course, you should use streaming compression with gzip to save disk space. The -v option is also useful when dumping multiple tables, since it prints progress information as the dump runs.
The final dump command to execute on production might look like this:
mysqldump -v --insert-ignore --skip-lock-tables --single-transaction=TRUE db products | gzip > /root/products.sql.gz

Such a dump will not lock tables and minimizes the impact on the main application:

mysql> show processlist;
+-------+------+-----------------+----+---------+------+-------------------+---------------------------------------------------+
| Id    | User | Host            | db | Command | Time | State             | Info                                              |
+-------+------+-----------------+----+---------+------+-------------------+---------------------------------------------------+
| 15655 | user | localhost:39430 | db | Query   |    0 | query end         | INSERT INTO products SET url = '...               |
| 15656 | user | localhost:39450 | db | Sleep   |  614 |                   | NULL                                              |
| 15661 | root | localhost       | db | Query   |  589 | Sending to client | SELECT /*!40001 SQL_NO_CACHE */ * FROM `products` |
| 15682 | user | localhost:39504 | db | Sleep   |    0 |                   | NULL                                              |
...
# lack of locking during the dump
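For completeness, the reverse operation is just as simple. A minimal sketch of restoring such a dump, assuming the same file path and database name:

gunzip -c /root/products.sql.gz | mysql db
# streams the compressed dump straight into the db database without unpacking it to disk first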
TL;DR
Mysqldump can be used on production, but you should disable table locking. If the database size exceeds several tens of GB, plan to use hot backups.