【MySQL】One SQL Statement That Blew Up the Disk and Crashed MySQL
I received a disk alert for a MySQL instance. The monitoring graph showed disk usage climbing extremely fast: more than 170 GB in a little over two hours.
The binlog directory was not large, but datadir was already over 180 GB, and ibtmp1 was huge and still growing.
I jumped on and checked the processlist. There were not many threads, but one SELECT CASE ... WHEN ... thread had been sitting in the "Sending data" state for several thousand seconds.
I was a bit stunned and did not kill that thread right away. By the next show processlist... the server had apparently already gone down:
    mysql> show processlist;
    ERROR 2006 (HY000): MySQL server has gone away
    No connection. Trying to reconnect...
    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '$datadir/mysqld.sock' (2)
    ERROR: Can't connect to the server
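In hindsight, the right reflex would have been to spot and kill that runaway thread before the disk filled. A minimal sketch of that check, not what was actually run at the time (the 600-second cutoff is arbitrary):

    -- Long-running statements; TIME is in seconds.
    SELECT id, user, time, state, LEFT(info, 100) AS query_head
    FROM information_schema.PROCESSLIST
    WHERE command = 'Query' AND time > 600
    ORDER BY time DESC;

    -- Then, for a clearly runaway report query:
    -- KILL <id>;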
Fine. Since it had already crashed, time to find the cause.
A look at the disk that datadir is mounted on showed the space had already been freed, so the instance had indeed been restarted:
    ext4    197G    4.6G    191G    3%    $datadir
A check of the mysqld process showed it had already been brought back up by mysqld_safe.
Next, the error log:
The first group of messages (highlighted in red in the original post) is about the mysqld crash: quite clearly, the disk that datadir is mounted on filled up.
There was no space left to write the binlog or the ibtmp1 temporary tablespace file (new in 5.7).
The ibtmp1 file ultimately reached 201876045824 bytes, close to 190 GB, while the whole mounted disk is just under 200 GB.
The second group (in blue in the original) is about the SQL thread: after mysqld came back up, replication's SQL thread ran into trouble as well.
That gets fixed later.
    [ERROR] InnoDB: posix_fallocate(): Failed to preallocate data for file $datadir/ibtmp1, desired size 67108864 bytes. Operating system error number 28. Check that the disk is not full or a disk quota exceeded. Make sure the file system supports this function. Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
    [ERROR] Disk is full writing '$datadir/mysql-bin.000004' (Errcode: 15870576 - No space left on device). Waiting for someone to free space...
    [ERROR] Retry in 60 secs. Message reprinted in 600 secs
    [Warning] InnoDB: 1048576 bytes should have been written. Only 647168 bytes written. Retrying for the remaining bytes.
    [Warning] InnoDB: Retry attempts for writing partial data failed.
    [ERROR] InnoDB: Write to file $datadir/ibtmp1 failed at offset 201911697408, 1048576 bytes should have been written, only 647168 were written. Operating system error number 28. Check that your OS and file system support files of this size. Check also that the disk is not full or a disk quota exceeded.
    [ERROR] InnoDB: Error number 28 means 'No space left on device'
    [Note] InnoDB: Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
    [Warning] InnoDB: Error while writing 67108864 zeroes to $datadir/ibtmp1 starting at offset 201876045824
    [ERROR] $basedir/bin/mysqld: The table '$tmpdir/#sql_37c5_0' is full
    [ERROR] InnoDB: posix_fallocate(): Failed to preallocate data for file ./thread_quartz/QRTZ_FIRED_TRIGGERS.ibd, desired size 32768 bytes. Operating system error number 28. Check that the disk is not full or a disk quota exceeded. Make sure the file system supports this function. Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
    [Warning] InnoDB: Retry attempts for writing partial data failed.
    [Warning] InnoDB: Error while writing 32768 zeroes to ./thread_quartz/QRTZ_FIRED_TRIGGERS.ibd starting at offset 44236
    2017-07-06T11:49:21.893377Z mysqld_safe Number of processes running now: 0
    mysqld_safe mysqld restarted
    ……………………
    [Note] InnoDB: Last MySQL binlog file position 0 690908428, file name mysql-bin.000004
    [Note] InnoDB: Starting in background the rollback of uncommitted transactions
    [Note] InnoDB: Rollback of non-prepared transactions completed
    [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
    [Note] InnoDB: Creating shared tablespace for temporary tables
    [Note] InnoDB: Setting file '$datadir/ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
    [Note] InnoDB: File '$datadir/ibtmp1' size is now 12 MB.
    [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
    [Note] InnoDB: 32 non-redo rollback segment(s) are active.
    [Note] InnoDB: Waiting for purge to start
    [Note] InnoDB: 5.7.12 started; log sequence number 4828513952
    [Note] InnoDB: page_cleaner: 1000ms intended loop took 7748ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
    [Note] InnoDB: Loading buffer pool(s) from $datadir/ib_buffer_pool
    [Note] Plugin 'FEDERATED' is disabled.
    [Note] InnoDB: Buffer pool(s) load completed at 170706 19:49:30
    [Note] Recovering after a crash using $basedir/mysql-bin
    [ERROR] Error in Log_event::read_log_event(): 'read error', data_len: 579, event_type: 2
    [Note] Starting crash recovery...
    [Note] InnoDB: Starting recovery for XA transactions...
    [Note] InnoDB: Transaction 6729603 in prepared state after recovery
    [Note] InnoDB: Transaction contains changes to 1 rows
    [Note] InnoDB: 1 transactions in prepared state after recovery
    [Note] Found 1 prepared transaction(s) in InnoDB
    [Note] Crash recovery finished.
    [Note] Crashed binlog file $basedir/mysql-bin.000004 size is 690909184, but recovered up to 690908428. Binlog trimmed to 690908428 bytes.
    [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
    ……………………
    [ERROR] Error in Log_event::read_log_event(): 'read error', data_len: 835, event_type: 2
    [Warning] Error reading GTIDs from relaylog: -1
    [Note] Slave I/O thread: Start asynchronous replication to master '*****' in log 'mysql-bin.000014' at position 286486095
    [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
    [Note] Slave I/O thread for channel '': connected to master '*****',replication started in log 'mysql-bin.000014' at position 286486095
    [Warning] Slave SQL for channel '': If a crash happens this configuration does not guarantee that the relay log info will be consistent, Error_code: 0
    [Note] Slave SQL thread for channel '' initialized, starting replication in log 'mysql-bin.000014' at position 286485153, relay log '$datadir/mysql-relay.000013' position: 286485326
    [ERROR] Error in Log_event::read_log_event(): 'read error', data_len: 835, event_type: 2
    [ERROR] Error reading relay log event for channel '': slave SQL thread aborted because of I/O error
    [ERROR] Slave SQL for channel '': Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: 1594
    [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000014' position 286485153
    [Note] Event Scheduler: Loaded 0 events
    [Note] $basedir/mysqld: ready for connections.
    Version: '5.7.12'  socket: '$datadir/mysqld.sock'  port: 3306  Source distribution
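Had the server still been reachable, the growth of ibtmp1 could also have been watched directly from SQL. A sketch using 5.7's INFORMATION_SCHEMA.FILES view (standard in 5.7, but not something captured during this incident):

    SELECT FILE_NAME,
           TABLESPACE_NAME,
           ROUND(TOTAL_EXTENTS * EXTENT_SIZE / 1024 / 1024 / 1024, 1) AS total_size_gb
    FROM INFORMATION_SCHEMA.FILES
    WHERE TABLESPACE_NAME = 'innodb_temporary';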
Because the server had already crashed, and I never managed to capture the running SQL in full, all I had was the truncated fragment shown by show processlist.
I contacted the application team. They said this machine is a slave with no read traffic from the application, though report-related SELECTs are occasionally run on it by hand.
So, the slow log for the two hours before the crash:
The same few statements kept appearing:
    ……………………
    00:00:11.392881 select * from tb ……………………
    00:00:04.779748 select (case when ……………………
    00:00:04.779748 select ( case when t2.col1 ……………………
    00:00:03.328248 select ( case when t2.col1 ……………………
    00:00:04.276773 select count(t1.id) from tb1
    00:00:05.039027 select (case when t2.col ……………………
    00:00:10.263063 select (case when t2.col ……………………
    00:00:03.131713 select t2.* from tb1 t1 ……………………
    00:00:15.909456 select t2.* from tb1 t1 ……………………
    00:00:14.367047 select * from tb ……………………
    ……………………
None of these exceeded 20 seconds, but remember: one second before the disk was blown up, there was a statement that had been running for several thousand seconds.
And these CASE ... WHEN queries carry nothing but an ORDER BY.
So I took one of them and looked at its execution plan:
    +--------+---------------+---------+---------+---------------------+--------+----------+----------------------------------------------+
    | type   | possible_keys | key     | key_len | ref                 | rows   | filtered | Extra                                        |
    +--------+---------------+---------+---------+---------------------+--------+----------+----------------------------------------------+
    | ALL    | NULL          | NULL    | NULL    | NULL                | 472765 |    20.99 | Using where; Using temporary; Using filesort |
    | eq_ref | PRIMARY       | PRIMARY | 8       | $tb1.col1           |      1 |    10.00 | Using where                                  |
    +--------+---------------+---------+---------+---------------------+--------+----------+----------------------------------------------+
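"Using temporary; Using filesort" only says that a temporary table and a sort are needed; whether they spill to disk shows up in the server counters. A quick way to watch for that at the server level (standard status counters, not output captured during this incident):

    SHOW GLOBAL STATUS LIKE 'Created_tmp%';
    -- Created_tmp_disk_tables rising much faster than Created_tmp_tables means
    -- queries are overflowing tmp_table_size / max_heap_table_size; on 5.7 those
    -- on-disk InnoDB internal temporary tables land in ibtmp1.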
It does not look great, but it hardly justifies 190 GB of temporary tables.
ibtmp1 does grow cumulatively over time, but the monitoring showed the disk usage really did pile up within that hour or so.
Also, this slave carries no application workload; the customer side apparently connects to it by hand to run some query work.
Knowing that, I went back through the slow log and found that the CASE WHEN statements were not all exactly the same...
which probably means the customer was still repeatedly debugging this SQL,
and the CASE WHEN version that ran for several thousand seconds may have been written terribly.
Unfortunately, at the instant mysqld crashed that statement still had not finished, so it was never written to the slow log.
That exact statement will probably never be seen again.
So just how bad a SQL statement, or how big a table, does it take to use this much disk-based temporary table and filesort space??
A quick look at the schema and tables (sized up with the query sketched below):
① Table 1: ~1,000,000 rows; Table 2: ~400,000 rows; no BLOB/TEXT columns.
② After the restart, datadir held only 3-4 GB of data.
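For reference, the two tables can be sized up roughly like this ('test', 'tb1' and 'tb2' stand for the masked schema and table names used elsewhere in this post, and TABLE_ROWS is only an estimate for InnoDB):

    SELECT TABLE_NAME,
           TABLE_ROWS,
           ROUND(DATA_LENGTH  / 1024 / 1024) AS data_mb,
           ROUND(INDEX_LENGTH / 1024 / 1024) AS index_mb
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'test'
      AND TABLE_NAME IN ('tb1', 'tb2');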
Which led me to a bold guess:
during their debugging, the customer probably produced a statement along these lines.
Cross-checking the statements already captured in the slow log, then masking and formatting them, I reconstructed it as follows:
    SELECT ( CASE
                 WHEN t2.col='$info_000' THEN '$a'
                 WHEN t2.col='$info_001' THEN '$b'
                 WHEN t2.col='$info_002' THEN '$c'
                 WHEN t2.col='$info_003' THEN '$d'
                 WHEN t2.col='$info_004' THEN '$e'
                 WHEN t2.col='$info_005' THEN '$f'
                 WHEN t2.col='$info_006' THEN '$g'
                 ELSE t2.col
             END ) AS 來源XX,
           ( CASE
                 WHEN t2.clo2='$info_000' THEN '$a'
                 WHEN t2.clo2='$info_001' THEN '$b'
                 WHEN t2.clo2='$info_002' THEN '$c'
                 WHEN t2.clo2='$info_003' THEN '$d'
                 WHEN t2.clo2='$info_004' THEN '$e'
                 WHEN t2.clo2='$info_005' THEN '$f'
                 WHEN t2.clo2='$info_006' THEN '$g'
                 ELSE t2.col
             END ) AS 目標XX,
           t2.col4 AS XX時間,
           t2.col5 AS 金額
    FROM $tb1 t1 JOIN $tb2 t2
    WHERE t1.col3 = 4 AND (t2.col LIKE '$info%' OR t2.clo2 LIKE '$info%')
    ORDER BY t1.col4 DESC;
Yes: the two tables are JOINed with no join condition at all,
producing an enormous Cartesian product.
Doing the arithmetic:
Columns: 13 + 19 = 32
Rows: 1,088,181 × 440,650 = 479,506,957,650 rows
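The row figure is just the product of the two row counts, which can be sanity-checked without actually running the join ('tb1'/'tb2' again stand for the masked tables):

    SELECT (SELECT COUNT(*) FROM tb1) * (SELECT COUNT(*) FROM tb2) AS cartesian_rows;
    -- 1,088,181 * 440,650 = 479,506,957,650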
Then a test.
I dumped both tables to a SQL file and checked the size: 330 MB, nothing big.
I imported it into my own test environment and ran the Cartesian-product SQL above.
After some waiting:
it had been running for more than half an hour and was still going.
You may not believe it, but here is the size of ibtmp1 at that point:
    # du -sh *
    4.0K    auto.cnf
    4.0K    ib_buffer_pool
    2.9G    ibdata1
    48M     ib_logfile0
    48M     ib_logfile1
    54G     ibtmp1
    12M     mysql
    1.1M    performance_schema
    676K    sys
    633M    test
    47M     test_2
That pretty much explains how two tables that are not large at all can, for certain queries, produce a temporary table this size.
While waiting on the experiment, I also fixed the replication error.
The replication error at the time:
    ……
    Slave_IO_Running: Yes
    Slave_SQL_Running: No
    ……
    Seconds_Behind_Master: NULL
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 1594
    Last_SQL_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
    ……
Combined with the error log pasted earlier, there is more to go on; the inference is that the relay log was corrupted. That makes it straightforward to fix:
    STOP SLAVE;

    -- master_log_file and master_log_pos are the coordinates recorded at the time of the error.
    CHANGE MASTER TO MASTER_LOG_FILE='xxx', MASTER_LOG_POS=xxx;

    START SLAVE;
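For completeness, the coordinates to plug in come from the slave's own bookkeeping; a quick check before and after the fix (assuming position-based, non-GTID replication, which is what the error log suggests):

    SHOW SLAVE STATUS\G
    -- Before: Relay_Master_Log_File / Exec_Master_Log_Pos give the master binlog
    --         coordinates of the last event the SQL thread actually applied.
    -- After:  Slave_IO_Running and Slave_SQL_Running should both read Yes, and
    --         Seconds_Behind_Master should drain back toward 0.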
I will not belabor that any further here.
By this point the datadir in my test environment had filled up too, but unlike the production instance,
filling the disk did not crash mysqld immediately; it just logged:
    [ERROR] InnoDB: Write to file ./ibtmp1 failed at offset 83579895808, 1048576 bytes should have been written, only 958464 were written. Operating system error number 28. Check that your OS and file system support files of this size. Check also that the disk is not full or a disk quota exceeded.
    [ERROR] InnoDB: Error number 28 means 'No space left on device'
    [Note] InnoDB: Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
    [Warning] InnoDB: Error while writing 67108864 zeroes to ./ibtmp1 starting at offset 83563118592
    [ERROR] /data/mysql-base/mysql57/bin/mysqld: The table '#sql_b26_0' is full
    [ERROR] Disk is full writing '/data/mysql-data/mysql57-3357/binlog/mysql-bin.000015' (Errcode: 15868064 - No space left on device). Waiting for someone to free space...
    [ERROR] Retry in 60 secs. Message reprinted in 600 secs
This is in fact the normal behaviour: with the disk full, MySQL re-checks once a minute and writes the message to the error log every 10 minutes.
Since mysqld had not been killed off, what would happen if I ran that dreadful SQL once more?...
Sure enough, the error log duly printed the kill message...
and the mysqld process was gone. At that moment:
    [ERROR] /data/mysql-base/mysql57/bin/mysqld: Binary logging not possible. Message: An error occurred during flush stage of the commit. 'binlog_error_action' is set to 'ABORT_SERVER'. Hence aborting the server.
    14:42:48 UTC - mysqld got signal 6 ;
    This could be because you hit a bug. It is also possible that this binary
    or one of the libraries it was linked against is corrupt, improperly built,
    or misconfigured. This error can also be caused by malfunctioning hardware.
    Attempting to collect some information that could help diagnose the problem.
    As this is a crash and something is definitely wrong, the informati
Amusingly, even the word "information" did not get written out in full; apparently there really was no room left for a single extra character.
Trying to append a few characters to the error log with vim and save it failed with an error as well.
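Incidentally, the reason the whole server aborted rather than just the failing statement is visible in the log above: binlog_error_action. A quick way to check how a server is configured (5.7 defaults to ABORT_SERVER):

    SHOW VARIABLES LIKE 'binlog_error_action';
    -- ABORT_SERVER : abort mysqld when the binlog cannot be written (what happened above)
    -- IGNORE_ERROR : keep running with binary logging disabled (dangerous for replication)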
To sum up, for reference:
〇 SQL run against production should be reviewed; report-style and other OLAP work also deserves a more formal process.
〇 Consider capping the size of ibtmp1, i.e. setting a maximum in innodb_temp_data_file_path (see the sketch after this list).
〇 A periodic mysqld restart may be needed to shrink the temporary tablespace back down.
〇 Monitor and respond quickly. This time the response took too long: by the time the threshold fired and I got onto the machine, it was already too late, and I watched mysqld crash right in front of me.
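On the second point, ibtmp1 cannot be capped at runtime; the limit has to go into my.cnf before startup. A possible example setting and the corresponding check (the 20G cap here is just an illustration, not a recommendation for this instance):

    -- In my.cnf, before server start, e.g.:
    --   [mysqld]
    --   innodb_temp_data_file_path = ibtmp1:12M:autoextend:max:20G
    SHOW VARIABLES LIKE 'innodb_temp_data_file_path';
    -- Note: even with a cap in place, shrinking an already-grown ibtmp1 still requires a restart.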
Source: ITPUB blog, http://blog.itpub.net/29773961/viewspace-2142197/ (please credit the source when reposting).