MySQL: A Record of One Bug Discovery Process
My knowledge is limited; please forgive any mistakes.
This problem came from a friend, @DBA-老莊. They run Percona XtraDB Cluster (PXC) in the following environment:
MySQL: 5.7.18-15
wsrep: 29.20
OS: Red Hat Enterprise Linux Server release 6.5
I am actually not very familiar with PXC, but by analyzing the pstack output I still located the problem and filed a bug, which Percona confirmed, although I was not the first to discover the issue.
1. Problem description
The database was completely hung: new connections could not be made, existing connections could not be killed, SHOW ENGINE INNODB STATUS did not return, and so on. Through the connections that were still open, SHOW PROCESSLIST showed a large number of sessions like the following:
At the operating-system level there was essentially no visible load:
For this kind of problem, the only way to pin down what is happening is to analyze a pstack dump.
2. Why so many sessions in "Opening tables"
In the pstack output I found many sessions blocked in trx_allocate_for_mysql(), like this:
Thread 54 (Thread 0x7f9085cf6700 (LWP 17448)):
#0  0x0000003715e0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00000000011059cb in os_event::wait_low(long) ()
#2  0x00000000011b0449 in sync_array_wait_event(sync_array_t*, sync_cell_t*&) ()
#3  0x000000000108a8c4 in TTASEventMutex<GenericPolicy>::wait(char const*, unsigned int, unsigned int) ()
#4  0x000000000108aa3b in PolicyMutex<TTASEventMutex<GenericPolicy> >::enter(unsigned int, unsigned int, char const*, unsigned int) ()
#5  0x00000000011e5974 in trx_allocate_for_mysql() ()   # waiting here to allocate a trx
#6  0x000000000106fa9f in innobase_trx_allocate(THD*) ()
#7  0x0000000001076d28 in ha_innobase::extra(ha_extra_function) ()
#8  0x0000000000ce4229 in open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int, Prelocking_strategy*) ()
#9  0x0000000000ce5912 in open_tables_for_query(THD*, TABLE_LIST*, unsigned int) ()
Put simply, any operation on an InnoDB table, even a SELECT, needs a transaction allocated for it; if there is no reusable transaction in the pool, this function is called to allocate one. Here is the call stack:
#0  trx_allocate_for_mysql () at /mysql/mysql-5.7.17/storage/innobase/trx/trx0trx.cc:538
#1  0x0000000001913d62 in innobase_trx_allocate (thd=0x7fffc8000d30) at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:2580
#2  0x0000000001913e04 in check_trx_exists (thd=0x7fffc8000d30) at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:2605
#3  0x0000000001914482 in ha_innobase::update_thd (this=0x7fffc8009990, thd=0x7fffc8000d30) at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:2825
#4  0x00000000019296b4 in ha_innobase::info_low (this=0x7fffc8009990, flag=26, is_analyze=false) at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:13805
#5  0x000000000192a385 in ha_innobase::info (this=0x7fffc8009990, flag=26) at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:14211
#6  0x000000000191ad83 in ha_innobase::open (this=0x7fffc8009990, name=0x7fffcc1b4540 "./test/test1", mode=2, test_if_locked=2) at /mysql/mysql-5.7.17/storage/innobase/handler/ha_innodb.cc:6130
#7  0x0000000000f48d09 in handler::ha_open (this=0x7fffc8009990, table_arg=0x7fffc8008fe0, name=0x7fffcc1b4540 "./test/test1", mode=2, test_if_locked=2) at /mysql/mysql-5.7.17/sql/handler.cc:2759
#8  0x0000000001674fd1 in open_table_from_share (thd=0x7fffc8000d30, share=0x7fffcc1b4170, alias=0x7fffc80051d8 "test1", db_stat=39, prgflag=8, ha_open_flags=0, outparam=0x7fffc8008fe0, is_create_table=false) at /mysql/mysql-5.7.17/sql/table.cc:3336
#9  0x00000000014f9577 in open_table (thd=0x7fffc8000d30, table_list=0x7fffc80051e0, ot_ctx=0x7ffff149fb80) at /mysql/mysql-5.7.17/sql/sql_base.cc:3522
#10 0x00000000014fbf7f in open_and_process_table (thd=0x7fffc8000d30, lex=0x7fffc8003028, tables=0x7fffc80051e0, counter=0x7fffc80030e8, flags=0, prelocking_strategy=0x7ffff149fcb0, has_prelocking_list=false, ot_ctx=0x7ffff149fb80) at /mysql/mysql-5.7.17/sql/sql_base.cc:5108
#11 0x00000000014fd06a in open_tables (thd=0x7fffc8000d30, start=0x7ffff149fc70, counter=0x7fffc80030e8, flags=0, prelocking_strategy=0x7ffff149fcb0) at /mysql/mysql-5.7.17/sql/sql_base.cc:5719
This function contains the following code:
trx_sys_mutex_enter();   /* acquire trx_sys->mutex */
ut_d(trx->in_mysql_trx_list = TRUE);
UT_LIST_ADD_FIRST(trx_sys->mysql_trx_list, trx);   /* add the transaction to the list in the global trx_sys structure */
trx_sys_mutex_exit();
trx_sys is a global data structure; every transaction hangs off it in linked lists, so modifying those lists requires a mutex that protects the global structure against concurrent modification by multiple threads. The list insertion here is exactly such an update. From the stack frames, however, the thread is inside open_table(). That function mainly builds the table cache entry and instantiates the table, i.e. it establishes the mapping between the MySQL-layer and InnoDB-layer files; it also acquires the corresponding MDL lock and opens the .frm file.
For testing I simply added a sleep(10) to the code, a ten-second pause, and observed the result below. This confirms that the "Opening tables" state here really was produced by sessions waiting inside trx_allocate_for_mysql:
So the state shown by SHOW PROCESSLIST is only a marker: it names the stretch of code currently executing, from one stage-enter point to the next. Below is a typical state-transition flow for a SELECT. To confirm a problem, however, this alone is sometimes not enough.
T@2: | THD::enter_stage: 'starting' /root/mysql5.7.14/percona-server-5.7.14-7/sql/conn_handler/socket_connection.cc:100
T@2: | | | | | THD::enter_stage: 'checking permissions' /root/mysql5.7.14/percona-server-5.7.14-7/sql/auth/sql_authorization.cc:843
T@2: | | | | | | THD::enter_stage: 'Opening tables' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:5719
T@2: | | | | | THD::enter_stage: 'init' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_select.cc:121
T@2: | | | | | | | THD::enter_stage: 'System lock' /root/mysql5.7.14/percona-server-5.7.14-7/sql/lock.cc:321
T@2: | | | | | | | THD::enter_stage: 'optimizing' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_optimizer.cc:151
T@2: | | | | | | | THD::enter_stage: 'statistics' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_optimizer.cc:386
T@2: | | | | | | | THD::enter_stage: 'preparing' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_optimizer.cc:494
T@2: | | | | | | THD::enter_stage: 'executing' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:119
T@2: | | | | | | THD::enter_stage: 'Sending data' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:195
T@2: | | | | | THD::enter_stage: 'end' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_select.cc:199
T@2: | | | | THD::enter_stage: 'query end' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5174
T@2: | | | | THD::enter_stage: 'closing tables' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5252
T@2: | | | THD::enter_stage: 'freeing items' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5855
T@2: | | THD::enter_stage: 'cleaning up' /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1884
3. Analyzing the pstack in detail
The pstack log is too long to paste here; the detailed analysis is in the bug report mentioned at the beginning. Finding useful information and a coherent explanation in a lengthy pstack is genuinely difficult. My source-code skills are quite limited, so at times I could only confirm things by searching for the critical sections involved. Below is the result of my analysis, as I submitted it in the bug report:
I used pstack to review the stacks and discovered a deadlock.
Analyzing the pstack I found:
Thread 56: holds trx_sys->mutex (taken when wsrep_log_conflicts=ON, lock0lock.cc line 2281); waits for LOCK_wsrep_thd.
Thread 9: holds LOCK_thd_list (mysql_thread_manager.cc line 339); waits for LOCK_thd_data (sql_parse.h line 175).
Thread 26: holds LOCK_thd_data (in PFS_status_variable_cache::do_materialize_all; after PFS_status_variable_cache::manifest it should release LOCK_thd_data, but it hangs); waits for trx_sys->mutex (srv0srv.cc line 1703).
Many threads wait on trx_sys->mutex in trx_allocate_for_mysql().
Many threads wait on LOCK_thd_data in THD::release_resources().
Many threads wait on LOCK_thd_list in Global_THD_manager::add_thd(), plus other mutex waits.
But I could not find which thread holds the LOCK_wsrep_thd mutex.
We are now doing the following in the hope of mitigating the problem:
1. Set wsrep_log_conflicts=OFF.
2. SET GLOBAL optimizer_switch = 'materialization=off';
3. During high load, avoid running SHOW [GLOBAL] STATUS and SELECT * FROM performance_schema.global_status.
In short, I found multiple threads acquiring mutexes in an almost circular pattern, with one link in the cycle unaccounted for. Percona eventually replied as follows:
Your problem sounds quite similar to one mentioned here: https://jira.percona.com/browse/PXC-877 Said release fix the issue https://www.percona.com/blog/2018/01/26/percona-xtradb-cluster-5-7-20-29-24-is-now-available/ You may want to consider an upgrade to latest one though which has more fixes 5.7.21.
Although I was not the first to discover it, at least this confirms that my analysis had identified essentially the same problem. Annoyingly, the answer is once again: upgrade, upgrade.
Source: ITPUB blog, http://blog.itpub.net/7728585/viewspace-2152618/. Please credit the source when reprinting.