MySQL: MGR Study (2): How the Write Set Is Written and Broadcast
My knowledge is limited; please bear with any mistakes.
Source code version: 5.7.22
I. Recap of the previous article
The previous article, <<MySQL: MGR Study (1): Write set>>, explained how the Write set is generated. However, the Write set still has to be packed into the Transaction_context_log_event described below and broadcast to the other nodes for certification. This article describes how the Write set is written into that event and broadcast.
As described in the previous article, the whole transaction's Write set is generated in the function binlog_log_row. In 5.7, every unique key of every row produces one Write set entry (although, according to Song Libing, 8.0 no longer records Write set entries for unique keys). Each entry is an 8-byte uint64 produced by a hash function, and Rpl_transaction_write_set_ctx keeps a vector and a set to store them respectively. If many rows are modified, storing these hash values may require considerable memory: 8 bytes each is small, but a very large transaction that modifies tens of millions of rows in one go can consume hundreds of megabytes. The figure below sketches the Write set memory that has accumulated during transaction execution (before commit).
[Figure: schematic of the Write set memory formed during transaction execution, before commit]
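To make the memory-consumption point concrete, here is a minimal, self-contained C++ sketch of this dual vector-plus-set storage together with a back-of-envelope estimate. The type and member names (WriteSetCtxSketch, add_write_set) are illustrative stand-ins for Rpl_transaction_write_set_ctx, not the server code, and the ~48-byte std::set node overhead is an assumption that varies by standard-library implementation.

// Sketch only: models the dual storage of 8-byte write-set hashes.
#include <cstdint>
#include <iostream>
#include <set>
#include <vector>

struct WriteSetCtxSketch {                  // hypothetical stand-in for Rpl_transaction_write_set_ctx
  std::vector<uint64_t> write_set;          // hashes in insertion order
  std::set<uint64_t>    write_set_unique;   // deduplicated hashes

  void add_write_set(uint64_t hash) {
    write_set.push_back(hash);
    write_set_unique.insert(hash);
  }
};

int main() {
  WriteSetCtxSketch ctx;
  for (uint64_t i = 0; i < 3; ++i)          // tiny demo in place of millions of rows
    ctx.add_write_set(0xABCDEF00 + i);
  std::cout << "stored " << ctx.write_set.size() << " hashes ("
            << ctx.write_set_unique.size() << " unique)\n";

  // Back-of-envelope for a 10-million-row transaction: 8 bytes per hash in
  // the vector, plus roughly 48 bytes per std::set node (value + red-black
  // tree pointers + color + padding; implementation-dependent).
  const uint64_t rows = 10 * 1000 * 1000;
  std::cout << "vector only: " << rows * sizeof(uint64_t) / (1024 * 1024)
            << " MB\n";                     // ~76 MB
  std::cout << "with ~48B/set node: " << rows * (8 + 48) / (1024 * 1024)
            << " MB\n";                     // ~534 MB, i.e. 'hundreds of MB'
  return 0;
}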
II. When the Transaction_context_log_event is generated
While a transaction executes, map events/query events/dml events are generated and continuously written into the binlog cache, and at the same time the Write set hashes are continuously written into Rpl_transaction_write_set_ctx and kept in memory; all of this logic lives in binlog_log_row. The Transaction_context_log_event, however, is only generated at commit time, specifically after MYSQL_BIN_LOG::prepare but before MYSQL_BIN_LOG::ordered_commit. At that moment the binlog events are obviously still in the binlog cache and have not yet been written to the binlog file, so MGR's global certification of the transaction takes place before the binlog events are persisted. Here is the stack:
#0  group_replication_trans_before_commit (param=0x7ffff0e7b8d0) at /root/softm/percona-server-5.7.22-22/rapid/plugin/group_replication/src/observer_trans.cc:511
#1  0x00000000014e4814 in Trans_delegate::before_commit (this=0x2e44800, thd=0x7fffd8000df0, all=false, trx_cache_log=0x7fffd8907a10, stmt_cache_log=0x7fffd8907858, cache_log_max_size=18446744073709547520) at /root/softm/percona-server-5.7.22-22/sql/rpl_handler.cc:325
#2  0x000000000188a386 in MYSQL_BIN_LOG::commit (this=0x2e7b440, thd=0x7fffd8000df0, all=false) at /root/softm/percona-server-5.7.22-22/sql/binlog.cc:8974
#3  0x0000000000f80623 in ha_commit_trans (thd=0x7fffd8000df0, all=false, ignore_global_read_lock=false) at /root/softm/percona-server-5.7.22-22/sql/handler.cc:1830
#4  0x00000000016ddab9 in trans_commit_stmt (thd=0x7fffd8000df0) at /root/softm/percona-server-5.7.22-22/sql/transaction.cc:458
#5  0x00000000015d1a8d in mysql_execute_command (thd=0x7fffd8000df0, first_level=true) at /root/softm/percona-server-5.7.22-22/sql/sql_parse.cc:5293
#6  0x00000000015d3182 in mysql_parse (thd=0x7fffd8000df0, parser_state=0x7ffff0e7e600) at /root/softm/percona-server-5.7.22-22/sql/sql_parse.cc:5901
#7  0x00000000015c6d16 in dispatch_command (thd=0x7fffd8000df0, com_data=0x7ffff0e7ed70, command=COM_QUERY) at /root/softm/percona-server-5.7.22-22/sql/sql_parse.cc:1490
#8  0x00000000015c5aa3 in do_command (thd=0x7fffd8000df0) at /root/softm/percona-server-5.7.22-22/sql/sql_parse.cc:1021
#9  0x000000000170ebb0 in handle_connection (arg=0x3cd32d0) at /root/softm/percona-server-5.7.22-22/sql/conn_handler/connection_handler_per_thread.cc:312
#10 0x0000000001946140 in pfs_spawn_thread (arg=0x3c71630) at /root/softm/percona-server-5.7.22-22/storage/perfschema/pfs.cc:2190
#11 0x00007ffff7bc7851 in start_thread () from /lib64/libpthread.so.0
#12 0x00007ffff651290d in clone () from /lib64/libc.so.6
III. How the content sent for MGR global certification is built
Below is the process as I understand it from a (shallow) reading of the source code:
1. Take the current binlog cache content as cache_log; these are the map/query/dml events already generated during the execution phase.
2. Create a new IO_CACHE, called cache, as temporary storage; its purpose is to hold the Transaction_context_log_event and the Gtid_log_event.
3. Switch cache_log to the READ type and initialize its auxiliary state, such as the offset.
4. Initialize the Transaction_context_log_event.
5. Scan the contents of the write_set_unique set in Rpl_transaction_write_set_ctx and store them into the write_set memory area defined by Transaction_write_set. Note that only the set is used here, not the vector; this step is merely a copy of the Write set into the temporary variable write_set.
6. Fill the contents of write_set into the Transaction_context_log_event, converting to base64 along the way, so that what finally lands in the event is the base64 form of the Write set. Once done, the temporary variable write_set is freed.
7. Write the Transaction_context_log_event into the cache defined in step 2.
8. Create the Gtid_log_event. This only performs some initialization; the GTID itself is not generated yet.
9. Write the Gtid_log_event into the cache defined in step 2.
10. Compare the combined size of cache and cache_log against the value of group_replication_transaction_size_limit, i.e. check whether the transaction's binlog events exceed the configured limit.
11. Switch cache to the READ type and initialize its auxiliary state, such as the offset.
12. Write cache and then cache_log into transaction_msg.
13. Flow-control related work; I have not studied this closely and will look at it properly if I get to study the flow-control mechanism.
14. gcs_module broadcasts transaction_msg to every node.
15. Suspend and wait for the transaction's certification result.
So the whole process is roughly (a simplified sketch follows below):

hashed Write set (a set) -> copied into the write_set variable (a plain array) -> written via base64 encoding into the Transaction_context_log_event -> merged with the other binlog events into transaction_msg -> transaction_msg broadcast by gcs_module to the other nodes -> wait for the certification result
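To tie the fifteen steps together, here is a simplified, hypothetical model of the assembly in plain C++. The helper functions (encode_tcle_with_write_set, encode_gtid_placeholder, binlog_cache_events) and the byte-vector Buffer type are made up for illustration; the real code works on IO_CACHE, Transaction_Message and the GCS API, as shown in the next section.

// Sketch only: byte vectors stand in for IO_CACHE / Transaction_Message.
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

using Buffer = std::vector<uint8_t>;

// Fake serialized events (placeholder contents and sizes).
Buffer encode_tcle_with_write_set() { return Buffer(64, 0x01); }  // TCLE carrying the base64 write set
Buffer encode_gtid_placeholder()    { return Buffer(32, 0x02); }  // GTID event, GTID not yet assigned
Buffer binlog_cache_events()        { return Buffer(256, 0x03); } // map/query/dml events

int main() {
  // Example value only; 0 disables the check, mirroring the real code.
  const uint64_t transaction_size_limit = 150 * 1024 * 1024;

  Buffer cache_log = binlog_cache_events();        // step 1: existing binlog cache content
  Buffer cache;                                    // step 2: temporary cache for TCLE + GTID event

  Buffer tcle = encode_tcle_with_write_set();      // steps 4-7
  cache.insert(cache.end(), tcle.begin(), tcle.end());
  Buffer gle = encode_gtid_placeholder();          // steps 8-9
  cache.insert(cache.end(), gle.begin(), gle.end());

  // Step 10: total size check against the configured limit.
  uint64_t transaction_size = cache_log.size() + cache.size();
  if (transaction_size_limit && transaction_size > transaction_size_limit)
    throw std::runtime_error("exceeds group_replication_transaction_size_limit");

  // Step 12: transaction_msg = [TCLE][GTID placeholder][binlog cache events].
  Buffer transaction_msg;
  transaction_msg.insert(transaction_msg.end(), cache.begin(), cache.end());
  transaction_msg.insert(transaction_msg.end(), cache_log.begin(), cache_log.end());

  // Step 14: broadcast (stubbed here); step 15: block until certification answers.
  std::cout << "broadcasting " << transaction_msg.size() << " bytes\n";
  return 0;
}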
IV. Related source code
1. Relevant part of the group_replication_trans_before_commit function
if (trx_cache_log_position > 0 && stmt_cache_log_position == 0) // the transaction cache is in use
{
  cache_log= param->trx_cache_log;              // point at the transaction IO_CACHE
  cache_log_position= trx_cache_log_position;
}
else if (trx_cache_log_position == 0 && stmt_cache_log_position > 0) // the statement cache is in use
{
  cache_log= param->stmt_cache_log;
  cache_log_position= stmt_cache_log_position;
  is_dml= false;
  may_have_sbr_stmts= true;
}
else
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "We can only use one cache type at a "
                              "time on session %u", param->thread_id);
  shared_plugin_stop_lock->release_read_lock();
  DBUG_RETURN(1);
  /* purecov: end */
}

applier_module->get_pipeline_stats_member_collector()
    ->increment_transactions_local();

DBUG_ASSERT(cache_log->type == WRITE_CACHE);
DBUG_PRINT("cache_log", ("thread_id: %u, trx_cache_log_position: %llu,"
                         " stmt_cache_log_position: %llu", param->thread_id,
                         trx_cache_log_position, stmt_cache_log_position));

/*
  Open group replication cache.
  Reuse the same cache on each session for improved performance.
*/
cache= observer_trans_get_io_cache(param->thread_id,
                                   param->cache_log_max_size); // obtain a new IO_CACHE
if (cache == NULL) // error handling
{
  /* purecov: begin inspected */
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}

// Reinit binlog cache to read.
if (reinit_cache(cache_log, READ_CACHE, 0)) // switch the IO_CACHE to READ type and rewind the position
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "Failed to reinit binlog cache log for read "
                              "on session %u", param->thread_id);
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}

/*
  After this, cache_log should be reinit to old saved value when we
  are going out of the function scope.
*/
reinit_cache_log_required= true;

// Create transaction context.
tcle= new Transaction_context_log_event(param->server_uuid,
                                        is_dml,
                                        param->thread_id,
                                        is_gtid_specified); // initialize the Transaction_context_log_event
if (!tcle->is_valid())
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL,
              "Failed to create the context of the current "
              "transaction on session %u", param->thread_id);
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}

if (is_dml)
{
  Transaction_write_set* write_set= get_transaction_write_set(param->thread_id);
  // fetch the previously built write set into the temporary buffer write_set
  /*
    When GTID is specified we may have empty transactions, that is,
    a transaction may have not write set at all because it didn't
    change any data, it will just persist that GTID as applied.
  */
  if ((write_set == NULL) && (!is_gtid_specified))
  {
    log_message(MY_ERROR_LEVEL,
                "Failed to extract the set of items written "
                "during the execution of the current "
                "transaction on session %u", param->thread_id);
    error= pre_wait_error;
    goto err;
  }

  if (write_set != NULL)
  {
    if (add_write_set(tcle, write_set)) // copy the whole write_set into the Transaction_context_log_event
    {
      /* purecov: begin inspected */
      cleanup_transaction_write_set(write_set); // write_set must be freed even when add_write_set fails
      log_message(MY_ERROR_LEVEL,
                  "Failed to gather the set of items written "
                  "during the execution of the current "
                  "transaction on session %u", param->thread_id);
      error= pre_wait_error;
      goto err;
      /* purecov: end */
    }
    cleanup_transaction_write_set(write_set); // write_set has served its purpose and is freed
    DBUG_ASSERT(is_gtid_specified || (tcle->get_write_set()->size() > 0));
  }
  else
  {
    /*
      For empty transactions we should set the GTID may_have_sbr_stmts.
      See comment at binlog_cache_data::may_have_sbr_stmts().
    */
    may_have_sbr_stmts= true;
  }
}

// Write transaction context to group replication cache.
tcle->write(cache); // write the TCLE (virtual header/body/footer) into the GR cache

// Write Gtid log event to group replication cache.
gle= new Gtid_log_event(param->server_id, is_dml, 0, 1, may_have_sbr_stmts,
                        gtid_specification);
gle->write(cache); // write the GTID event into the GR cache as a placeholder

transaction_size= cache_log_position + my_b_tell(cache);
if (is_dml && transaction_size_limit &&
    transaction_size > transaction_size_limit)
{
  log_message(MY_ERROR_LEVEL, "Error on session %u. "
              "Transaction of size %llu exceeds specified limit %lu. "
              "To increase the limit please adjust "
              "group_replication_transaction_size_limit option.",
              param->thread_id, transaction_size,
              transaction_size_limit); // the transaction size limit parameter
  error= pre_wait_error;
  goto err;
}

// Reinit group replication cache to read.
if (reinit_cache(cache, READ_CACHE, 0)) // switch the IO_CACHE to READ type and rewind the position
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "Error while re-initializing an internal "
                              "cache, for read operations, on session %u",
              param->thread_id);
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}

// Copy group replication cache to buffer.
if (transaction_msg.append_cache(cache)) // append to transaction_msg
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "Error while appending data to an internal "
                              "cache on session %u", param->thread_id);
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}

// Copy binlog cache content to buffer.
if (transaction_msg.append_cache(cache_log)) // append to transaction_msg
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "Error while writing binary log cache on "
                              "session %u", param->thread_id);
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}

DBUG_ASSERT(certification_latch != NULL);
if (certification_latch->registerTicket(param->thread_id))
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "Unable to register for getting notifications "
                              "regarding the outcome of the transaction on "
                              "session %u", param->thread_id);
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}

#ifndef DBUG_OFF
DBUG_EXECUTE_IF("test_basic_CRUD_operations_sql_service_interface",
                {
                  DBUG_SET("-d,test_basic_CRUD_operations_sql_service_interface");
                  DBUG_ASSERT(!sql_command_check());
                };);

DBUG_EXECUTE_IF("group_replication_before_message_broadcast",
                {
                  const char act[]= "now wait_for waiting";
                  DBUG_ASSERT(!debug_sync_set_action(current_thd,
                                                     STRING_WITH_LEN(act)));
                });
#endif

/*
  Check if member needs to throttle its transactions to avoid
  cause starvation on the group.
*/
applier_module->get_flow_control_module()->do_wait(); // flow control

// Broadcast the Transaction Message
send_error= gcs_module->send_message(transaction_msg); // GCS broadcast
if (send_error == GCS_MESSAGE_TOO_BIG)
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "Error broadcasting transaction to the group "
                              "on session %u. Message is too big.",
              param->thread_id);
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}
else if (send_error == GCS_NOK)
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "Error while broadcasting the transaction to "
                              "the group on session %u", param->thread_id);
  error= pre_wait_error;
  goto err;
  /* purecov: end */
}

shared_plugin_stop_lock->release_read_lock();

DBUG_ASSERT(certification_latch != NULL);
if (certification_latch->waitTicket(param->thread_id)) // suspend until the certification outcome arrives
{
  /* purecov: begin inspected */
  log_message(MY_ERROR_LEVEL, "Error while waiting for conflict detection "
                              "procedure to finish on session %u",
              param->thread_id);
  error= post_wait_error;
  goto err;
  /* purecov: end */
}
2. The add_write_set function
int add_write_set(Transaction_context_log_event *tcle,
                  Transaction_write_set *set)
{
  DBUG_ENTER("add_write_set");
  int iterator= set->write_set_size; // loop count = the number of write set hashes
  for (int i = 0; i < iterator; i++)
  {
    uchar buff[BUFFER_READ_PKE];
    int8store(buff, set->write_set[i]); // store the 8-byte hash into the buffer
    uint64 const tmp_str_sz= base64_needed_encoded_length((uint64) BUFFER_READ_PKE);
    char *write_set_value= (char *) my_malloc(PSI_NOT_INSTRUMENTED,
                                              static_cast<size_t>(tmp_str_sz),
                                              MYF(MY_WME)); // 13 bytes: (gdb) p tmp_str_sz  $2 = 13
    if (!write_set_value) // allocation failure
    {
      /* purecov: begin inspected */
      log_message(MY_ERROR_LEVEL,
                  "No memory to generate write identification hash");
      DBUG_RETURN(1);
      /* purecov: end */
    }
    if (base64_encode(buff, (size_t) BUFFER_READ_PKE, write_set_value)) // base64-encode the hash
    {
      /* purecov: begin inspected */
      log_message(MY_ERROR_LEVEL,
                  "Base 64 encoding of the write identification hash failed");
      DBUG_RETURN(1);
      /* purecov: end */
    }
    tcle->add_write_set(write_set_value); // finally append the base64-form write set hash to the event
  }
  DBUG_RETURN(0);
}
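A side note on why tmp_str_sz is 13: base64 emits 4 output characters for every 3 input bytes, so an 8-byte hash encodes to ceil(8/3)*4 = 12 characters plus 1 byte for the trailing NUL. The following self-contained sketch uses a hand-rolled encoder (not the server's base64_encode) to show one 8-byte write-set hash taking its base64 form.

// Sketch only: standard base64 of one little-endian 8-byte value.
#include <cstdint>
#include <iostream>
#include <string>

static std::string base64_8bytes(uint64_t hash) {
  static const char tbl[] =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  unsigned char buff[8];
  for (int i = 0; i < 8; ++i)               // little-endian, like int8store()
    buff[i] = static_cast<unsigned char>(hash >> (8 * i));

  std::string out;
  for (int i = 0; i < 8; i += 3) {          // groups of 3 bytes -> 4 chars
    uint32_t n = buff[i] << 16;
    if (i + 1 < 8) n |= buff[i + 1] << 8;
    if (i + 2 < 8) n |= buff[i + 2];
    out += tbl[(n >> 18) & 63];
    out += tbl[(n >> 12) & 63];
    out += (i + 1 < 8) ? tbl[(n >> 6) & 63] : '=';
    out += (i + 2 < 8) ? tbl[n & 63] : '='; // last group has 2 bytes -> one '=' pad
  }
  return out;                               // always 12 chars for 8 input bytes
}

int main() {
  std::cout << base64_8bytes(0x1122334455667788ULL) << "\n";  // "iHdmVUQzIhE="
  std::cout << "encoded length: " << base64_8bytes(0).size()
            << " (+1 NUL = 13)\n";
  return 0;
}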
3. The get_transaction_write_set function
Transaction_write_set* get_transaction_write_set(unsigned long m_thread_id)
{
  DBUG_ENTER("get_transaction_write_set");
  THD *thd= NULL;
  Transaction_write_set *result_set= NULL;
  Find_thd_with_id find_thd_with_id(m_thread_id, false);

  thd= Global_THD_manager::get_instance()->find_thd(&find_thd_with_id);
  if (thd)
  {
    std::set<uint64> *write_set= thd->get_transaction()
        ->get_transaction_write_set_ctx()->get_write_set();
    // Rpl_transaction_write_set_ctx declares: std::set<uint64> *get_write_set();
    unsigned long write_set_size= write_set->size(); // size of the set
    if (write_set_size == 0)
    {
      mysql_mutex_unlock(&thd->LOCK_thd_data);
      DBUG_RETURN(NULL);
    }

    result_set= (Transaction_write_set*)my_malloc(key_memory_write_set_extraction,
                                                  sizeof(Transaction_write_set),
                                                  MYF(0)); // allocate the Transaction_write_set struct
    result_set->write_set_size= write_set_size;
    result_set->write_set=
        (unsigned long long*)my_malloc(key_memory_write_set_extraction,
                                       write_set_size * sizeof(unsigned long long),
                                       MYF(0)); // allocate the flat array
    int result_set_index= 0;
    for (std::set<uint64>::iterator it= write_set->begin(); // copy: note it is the set, not the vector, that is copied into plain memory
         it != write_set->end();
         ++it)
    {
      uint64 temp= *it;
      result_set->write_set[result_set_index++]= temp;
    }
    mysql_mutex_unlock(&thd->LOCK_thd_data);
  }
  DBUG_RETURN(result_set);
}
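A note on why the copy above walks the std::set (write_set_unique) rather than the vector: if the same row/unique-key hash is produced more than once inside one transaction, the set presumably collapses those duplicates before they are handed to add_write_set. A tiny standalone illustration, with variable names chosen to mirror the ones above:

// Sketch only: the set deduplicates what the vector records in order.
#include <cstdint>
#include <iostream>
#include <set>
#include <vector>

int main() {
  std::vector<uint64_t> write_set = {42, 42, 7, 42};  // per-row inserts, in order
  std::set<uint64_t>    write_set_unique(write_set.begin(), write_set.end());

  std::cout << "vector holds " << write_set.size() << " hashes, "
            << "set holds " << write_set_unique.size() << "\n";  // 4 vs 2

  // The copy loop in get_transaction_write_set() walks the set just like
  // this, filling a plain array that add_write_set() later encodes.
  std::vector<uint64_t> flat(write_set_unique.begin(), write_set_unique.end());
  for (uint64_t h : flat) std::cout << h << " ";                 // prints: 7 42
  std::cout << "\n";
  return 0;
}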
From the "ITPUB blog". Link: http://blog.itpub.net/7728585/viewspace-2214332/. Please credit the source when reposting; otherwise legal liability may be pursued.