1. Delayed Replication
MySQL supports delayed replication such that a replica server deliberately executes transactions later than the source by at least a specified amount of time.
In MySQL 8.0, the method of delaying replication depends on two timestamps, immediate_commit_timestamp and original_commit_timestamp. If all servers in the replication topology are running MySQL 8.0 or above, delayed replication is measured using these timestamps. If either the immediate source or replica is not using these timestamps, the implementation of delayed replication from MySQL 5.7 is used.
The default replication delay is 0 seconds. Use a CHANGE REPLICATION SOURCE TO SOURCE_DELAY=N statement (from MySQL 8.0.23) or a CHANGE MASTER TO MASTER_DELAY=N statement (before MySQL 8.0.23) to set the delay to N seconds. A transaction received from the source is not executed until at least N seconds later than its commit on the immediate source. The delay happens per transaction (not per event, as in previous MySQL versions) and the actual delay is imposed only on gtid_log_event or anonymous_gtid_log_event. The other events in the transaction always follow these events without any waiting time imposed on them.
【START REPLICA and STOP REPLICA take effect immediately and ignore any delay. RESET REPLICA resets the delay to 0.】
The replication_applier_configuration Performance Schema table contains the DESIRED_DELAY column, which shows the delay configured using the SOURCE_DELAY or MASTER_DELAY option. The replication_applier_status Performance Schema table contains the REMAINING_DELAY column, which shows the number of delay seconds remaining.
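As an illustrative sketch (the one-hour value is arbitrary; the CHANGE REPLICATION SOURCE TO syntax requires MySQL 8.0.23 or later), configuring and inspecting a delay might look like this:
STOP REPLICA SQL_THREAD;
CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 3600;  -- delay the applier by one hour
START REPLICA SQL_THREAD;
SELECT DESIRED_DELAY FROM performance_schema.replication_applier_configuration;  -- configured delay
SELECT REMAINING_DELAY FROM performance_schema.replication_applier_status;       -- seconds of delay left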
Delayed replication can be used for several purposes:
• To protect against user mistakes on the source. With a delay you can roll back a delayed replica to the time just before the mistake.
• To test how the system behaves when there is a lag. For example, in an application, a lag might be caused by a heavy load on the replica. However, it can be difficult to generate this load level. Delayed replication can simulate the lag without having to simulate the load. It can also be used to debug conditions related to a lagging replica.
• To inspect what the database looked like in the past, without having to reload a backup. For example, by configuring a replica with a delay of one week, if you then need to see what the database looked like before the last few days' worth of development, the delayed replica can be inspected.
2. Replication Delay Timestamps
MySQL 8.0 provides a new method for measuring delay (also referred to as replication lag) in replication topologies that depends on the following timestamps associated with the GTID of each transaction (instead of each event) written to the binary log.
• original_commit_timestamp: the number of microseconds since epoch when the transaction was written (committed) to the binary log of the original source.
• immediate_commit_timestamp: the number of microseconds since epoch when the transaction was written (committed) to the binary log of the immediate source.
The output of mysqlbinlog displays these timestamps in two formats: microseconds from epoch, and also TIMESTAMP format, which is based on the user-defined time zone for better readability. For example:
#170404 10:48:05 server id 1 end_log_pos 233 CRC32 0x016ce647 GTID last_committed=0 sequence_number=1 original_committed_timestamp=1491299285661130 immediate_commit_timestamp=1491299285843771
# original_commit_timestamp=1491299285661130 (2017-04-04 10:48:05.661130 WEST)
# immediate_commit_timestamp=1491299285843771 (2017-04-04 10:48:05.843771 WEST)
/*!80001 SET @@SESSION.original_commit_timestamp=1491299285661130*//*!*/;
SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1'/*!*/;
# at 233
As a rule, the original_commit_timestamp is always the same on all replicas where the transaction is applied. In source-replica replication, the original_commit_timestamp of a transaction in the (original) source’s binary log is always the same as its immediate_commit_timestamp. In the replica’s relay log, the original_commit_timestamp and immediate_commit_timestamp of the transaction are the same as in the source’s binary log; whereas in its own binary log, the transaction’s immediate_commit_timestamp corresponds to when the replica committed the transaction.
In a Group Replication setup, when the original source is a member of a group, the original_commit_timestamp is generated when the transaction is ready to be committed. In other words, when it finished executing on the original source and its write set is ready to be sent to all members of the group for certification. When the original source is a server outside the group, the original_commit_timestamp is preserved. The same original_commit_timestamp for a particular transaction is replicated to all servers in the group, and to any replica outside the group that is replicating from a member. From MySQL 8.0.26, each recipient of the transaction also stores the local commit time in its binary log using immediate_commit_timestamp.
View change events, which are exclusive to Group Replication, are a special case. Transactions containing these events are generated by each group member but share the same GTID (so, they are not first executed in a source and then replicated to the group, but all members of the group execute and apply the same transaction). Before MySQL 8.0.26, these transactions have their original_commit_timestamp set to zero, and they appear this way in viewable output. From MySQL 8.0.26, for improved observability, group members set local timestamp values for transactions associated with view change events.
3. Monitoring Replication Delay
One of the most common ways to monitor replication delay (lag) in previous MySQL versions was by relying on the Seconds_Behind_Master field in the output of SHOW REPLICA STATUS. However, this metric is not suitable when using replication topologies more complex than the traditional source-replica setup, such as Group Replication. The addition of immediate_commit_timestamp and original_commit_timestamp in MySQL 8.0 provides a much finer degree of information about replication delay. The recommended method to monitor replication delay in a topology that supports these timestamps is to use the following Performance Schema tables.
• replication_connection_status: current status of the connection to the source, provides information on the last and current transaction the connection thread queued into the relay log.
• replication_applier_status_by_coordinator: current status of the coordinator thread that only displays information when using a multithreaded replica, provides information on the last transaction buffered by the coordinator thread to a worker’s queue, as well as the transaction it is currently buffering.
• replication_applier_status_by_worker: current status of the thread(s) applying transactions received from the source, provides information about the transactions applied by the replication SQL thread, or by each worker thread when using a multithreaded replica.
Using these tables you can monitor information about the last transaction the corresponding thread processed and the transaction that thread is currently processing. This information comprises:
• a transaction’s GTID
• a transaction's original_commit_timestamp and immediate_commit_timestamp, retrieved from the replica’s relay log
• the time a thread started processing a transaction
• for the last processed transaction, the time the thread finished processing it
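As an illustration, a query along the following lines (a sketch; the column names are those documented for this Performance Schema table in MySQL 8.0, so verify them against your server version) computes, per worker, how far the last applied transaction lagged behind its commit on the original and immediate sources:
SELECT CHANNEL_NAME, WORKER_ID, LAST_APPLIED_TRANSACTION,
       TIMEDIFF(LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP,
                LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP) AS lag_from_original,
       TIMEDIFF(LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP,
                LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP) AS lag_from_immediate
FROM performance_schema.replication_applier_status_by_worker;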
In addition to the Performance Schema tables, the output of SHOW REPLICA STATUS has three fields that show:
• SQL_Delay: A nonnegative integer indicating the replication delay configured using CHANGE REPLICATION SOURCE TO SOURCE_DELAY=N (from MySQL 8.0.23) or CHANGE MASTER TO MASTER_DELAY=N (before MySQL 8.0.23), measured in seconds.
• SQL_Remaining_Delay: When Replica_SQL_Running_State is Waiting until MASTER_DELAY seconds after master executed event, this field contains an integer indicating the number of seconds left of the delay. At other times, this field is NULL.
• Replica_SQL_Running_State: A string indicating the state of the SQL thread (analogous to Replica_IO_State). The value is identical to the State value of the SQL thread as displayed by SHOW PROCESSLIST.
When the replication SQL thread is waiting for the delay to elapse before executing an event, SHOW PROCESSLIST displays its State value as Waiting until MASTER_DELAY seconds after master executed event.
Replication Notes and Tips
4. Replication Features and Issues
The following sections provide information about what is supported and what is not in MySQL replication, and about specific issues and situations that may occur when replicating certain statements.
Statement-based replication depends on compatibility at the SQL level between the source and replica. In other words, successful statement-based replication requires that any SQL features used be supported by both the source and the replica servers. If you use a feature on the source server that is available only in the current version of MySQL, you cannot replicate to a replica that uses an earlier version of MySQL. Such incompatibilities can also occur within a release series as well as between versions.
With MySQL's statement-based replication, there may be issues with replicating stored routines or triggers. You can avoid these issues by using MySQL's row-based replication instead.
4.1 Replication and AUTO_INCREMENT
Statement-based replication of AUTO_INCREMENT, LAST_INSERT_ID(), and TIMESTAMP values is done correctly, subject to the following exceptions:
• A statement invoking a trigger or function that causes an update to an AUTO_INCREMENT column is not replicated correctly using statement-based replication. These statements are marked as unsafe. (Bug #45677)
• An INSERT into a table that has a composite primary key that includes an AUTO_INCREMENT column that is not the first column of this composite key is not safe for statement-based logging or replication. These statements are marked as unsafe. (Bug #11754117, Bug #45670)
This issue does not affect tables using the InnoDB storage engine, since an InnoDB table with an AUTO_INCREMENT column requires at least one key where the auto-increment column is the only or leftmost column.
• Adding an AUTO_INCREMENT column to a table with ALTER TABLE might not produce the same ordering of the rows on the replica and the source. This occurs because the order in which the rows are numbered depends on the specific storage engine used for the table and the order in which the rows were inserted. If it is important to have the same order on the source and replica, the rows must be ordered before assigning an AUTO_INCREMENT number. Assuming that you want to add an AUTO_INCREMENT column to a table t1 that has columns col1 and col2, the following statements produce a new table t2 identical to t1 but with an AUTO_INCREMENT column:
CREATE TABLE t2 LIKE t1;
ALTER TABLE t2 ADD id INT AUTO_INCREMENT PRIMARY KEY;
INSERT INTO t2 SELECT * FROM t1 ORDER BY col1, col2;
【To guarantee the same ordering on both source and replica, the ORDER BY clause must name all columns of t1.】
The instructions just given are subject to the limitations of CREATE TABLE ... LIKE: Foreign key definitions are ignored, as are the DATA DIRECTORY and INDEX DIRECTORY table options. If a table definition includes any of those characteristics, create t2 using a CREATE TABLE statement that is identical to the one used to create t1, but with the addition of the AUTO_INCREMENT column.
Regardless of the method used to create and populate the copy having the AUTO_INCREMENT column, the final step is to drop the original table and then rename the copy:
DROP TABLE t1;
ALTER TABLE t2 RENAME t1;
4.2 Replication and BLACKHOLE Tables
The BLACKHOLE storage engine accepts data but discards it and does not store it. When performing binary logging, all inserts to such tables are always logged, regardless of the logging format in use. Updates and deletes are handled differently depending on whether statement-based or row-based logging is in use. With the statement-based logging format, all statements affecting BLACKHOLE tables are logged, but their effects are ignored. When using row-based logging, updates and deletes to such tables are simply skipped; they are not written to the binary log. A warning is logged whenever this occurs.
For this reason, we recommend that when you replicate to tables using the BLACKHOLE storage engine, you set the binlog_format server variable to STATEMENT, and not to either ROW or MIXED.
4.3 Replication and Character Sets
The following applies to replication between MySQL servers that use different character sets:
• If the source has databases with a character set different from the global character_set_server value, you should design your CREATE TABLE statements so that they do not implicitly rely on the database default character set. A good workaround is to state the character set and collation explicitly in CREATE TABLE statements.
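For example, a CREATE TABLE statement that names the character set and collation explicitly (the table, columns, and the utf8mb4 choice here are only illustrative) does not depend on the database default character set:
CREATE TABLE articles (
  title VARCHAR(200),
  body  TEXT
) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;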
4.4 Replication and CHECKSUM TABLE
CHECKSUM TABLE returns a checksum that is calculated row by row, using a method that depends on the table row storage format. The storage format is not guaranteed to remain the same between MySQL versions, so the checksum value might change following an upgrade.
4.5 Replication of CREATE SERVER, ALTER SERVER, and DROP SERVER
The statements CREATE SERVER, ALTER SERVER, and DROP SERVER are not written to the binary log, regardless of the binary logging format that is in use.
4.6 Replication of CREATE ... IF NOT EXISTS Statements
MySQL applies these rules when various CREATE ... IF NOT EXISTS statements are replicated:
• Every CREATE DATABASE IF NOT EXISTS statement is replicated, whether or not the database already exists on the source.
• Similarly, every CREATE TABLE IF NOT EXISTS statement without a SELECT is replicated, whether or not the table already exists on the source. This includes CREATE TABLE IF NOT EXISTS ... LIKE.
• CREATE EVENT IF NOT EXISTS is always replicated, whether or not the event named in the statement already exists on the source.
• CREATE USER is written to the binary log only if successful. If the statement includes IF NOT EXISTS, it is considered successful and is logged as long as at least one user named in the statement is created; in such cases, the statement is logged as written, including references to any named users that already existed and were not created.
• (MySQL 8.0.29 and later:) CREATE PROCEDURE IF NOT EXISTS, CREATE FUNCTION IF NOT EXISTS, or CREATE TRIGGER IF NOT EXISTS, if successful, is written in its entirety to the binary log (including the IF NOT EXISTS clause), whether or not the statement raised a warning because the object (procedure, function, or trigger) already existed.
4.7 Replication of CREATE TABLE ... SELECT Statements
MySQL applies these rules when CREATE TABLE ... SELECT statements are replicated:
• CREATE TABLE ... SELECT always performs an implicit commit.
• If the destination table does not exist, logging occurs as follows. It does not matter whether IF NOT EXISTS is present.
- STATEMENT or MIXED format: The statement is logged as written.
- ROW format: The statement is logged as a CREATE TABLE statement followed by a series of insert-row events.
Prior to MySQL 8.0.21, the statement is logged as two transactions. As of MySQL 8.0.21, on storage engines that support atomic DDL, it is logged as one transaction.
• If the CREATE TABLE ... SELECT statement fails, nothing is logged. This includes the case that the destination table exists and IF NOT EXISTS is not given.
• If the destination table exists and IF NOT EXISTS is given, MySQL 8.0 ignores the statement completely; nothing is inserted or logged.
MySQL 8.0 does not allow a CREATE TABLE ... SELECT statement to make any changes in tables other than the table that is created by the statement.
4.8 Replication of CURRENT_USER()
The following statements support use of the CURRENT_USER() function to take the place of the name of, and possibly the host for, an affected user or a definer:
• DROP USER
• RENAME USER
• GRANT
• REVOKE
• CREATE FUNCTION
• CREATE PROCEDURE
• CREATE TRIGGER
• CREATE EVENT
• CREATE VIEW
• ALTER EVENT
• ALTER VIEW
• SET PASSWORD
When binary logging is enabled and CURRENT_USER() or CURRENT_USER is used as the definer in any of these statements, MySQL Server ensures that the statement is applied to the same user on both the source and the replica when the statement is replicated. In some cases, such as statements that change passwords, the function reference is expanded before it is written to the binary log, so that the statement includes the user name. For all other cases, the name of the current user on the source is replicated to the replica as metadata, and the replica applies the statement to the current user named in the metadata, rather than to the current user on the replica.
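For example (a sketch; the view and table names are hypothetical), the following statement uses CURRENT_USER as the definer, and replication ensures it is resolved to the same account on the replica as on the source:
CREATE DEFINER = CURRENT_USER VIEW v_open_orders AS
  SELECT * FROM orders WHERE status = 'open';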
4.9 Replication with Differing Table Definitions on Source and Replica
Source and target tables for replication do not have to be identical. A table on the source can have more or fewer columns than the replica's copy of the table. In addition, corresponding table columns on the source and the replica can use different data types, subject to certain conditions.
In all cases where the source and target tables do not have identical definitions, the database and table names must be the same on both the source and the replica.
Replication with More Columns on Source or Replica
You can replicate a table from the source to the replica such that the source and replica copies of the table have differing numbers of columns, subject to the following conditions:
• Columns common to both versions of the table must be defined in the same order on the source and the replica. (This is true even if both tables have the same number of columns.)
• Columns common to both versions of the table must be defined before any additional columns.
This means that executing an ALTER TABLE statement on the replica where a new column is inserted into the table within the range of columns common to both tables causes replication to fail, as shown in the following example:
Suppose that a table t, existing on the source and the replica, is defined by the following CREATE TABLE statement:
CREATE TABLE t ( c1 INT, c2 INT, c3 INT );
Suppose that the ALTER TABLE statement shown here is executed on the replica:
ALTER TABLE t ADD COLUMN cnew1 INT AFTER c3;
The previous ALTER TABLE is permitted on the replica because the columns c1, c2, and c3 that are common to both versions of table t remain grouped together in both versions of the table, before any columns that differ.
However, the following ALTER TABLE statement cannot be executed on the replica without causing replication to break:
ALTER TABLE t ADD COLUMN cnew2 INT AFTER c2;
Replication fails after execution on the replica of the ALTER TABLE statement just shown, because the new column cnew2 comes between columns common to both versions of t.
• Each “extra” column in the version of the table having more columns must have a default value.
A column's default value is determined by a number of factors, including its type, whether it is defined with a DEFAULT option, whether it is declared as NULL, and the server SQL mode in effect at the time of its creation.
In addition, when the replica's copy of the table has more columns than the source's copy, each column common to the tables must use the same data type in both tables.
Replication of Columns Having Different Data Types
Corresponding columns on the source's and the replica's copies of the same table ideally should have the same data type. However, this is not always strictly enforced, as long as certain conditions are met.
It is usually possible to replicate from a column of a given data type to another column of the same type and same size or width, where applicable, or larger. For example, you can replicate from a CHAR(10) column to another CHAR(10), or from a CHAR(10) column to a CHAR(25) column without any problems. In certain cases, it is also possible to replicate from a column having one data type (on the source) to a column having a different data type (on the replica); when the data type of the source's version of the column is promoted to a type that is the same size or larger on the replica, this is known as attribute promotion.
Attribute promotion can be used with both statement-based and row-based replication, and is not dependent on the storage engine used by either the source or the replica. However, the choice of logging format does have an effect on the type conversions that are permitted; the particulars are discussed later in this section.
【Whether you use statement-based or row-based replication, the replica's copy of the table cannot contain more columns than the source's copy if you wish to employ attribute promotion.】
Statement-based replication. When using statement-based replication, a simple rule of thumb to follow is, “If the statement run on the source would also execute successfully on the replica, it should also replicate successfully”. In other words, if the statement uses a value that is compatible with the type of a given column on the replica, the statement can be replicated. For example, you can insert any value that fits in a TINYINT column into a BIGINT column as well; it follows that, even if you change the type of a TINYINT column in the replica's copy of a table to BIGINT, any insert into that column on the source that succeeds should also succeed on the replica, since it is impossible to have a legal TINYINT value that is large enough to exceed a BIGINT column.
Row-based replication: attribute promotion and demotion. Row-based replication supports attribute promotion and demotion between smaller data types and larger types. It is also possible to specify whether or not to permit lossy (truncated) or non-lossy conversions of demoted column values, as explained later in this section.
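As a sketch, suppose the source defines a column as TINYINT while the replica's copy defines it as SMALLINT (attribute promotion); on the replica you could restrict row-based conversions to non-lossy ones with the replica_type_conversions system variable (slave_type_conversions before MySQL 8.0.26):
STOP REPLICA SQL_THREAD;
SET GLOBAL replica_type_conversions = 'ALL_NON_LOSSY';  -- permit only non-lossy type conversions
START REPLICA SQL_THREAD;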
4.10 Replication and DIRECTORY Table Options
If a DATA DIRECTORY or INDEX DIRECTORY table option is used in a CREATE TABLE statement on the source server, the table option is also used on the replica. This can cause problems if no corresponding directory exists in the replica host file system or if it exists but is not accessible to the replica MySQL server. This can be overridden by using the NO_DIR_IN_CREATE server SQL mode on the replica, which causes the replica to ignore the DATA DIRECTORY and INDEX DIRECTORY table options when replicating CREATE TABLE statements. The result is that MyISAM data and index files are created in the table's database directory.
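One way to do this on the replica (a sketch; NO_DIR_IN_CREATE is usually placed in the replica's option file, but SET PERSIST also keeps the setting across restarts):
SET PERSIST sql_mode = CONCAT(@@GLOBAL.sql_mode, ',NO_DIR_IN_CREATE');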
4.11 Replication of DROP ... IF EXISTS Statements
The DROP DATABASE IF EXISTS, DROP TABLE IF EXISTS, and DROP VIEW IF EXISTS statements are always replicated, even if the database, table, or view to be dropped does not exist on the source. This is to ensure that the object to be dropped no longer exists on either the source or the replica, once the replica has caught up with the source.
DROP ... IF EXISTS statements for stored programs (stored procedures and functions, triggers, and events) are also replicated, even if the stored program to be dropped does not exist on the source.
4.12 Replication and Floating-Point Values
With statement-based replication, values are converted from decimal to binary. Because conversions between decimal and binary representations may be approximate, comparisons involving floating-point values are inexact. This is true for operations that use floating-point values explicitly, or that use values that are converted to floating-point implicitly. Comparisons of floating-point values might yield different results on source and replica servers due to differences in computer architecture, the compiler used to build MySQL, and so forth.
4.13 Replication and FLUSH
Some forms of the FLUSH statement are not logged because they could cause problems if replicated to a replica: FLUSH LOGS and FLUSH TABLES WITH READ LOCK.
The FLUSH TABLES, ANALYZE TABLE, OPTIMIZE TABLE, and REPAIR TABLE statements are written to the binary log and thus replicated to replicas. This is not normally a problem because these statements do not modify table data.
However, this behavior can cause difficulties under certain circumstances. If you replicate the privilege tables in the mysql database and update those tables directly without using GRANT, you must issue a FLUSH PRIVILEGES on the replicas to put the new privileges into effect. In addition, if you use FLUSH TABLES when renaming a MyISAM table that is part of a MERGE table, you must issue FLUSH TABLES manually on the replicas. These statements are written to the binary log unless you specify NO_WRITE_TO_BINLOG or its alias LOCAL.
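For example (a sketch), you can keep a FLUSH out of the source's binary log with the LOCAL alias, and reload directly edited grant tables on a replica yourself:
FLUSH LOCAL TABLES;   -- on the source: not written to the binary log, so not replicated
FLUSH PRIVILEGES;     -- on each replica: put directly edited privilege tables into effect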
4.14 Replication and System Functions
Certain functions do not replicate well under some conditions:
• The USER(), CURRENT_USER() (or CURRENT_USER), UUID(), VERSION(), and LOAD_FILE() functions are replicated without change and thus do not work reliably on the replica unless row-based replication is enabled.
USER() and CURRENT_USER() are automatically replicated using row-based replication when using MIXED mode, and generate a warning in STATEMENT mode. This is also true for VERSION() and RAND().
• For NOW(), the binary log includes the timestamp. This means that the value as returned by the call to this function on the source is replicated to the replica. To avoid unexpected results when replicating between MySQL servers in different time zones, set the time zone on both source and replica.
To explain the potential problems when replicating between servers which are in different time zones, suppose that the source is located in New York, the replica is located in Stockholm, and both servers are using local time. Suppose further that, on the source, you create a table mytable with a column mycol, and at 2009-09-01 12:00:00 New York time you insert a row whose mycol value is taken from NOW(). Local time in Stockholm is 6 hours later than in New York; so, if you issue SELECT NOW() on the replica at that exact same instant, the value 2009-09-01 18:00:00 is returned. For this reason, if you select from the replica's copy of mytable after the CREATE TABLE and INSERT statements have been replicated, you might expect mycol to contain the value 2009-09-01 18:00:00. However, this is not the case; when you select from the replica's copy of mytable, you obtain exactly the same value as on the source, 2009-09-01 12:00:00.
Unlike NOW(), the SYSDATE() function is not replication-safe because it is not affected by SET TIMESTAMP statements in the binary log and is nondeterministic if statement-based logging is used. This is not a problem if row-based logging is used.
An alternative is to use the --sysdate-is-now option to cause SYSDATE() to be an alias for NOW(). This must be done on the source and the replica to work correctly. In such cases, a warning is still issued by this function, but can safely be ignored as long as --sysdate-is-now is used on both the source and the replica.
SYSDATE() is automatically replicated using row-based replication when using MIXED mode, and generates a warning in STATEMENT mode.
• The following restriction applies to statement-based replication only, not to row-based replication. The GET_LOCK(), RELEASE_LOCK(), IS_FREE_LOCK(), and IS_USED_LOCK() functions that handle user-level locks are replicated without the replica knowing the concurrency context on the source. Therefore, these functions should not be used to insert into a source table because the content on the replica would differ. For example, do not issue a statement such as INSERT INTO mytable VALUES(GET_LOCK(...)).
These functions are automatically replicated using row-based replication when using MIXED mode, and generate a warning in STATEMENT mode.
As a workaround for the preceding limitations when statement-based replication is in effect, you can use the strategy of saving the problematic function result in a user variable and referring to the variable in a later statement.
For example, the following single-row INSERT is problematic due to the reference to the UUID() function:
INSERT INTO t VALUES(UUID());
To work around the problem, do this instead:
SET @my_uuid = UUID();
INSERT INTO t VALUES(@my_uuid);
That sequence of statements replicates because the value of @my_uuid is stored in the binary log as a user-variable event prior to the INSERT statement and is available for use in the INSERT.
4.15 Replication and Fractional Seconds Support
MySQL 8.0 permits fractional seconds for TIME, DATETIME, and TIMESTAMP values, with up to microseconds (6 digits) precision.
4.16 Replication of Invoked Features
Replication of invoked features such as loadable functions and stored programs (stored procedures and functions, triggers, and events) provides the following characteristics:
• The effects of the feature are always replicated.
• The following statements are replicated using statement-based replication:
• CREATE EVENT
• ALTER EVENT
• DROP EVENT
• CREATE PROCEDURE
• DROP PROCEDURE
• CREATE FUNCTION
• DROP FUNCTION
• CREATE TRIGGER
• DROP TRIGGER
However, the effects of features created, modified, or dropped using these statements are replicated using row-based replication.
【Attempting to replicate invoked features using statement-based replication produces the warning Statement is not safe to log in statement format. For example, trying to replicate a loadable function with statement-based replication generates this warning because it currently cannot be determined by the MySQL server whether the function is deterministic. If you are absolutely certain that the invoked feature's effects are deterministic, you can safely disregard such warnings.】
Before enabling events that were replicated from the source, you should disable the MySQL Event Scheduler on the replica (using a statement such as SET GLOBAL event_scheduler = OFF;), run any necessary ALTER EVENT statements, restart the server, then re-enable the Event Scheduler on the replica afterward (using a statement such as SET GLOBAL event_scheduler = ON;).
4.17 Replication of JSON Documents
Before MySQL 8.0, an update to a JSON column was always written to the binary log as the complete document. In MySQL 8.0, it is possible to log partial updates to JSON documents, which is more efficient. The logging behavior depends on the format used, as described here:
Statement-based replication. JSON partial updates are always logged as partial updates. This cannot be disabled when using statement-based logging.
Row-based replication. JSON partial updates are not logged as such by default, but instead are logged as complete documents. To enable logging of partial updates, set binlog_row_value_options=PARTIAL_JSON. If a replication source has this variable set, partial updates received from that source are handled and applied by a replica regardless of the replica's own setting for the variable.
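A minimal sketch (the table and column names are hypothetical) of enabling partial JSON logging on the source and issuing an update that can be logged as a partial update:
SET GLOBAL binlog_row_value_options = 'PARTIAL_JSON';  -- allow JSON partial updates to be logged as such under row-based logging
UPDATE orders SET doc = JSON_SET(doc, '$.status', 'shipped') WHERE id = 42;  -- modifies only part of the document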
Servers running MySQL 8.0.2 or earlier do not recognize the log events used for JSON partial updates. For this reason, when replicating to such a server from a server running MySQL 8.0.3 or later, binlog_row_value_options must be disabled on the source by setting this variable to '' (empty string). See the description of this variable for more information.
4.18 Replication and LIMIT
Statement-based replication of LIMIT clauses in DELETE, UPDATE, and INSERT ... SELECT statements is unsafe since the order of the rows affected is not defined. (Such statements can be replicated correctly with statement-based replication only if they also contain an ORDER BY clause.) When such a statement is encountered:
• When using STATEMENT mode, a warning that the statement is not safe for statement-based replication is now issued.
When using STATEMENT mode, warnings are issued for DML statements containing LIMIT even when they also have an ORDER BY clause (and so are made deterministic). This is a known issue. (Bug #42851)
• When using MIXED mode, the statement is now automatically replicated using row-based mode.
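For example (hypothetical table and column names), the first statement below is unsafe for statement-based logging because the set of affected rows is not deterministic, while the second adds ORDER BY on a unique column and is deterministic, although the warning noted above may still be issued:
DELETE FROM t1 WHERE created_at < '2024-01-01' LIMIT 1000;
DELETE FROM t1 WHERE created_at < '2024-01-01' ORDER BY id LIMIT 1000;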
4.19 Replication and LOAD DATA
LOAD DATA is considered unsafe for statement-based logging. When binlog_format=MIXED is set, the statement is logged in row-based format. When binlog_format=STATEMENT is set, note that LOAD DATA does not generate a warning, unlike other unsafe statements.
If you use LOAD DATA with binlog_format=STATEMENT, each replica on which the changes are to be applied creates a temporary file containing the data. The replica then uses a LOAD DATA statement to apply the changes. This temporary file is not encrypted, even if binary log encryption is active on the source. If encryption is required, use row-based or mixed binary logging format instead, for which replicas do not create the temporary file.
If a PRIVILEGE_CHECKS_USER account has been used to help secure the replication channel, it is strongly recommended that you log LOAD DATA operations using row-based binary logging (binlog_format=ROW). If REQUIRE_ROW_FORMAT is set for the channel, row-based binary logging is required. With this logging format, the FILE privilege is not needed to execute the event, so do not give the PRIVILEGE_CHECKS_USER account this privilege. If you need to recover from a replication error involving a LOAD DATA INFILE operation logged in statement format, and the replicated event is trusted, you could grant the FILE privilege to the PRIVILEGE_CHECKS_USER account temporarily, removing it after the replicated event has been applied.
When mysqlbinlog reads log events for LOAD DATA statements logged in statement-based format, a generated local file is created in a temporary directory. These temporary files are not automatically removed by mysqlbinlog or any other MySQL program. If you do use LOAD DATA statements with statement-based binary logging, you should delete the temporary files yourself after you no longer need the statement log.
4.20 Replication and max_allowed_packet
max_allowed_packet sets an upper limit on the size of any single message between the MySQL server and clients, including replicas. If you are replicating large column values (such as might be found in TEXT or BLOB columns) and max_allowed_packet is too small on the source, the source fails with an error, and the replica shuts down the replication I/O (receiver) thread. If max_allowed_packet is too small on the replica, this also causes the replica to stop the I/O thread.
Row-based replication sends all columns and column values for updated rows from the source to the replica, including values of columns that were not actually changed by the update. This means that, when you are replicating large column values using row-based replication, you must take care to set max_allowed_packet large enough to accommodate the largest row in any table to be replicated, even if you are replicating updates only, or you are inserting only relatively small values.
On a multi-threaded replica (with replica_parallel_workers > 0 or slave_parallel_workers > 0), ensure that the system variable replica_pending_jobs_size_max or slave_pending_jobs_size_max is set to a value equal to or greater than the setting for the max_allowed_packet system variable on the source. The default setting for replica_pending_jobs_size_max or slave_pending_jobs_size_max, 128M, is twice the default setting for max_allowed_packet, which is 64M. max_allowed_packet limits the packet size that the source can send, but the addition of an event header can produce a binary log event exceeding this size. Also, in row-based replication, a single event can be significantly larger than the max_allowed_packet size, because the value of max_allowed_packet only limits each column of the table.
The replica actually accepts packets up to the limit set by its replica_max_allowed_packet or slave_max_allowed_packet setting, which defaults to the maximum value of 1GB, to prevent a replication failure due to a large packet. However, the value of replica_pending_jobs_size_max or slave_pending_jobs_size_max controls the memory that is made available on the replica to hold incoming packets. The specified memory is shared among all the replica worker queues.
The value of replica_pending_jobs_size_max or slave_pending_jobs_size_max is a soft limit, and if an unusually large event (consisting of one or multiple packets) exceeds this size, the transaction is held until all the replica workers have empty queues, and then processed. All subsequent transactions are held until the large transaction has been completed. So although unusual events larger than replica_pending_jobs_size_max or slave_pending_jobs_size_max can be processed, the delay to clear the queues of all the replica workers and the wait to queue subsequent transactions can cause lag on the replica and decreased concurrency of the replica workers. replica_pending_jobs_size_max or slave_pending_jobs_size_max should therefore be set high enough to accommodate most expected event sizes.
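As a sketch (the values shown are only illustrative), you might check the source's max_allowed_packet and make sure the replica's pending-jobs limit is at least as large:
SELECT @@GLOBAL.max_allowed_packet;                    -- on the source
STOP REPLICA SQL_THREAD;                               -- on the replica
SET GLOBAL replica_pending_jobs_size_max = 134217728;  -- 128M; slave_pending_jobs_size_max before MySQL 8.0.26
START REPLICA SQL_THREAD;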
4.21 Replication and MEMORY Tables
When a replication source server shuts down and restarts, its MEMORY tables become empty. To replicate this effect to replicas, the first time that the source uses a given MEMORY table after startup, it logs an event that notifies replicas that the table must be emptied by writing a DELETE or (from MySQL 8.0.22) TRUNCATE TABLE statement for that table to the binary log. This generated event is identifiable by a comment in the binary log, and if GTIDs are in use on the server, it has a GTID assigned. The statement is always logged in statement format, even if the binary logging format is set to ROW, and it is written even if read_only or super_read_only mode is set on the server. Note that the replica still has outdated data in a MEMORY table during the interval between the source's restart and its first use of the table. To avoid this interval when a direct query to the replica could return stale data, you can set the init_file system variable to name a file containing statements that populate the MEMORY table on the source at startup.
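For example, if the source is started with --init-file pointing at a file such as /var/lib/mysql-files/load_memory_tables.sql (a hypothetical path and schema), that file could repopulate the MEMORY table from a persistent copy at startup:
INSERT INTO appdb.memory_lookup SELECT * FROM appdb.lookup_persistent;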
4.22 Replication of the mysql System Schema
Data modification statements made to tables in the mysql schema are replicated according to the value of binlog_format; if this value is MIXED, these statements are replicated using row-based format. However, statements that would normally update this information indirectly, such as GRANT, REVOKE, and statements manipulating triggers, stored routines, and views, are replicated to replicas using statement-based replication.
4.23 Replication and the Query Optimizer
It is possible for the data on the source and replica to become different if a statement is written in such a way that the data modification is nondeterministic; that is, left up to the query optimizer. (In general, this is not a good practice, even outside of replication.) Examples of nondeterministic statements include DELETE or UPDATE statements that use LIMIT with no ORDER BY clause.
4.24 Replication and Partitioning
Replication is supported between partitioned tables as long as they use the same partitioning scheme and otherwise have the same structure, except where an exception is specifically allowed.
Replication between tables that have different partitioning is generally not supported. This is because statements (such as ALTER TABLE ... DROP PARTITION) that act directly on partitions in such cases might produce different results on the source and the replica. In the case where a table is partitioned on the source but not on the replica, any statements that operate on partitions of the source's copy of the table fail on the replica. When the replica's copy of the table is partitioned but the source's copy is not, statements that act directly on partitions cannot be run on the source without causing errors there. To avoid stopping replication or creating inconsistencies between the source and replica, always ensure that a table on the source and the corresponding replicated table on the replica are partitioned in the same way.
4.25 Replication and REPAIR TABLE
When used on a corrupted or otherwise damaged table, it is possible for the REPAIR TABLE statement to delete rows that cannot be recovered. However, any such modifications of table data performed by this statement are not replicated, which can cause source and replica to lose synchronization. For this reason, in the event that a table on the source becomes damaged and you use REPAIR TABLE to repair it, you should first stop replication (if it is still running) before using REPAIR TABLE, then afterward compare the source's and replica's copies of the table and be prepared to correct any discrepancies manually, before restarting replication.
4.26 Replication and Server SQL Mode
Using different server SQL mode settings on the source and the replica may cause the same INSERT statements to be handled differently on the source and the replica, leading the source and replica to diverge. For best results, you should always use the same server SQL mode on the source and on the replica. This advice applies whether you are using statement-based or row-based replication.
If you are replicating partitioned tables, using different SQL modes on the source and the replica is likely to cause issues. At a minimum, this is likely to cause the distribution of data among partitions to be different in the source's and replica's copies of a given table. It may also cause inserts into partitioned tables that succeed on the source to fail on the replica.
4.27 Replication and Temporary Tables
In MySQL 8.0, when binlog_format is set to ROW or MIXED, statements that exclusively use temporary tables are not logged on the source, and therefore the temporary tables are not replicated. Statements that involve a mix of temporary and nontemporary tables are logged on the source only for the operations on nontemporary tables, and the operations on temporary tables are not logged. This means that there are never any temporary tables on the replica to be lost in the event of an unplanned shutdown by the replica.
When binlog_format is set to STATEMENT, operations on temporary tables are logged on the source and replicated on the replica, provided that the statements involving temporary tables can be logged safely using statement-based format. In this situation, loss of replicated temporary tables on the replica can be an issue. In statement-based replication mode, CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE statements cannot be used inside a transaction, procedure, function, or trigger when GTIDs are in use on the server (that is, when the enforce_gtid_consistency system variable is set to ON). They can be used outside these contexts when GTIDs are in use, provided that autocommit=1 is set.
Because of the differences in behavior between row-based or mixed replication mode and statement-based replication mode regarding temporary tables, you cannot switch the replication format at runtime, if the change applies to a context (global or session) that contains any open temporary tables.
4.28 Replication Retries and Timeouts
The global value of the system variable replica_transaction_retries (from MySQL 8.0.26) or slave_transaction_retries (before MySQL 8.0.26) sets the maximum number of times for applier threads on a single-threaded or multithreaded replica to automatically retry failed transactions before stopping. Transactions are automatically retried when the SQL thread fails to execute them because of an InnoDB deadlock, or when the transaction's execution time exceeds the InnoDB innodb_lock_wait_timeout value. If a transaction has a non-temporary error that prevents it from succeeding, it is not retried.
The default setting for replica_transaction_retries or slave_transaction_retries is 10, meaning that a failing transaction with an apparently temporary error is retried 10 times before the applier thread stops. Setting the variable to 0 disables automatic retrying of transactions. On a multithreaded replica, the specified number of transaction retries can take place on all applier threads of all channels. The Performance Schema table replication_applier_status shows the total number of transaction retries that took place on each replication channel, in the COUNT_TRANSACTIONS_RETRIES column.
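For example, you can adjust the retry limit and check how many retries have occurred per channel:
SET GLOBAL replica_transaction_retries = 10;  -- slave_transaction_retries before MySQL 8.0.26
SELECT CHANNEL_NAME, COUNT_TRANSACTIONS_RETRIES
FROM performance_schema.replication_applier_status;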
The process of retrying transactions can cause lag on a replica or on a Group Replication group member, which can be configured as a single-threaded or multithreaded replica. The Performance Schema table replication_applier_status_by_worker shows detailed information on transaction retries by the applier threads on a single-threaded or multithreaded replica. This data includes timestamps showing how long it took the applier thread to apply the last transaction from start to finish (and when the transaction currently in progress was started), and how long this was after the commit on the original source and the immediate source. The data also shows the number of retries for the last transaction and the transaction currently in progress, and enables you to identify the transient errors that caused the transactions to be retried. You can use this information to see whether transaction retries are the cause of replication lag, and investigate the root cause of the failures that led to the retries.
4.29 Replication and Time Zones
By default, source and replica servers assume that they are in the same time zone. If you are replicating between servers in different time zones, the time zone must be set on both source and replica. Otherwise, statements depending on the local time on the source are not replicated properly, such as statements that use the NOW() or FROM_UNIXTIME() functions.
Verify that your combination of settings for the system time zone (system_time_zone), server current time zone (the global value of time_zone), and per-session time zones (the session value of time_zone) on the source and replica is producing the correct results. In particular, if the time_zone system variable is set to the value SYSTEM, indicating that the server time zone is the same as the system time zone, this can cause the source and replica to apply different time zones. For example, a source could write the following statement in the binary log:
SET @@session.time_zone='SYSTEM';
If this source and its replica have a different setting for their system time zones, this statement can produce unexpected results on the replica, even if the replica's global time_zone value has been set to match the source's.
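One way to avoid this (a sketch; UTC is only an example, and the change must be made on both source and replica) is to set an explicit, matching time zone instead of relying on SYSTEM:
SET PERSIST time_zone = '+00:00';
SELECT @@GLOBAL.time_zone, @@SESSION.time_zone, @@system_time_zone;  -- verify the settings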
4.30 Replication and TRUNCATE TABLE
TRUNCATE TABLE is normally regarded as a DML statement, and so would be expected to be logged and replicated using row-based format when the binary logging mode is ROW or MIXED. However, this caused issues when logging or replicating, in STATEMENT or MIXED mode, tables that used transactional storage engines such as InnoDB when the transaction isolation level was READ COMMITTED or READ UNCOMMITTED, which precludes statement-based logging. TRUNCATE TABLE is therefore treated for purposes of binary logging and replication as DDL rather than DML, so that it can always be logged as a statement.