Building a Highly Available MySQL Replication Setup with MMM
Source: http://blog.chinaunix.net/u/25264/showart_1811549.html
First, a brief introduction to MMM.
MMM is short for mysql-master-master (the MySQL Master-Master Replication Manager).
The MMM project is hosted on Google Code.
MySQL itself does not ship with a replication failover solution; MMM is what gives a replication setup HA capability.
MySQL replication is well proven for real-world workloads, especially for Web 2.0 sites with heavy read/write load, so in production you also have to keep the MySQL service stable and performant.
Master-Master Replication
1. Two MySQL servers, db1 and db2, act as master and slave of each other:
db1 is the master of db2: whenever data is written to db1, db2 periodically pulls the updates from db1;
db2 is in turn the master of db1: whenever data is written to db2, db1 periodically pulls the updates from db2.
2. From the AppServer's point of view, however, only one node (db1) acts as master at any given time while the other (db2) acts as slave; the two nodes must never both act as master. In other words, the AppServer always sends writes to one database (db1) until db1 fails and a switchover occurs.
3. If the node acting as slave (db2) fails:
a) the AppServer must send all reads and writes to db1, and reads no longer go to db2;
b) once db2 recovers, it resumes the slave role and the AppServer is told that reads may be sent to it again.
4. If the node acting as master (db1) fails:
a) the AppServer must switch all writes from db1 to db2, i.e. db2 takes over the master role;
b) when db1 recovers, it comes back as a slave and db2 keeps the master role.
MMM relies on virtual IP addresses: a single network interface can carry several IPs at the same time.
(So an MMM setup needs 2*n+1 IP addresses, where n is the number of MySQL nodes, masters and slaves included.)
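MMM assigns and moves these virtual IPs for you, but as a minimal illustration of the idea (not part of the MMM procedure itself), you can add and remove a secondary address on a NIC by hand; the address used here is one of the reader VIPs from this article's environment:
ip addr add 192.168.1.229/24 dev eth0    # attach a second IP to eth0
ip addr show eth0                        # eth0 now answers on both addresses
ip addr del 192.168.1.229/24 dev eth0    # remove it again before letting MMM manage it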
MMM has three important components:
1. mmmd_mon - runs on the monitoring (control) node.
2. mmm_control - runs on the monitoring (control) node.
3. mmmd_agent - runs on each database node (master/slave).
Environment used in this article:
server1 ip: 192.168.1.225 ; virtual read ip:192.168.1.229
server2 ip: 192.168.1.226 ; virtual read ip:192.168.1.230
server3 ip: 192.168.1.227
virtual write ip: 192.168.1.231
The configuration process is straightforward:
server1 and server2: bidirectional (master-master) replication
server1 and server2: install MMM and configure mmm_agent.conf
server3: install MMM and configure mmm_mon.conf
1. First set up bidirectional master-master replication between server1 and server2. This article does not go into that step in detail; a minimal sketch follows.
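For reference, here is a rough sketch of that step. The replication account repl and its password are placeholders (they are not created anywhere else in this article), and the binary log coordinates must be taken from SHOW MASTER STATUS on the peer:
# my.cnf on server1 (server2 uses server-id = 2 and auto_increment_offset = 2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

-- run on both servers: the account the peer replicates with
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.%' IDENTIFIED BY 'ReplPass';

-- run on server1, pointing at server2 (mirror the host and coordinates on server2)
CHANGE MASTER TO
  MASTER_HOST='192.168.1.226',
  MASTER_USER='repl',
  MASTER_PASSWORD='ReplPass',
  MASTER_LOG_FILE='mysql-bin.000001',   -- value from SHOW MASTER STATUS on server2
  MASTER_LOG_POS=106;                   -- value from SHOW MASTER STATUS on server2
START SLAVE;
SHOW SLAVE STATUS\G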
2. Install and deploy MMM
The MMM software must be installed on all three servers. The installation steps are as follows.
Install the following Perl modules first to prepare for the MMM installation:
cpan Algorithm::Diff
cpan Proc::Daemon
cpan Time::HiRes
cpan DBI
cpan DBD::mysql
wget
tar xjf mmm-1.0-pre2.tar.bz2
cd mmm-1.0-pre2
./install.pl
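You can confirm that the Perl prerequisites are actually available with a quick one-liner (just a sanity check, not part of the original instructions):
perl -MAlgorithm::Diff -MProc::Daemon -MTime::HiRes -MDBI -MDBD::mysql -e 'print "MMM Perl prerequisites OK\n"'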
3. Configure MMM on server1
Edit /usr/local/mmm/etc/mmm_agent.conf as follows:
#
# Master-Master Manager config (agent)
#
# Debug mode
debug no
# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid
bin_path /usr/local/mmm/bin
# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug
log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap
# MMMD command socket tcp-port and ip
bind_port 9989
# Cluster interface
cluster_interface eth0
# Define current server id
this db1
mode master
# For masters
peer db2
# Cluster hosts addresses and access params
host db1
    ip 192.168.1.225
    port 3306
    user rep_agent
    password RepAgent
host db2
    ip 192.168.1.226
    port 3306
    user rep_agent
    password RepAgent
Create the MySQL accounts used by the MMM agent and monitor:
GRANT ALL PRIVILEGES ON *.* TO 'rep_agent'@'%' IDENTIFIED BY 'RepAgent';
GRANT ALL PRIVILEGES ON *.* TO 'rep_monitor'@'%' IDENTIFIED BY 'RepMonitor';
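To verify that these accounts can reach MySQL over the network, a quick check from server3 against db1 (repeat against 192.168.1.226 for db2; this is an extra sanity check, not part of the original procedure):
mysql -h 192.168.1.225 -P 3306 -u rep_agent -pRepAgent -e "SELECT 1"
mysql -h 192.168.1.225 -P 3306 -u rep_monitor -pRepMonitor -e "SELECT 1"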
4. Configure MMM on server2
Edit /usr/local/mmm/etc/mmm_agent.conf as follows (identical to server1 except for the this and peer lines):
#
# Master-Master Manager config (agent)
#
# Debug mode
debug no
# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid
bin_path /usr/local/mmm/bin
# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug
log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap
# MMMD command socket tcp-port and ip
bind_port 9989
# Cluster interface
cluster_interface eth0
# Define current server id
this db2
mode master
# For masters
peer db1
# Cluster hosts addresses and access params
host db1
    ip 192.168.1.225
    port 3306
    user rep_agent
    password RepAgent
host db2
    ip 192.168.1.226
    port 3306
    user rep_agent
    password RepAgent
Create the same MySQL accounts here as well:
GRANT ALL PRIVILEGES ON *.* TO 'rep_agent'@'%' IDENTIFIED BY 'RepAgent';
GRANT ALL PRIVILEGES ON *.* TO 'rep_monitor'@'%' IDENTIFIED BY 'RepMonitor';
5. Configure MMM on server3 (the monitor)
Edit /usr/local/mmm/etc/mmm_mon.conf as follows:
#
# Master-Master Manager config (monitor)
#
# Debug mode
debug no
# Paths
pid_path /usr/local/mmm/var/mmmd.pid
status_path /usr/local/mmm/var/mmmd.status
bin_path /usr/local/mmm/bin
# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug
log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap
    email root@localhost
# MMMD command socket tcp-port
bind_port 9988
agent_port 9989
monitor_ip 127.0.0.1
# Cluster interface
cluster_interface eth0
# Cluster hosts addresses and access params
host db1
    ip 192.168.1.225
    port 3306
    user rep_monitor
    password RepMonitor
    mode master
    peer db2
host db2
    ip 192.168.1.226
    port 3306
    user rep_monitor
    password RepMonitor
    mode master
    peer db1
#
# Define roles
#
active_master_role writer
# Mysql Reader role
role reader
    mode balanced
    servers db1, db2
    ip 192.168.1.229, 192.168.1.230
# Mysql Writer role
role writer
    mode exclusive
    servers db1, db2
    ip 192.168.1.231
#
# Checks parameters
#
# Ping checker
check ping
    check_period 1
    trap_period 5
    timeout 2
# Mysql checker
# (restarts after 10000 checks to prevent memory leaks)
check mysql
    check_period 1
    trap_period 2
    timeout 2
    restart_after 10000
# Mysql replication backlog checker
# (restarts after 10000 checks to prevent memory leaks)
check rep_backlog
    check_period 5
    trap_period 10
    max_backlog 60
    timeout 2
    restart_after 10000
# Mysql replication threads checker
# (restarts after 10000 checks to prevent memory leaks)
check rep_threads
    check_period 1
    trap_period 5
    timeout 2
    restart_after 10000
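With these roles defined, applications should connect to MMM's virtual IPs rather than to the physical servers: all writes go to the exclusive writer IP, and reads can use either reader IP. A rough illustration with the stock mysql client; app_user and the test.t1 table are placeholders, not objects created anywhere in this article:
mysql -h 192.168.1.231 -u app_user -p -e "INSERT INTO test.t1 VALUES (1)"   # writes -> writer VIP
mysql -h 192.168.1.229 -u app_user -p -e "SELECT * FROM test.t1"            # reads  -> either reader VIP
mysql -h 192.168.1.230 -u app_user -p -e "SELECT * FROM test.t1"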
Startup
Start mmmd_agent on server1 and server2:
[root@localhost ~]# mmmd_agent
[root@localhost ~]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9989 0.0.0.0:* LISTEN 26010/perl
If port 9989 shows up in the listing, the agent started successfully.
Start mmmd_mon on server3:
[root@localhost etc]# mmmd_mon
Reading config file: 'mmm_mon.conf'
$VAR1 = {
    'db2' => {
        'roles' => [
            'reader(192.168.1.229;)'
        ],
        'version' => '0',
        'state' => 'ONLINE'
    },
    'db1' => {
        'roles' => [
            'reader(192.168.1.230;)',
            'writer(192.168.1.231;)'
        ],
        'version' => '0',
        'state' => 'ONLINE'
    }
};
Role: 'reader(192.168.1.229;)'
Adding role: 'reader' with ip '192.168.1.229'
Role: 'reader(192.168.1.230;)'
Adding role: 'reader' with ip '192.168.1.230'
Role: 'writer(192.168.1.231;)'
Adding role: 'writer' with ip '192.168.1.231'
On the management node (server3), bring the db nodes online:
mmm_control set_online db1
mmm_control set_online db2
Check the node status:
[root@localhost etc]# mmm_control show
Config file: mmm_mon.conf
Daemon is running!
Servers status:
db1(192.168.1.225): master/ONLINE. Roles: reader(192.168.1.230;), writer(192.168.1.231;)
db2(192.168.1.226): master/ONLINE. Roles: reader(192.168.1.229;)
OK. If you see output like this, your configuration is working; you can now break MySQL in whatever ways you like and test how MMM handles the failures.
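A simple failover drill to finish with. Stopping MySQL via the init script is an assumption here; the exact service command varies by distribution:
# on db1: simulate a master failure
/etc/init.d/mysql stop
# on server3: within a few seconds the writer role should move to db2
mmm_control show
# on db2: the writer VIP 192.168.1.231 should now be bound to eth0
ip addr show eth0
# on db1: bring MySQL back up
/etc/init.d/mysql start
# on server3: if the monitor does not bring db1 back automatically, set it online by hand
mmm_control set_online db1
mmm_control show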