MySQL Cluster Setup
Reference: http://blog.csdn.net/zklth/article/details/7522677
I. Environment Preparation:
Note: the firewall must be turned off on all nodes.
/etc/init.d/iptables status #check the firewall status
/etc/init.d/iptables stop #stop the firewall
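A quick way to double-check connectivity once the firewall is down (a minimal sketch, assuming the nc/netcat utility is installed; 1186 is the default ndb_mgmd port and 10.30.9.204 is the management node used later in this article):
nc -z 10.30.9.204 1186 && echo "management port 1186 reachable"   #run from any other node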
1. Download the software:
ftp://mirror.switch.ch/mirror/mysql/Downloads/MySQL-Cluster-7.1/
Select and download mysql-cluster-gpl-7.1.19.tar.gz
2. Hardware: five physical nodes are required (this was tried on virtual machines and did not work)
3. Software installation: perform the following steps on every node
######################################################################################################
groupadd mysql #create the mysql group
useradd -g mysql mysql #create the mysql user
#create the directories
mkdir -p /opt/mysql-cluster
mkdir -p /opt/mysql-cluster/etc
mkdir -p /opt/mysql-cluster/tmp
mkdir -p /opt/mysql-cluster/data
#unpack the source package
tar zxvf mysql-cluster-gpl-7.1.19.tar.gz
#configure the build
cd mysql-cluster-gpl-7.1.19
./configure --prefix=/opt/mysql-cluster --with-charset=gbk --with-collation=gbk_chinese_ci --with-client-ldflags=-all-static --with-mysqld-ldflags=-all-static --enable-assembler --with-extra-charsets=complex --enable-thread-safe-client --with-big-tables --with-readline --with-ssl --with-embedded-server --enable-local-infile --with-unix-socket-path=/opt/mysql-cluster/tmp/mysql.sock --sysconfdir=/opt/mysql-cluster/etc --without-debug --with-mysqld-user=mysql --with-plugins=max
#build and install
make && make install
######################################################################################################
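Since the same preparation has to be repeated on every node, the steps above can be collected into a small script. This is only a sketch using the paths and tarball name from this article; the configure line is abridged, so substitute the full option list shown above:
#!/bin/bash
# Per-node preparation: create the mysql user, the directory layout, then build and install.
set -e
groupadd mysql
useradd -g mysql mysql
mkdir -p /opt/mysql-cluster/{etc,tmp,data}
tar zxvf mysql-cluster-gpl-7.1.19.tar.gz
cd mysql-cluster-gpl-7.1.19
./configure --prefix=/opt/mysql-cluster \
            --with-unix-socket-path=/opt/mysql-cluster/tmp/mysql.sock \
            --sysconfdir=/opt/mysql-cluster/etc \
            --with-mysqld-user=mysql --with-plugins=max   # plus the remaining options listed above
make && make install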
II. Management Node Configuration
1. Create the cluster configuration file config.ini on the management node
[root@sg204 mysql-cluster]# vi etc/config.ini
[NDBD DEFAULT]
NoOfReplicas= 2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
DataDir=/opt/mysql-cluster/data #data directory
[NDB_MGMD]
HostName= 10.30.9.204
[NDBD]
HostName= 10.30.9.206
DataDir=/opt/mysql-cluster/data
[NDBD]
HostName= 10.30.9.207
DataDir=/opt/mysql-cluster/data
[MYSQLD]
HostName= 10.30.9.208
[MYSQLD]
HostName= 10.30.9.211
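Before starting the management server it can be worth checking that config.ini parses cleanly. ndb_mgmd has a --print-full-config option that prints the parsed configuration and exits; the sketch below assumes the paths used in this article (--configdir is the same option used in the startup commands at the end of this article):
libexec/ndb_mgmd -f etc/config.ini --print-full-config --configdir=/opt/mysql-cluster | less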
2. Start the ndb_mgmd service on the management node
[root@sg204 mysql-cluster]# libexec/ndb_mgmd -f etc/config.ini
MySQL Cluster Management Server mysql-5.1.56 ndb-7.1.19
The ndb_mgmd service is now running; its process can be checked with ps:
[root@sg204 mysql-cluster]# ps -ef | grep ndb
root 23505 1 0 May11 ? 00:00:00 libexec/ndb_mgmd -f etc/config.ini
root 24692 24238 0 01:29 pts/1 00:00:00 grep ndb
Check the cluster status with the show command in the ndb_mgm client:
[root@sg204 mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 10.30.9.206)
id=3 (not connected, accepting connect from 10.30.9.207)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.30.9.204 (mysql-5.1.56 ndb-7.1.19)
[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 10.30.9.208)
id=5 (not connected, accepting connect from 10.30.9.211)
The output shows that the two data nodes and the two SQL nodes have not been started yet.
ndb_mgm> exit #leave the management client
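ndb_mgm can also be driven non-interactively with -e, which is convenient in scripts and for quick checks:
bin/ndb_mgm -e show          #same output as the interactive show above
bin/ndb_mgm -e "all status"  #brief status line for every data node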
III. Data Node Configuration
Perform steps 1, 2, 3 and 4 below on both data nodes.
1. Initialize the data dictionary (MySQL system tables)
Specify the data directory and the user:
[root@sg206 mysql-cluster]# bin/mysql_install_db --user=mysql --datadir=/opt/mysql-cluster/data --basedir=/opt/mysql-cluster
WARNING: The host 'sg206' could not be looked up with resolveip.
This probably means that your libc libraries are not 100 % compatible
with this binary MySQL version. The MySQL daemon, mysqld, should work
normally with the exception that host name resolving will not work.
This means that you should use IP addresses instead of hostnames
when specifying MySQL privileges !
Installing MySQL system tables...
OK
Filling help tables...
OK
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/opt/mysql-cluster/bin/mysqladmin -u root password 'new-password'
/opt/mysql-cluster/bin/mysqladmin -u root -h sg206 password 'new-password'
Alternatively you can run:
/opt/mysql-cluster/bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the manual for more instructions.
You can start the MySQL daemon with:
cd /opt/mysql-cluster ; /opt/mysql-cluster/bin/mysqld_safe &
You can test the MySQL daemon with mysql-test-run.pl
cd /opt/mysql-cluster/mysql-test ; perl mysql-test-run.pl
Please report any problems with the /opt/mysql-cluster/scripts/mysqlbug script!
2. Create the my.cnf configuration file
[root@sg207 mysql-cluster]# vim etc/my.cnf
[mysqld]
basedir= /opt/mysql-cluster
datadir= /opt/mysql-cluster/data
user= mysql
port= 3306
socket= /opt/mysql-cluster/tmp/mysql.sock
ndbcluster
#management node IP
ndb-connectstring=10.30.9.204
[mysql_cluster]
#management node IP
ndb-connectstring=10.30.9.204
3. Start the ndbd service
The ndbd service needs the --initial option only on its very first start; it is not needed afterwards.
[root@sg207 mysql-cluster]# libexec/ndbd --initial
2012-05-12 03:53:15 [ndbd] INFO -- Angel connected to '10.30.9.204:1186'
2012-05-12 03:53:15 [ndbd] INFO -- Angel allocated nodeid: 2
4. Change the owner of everything under the mysql-cluster directory to the mysql user and mysql group
[root@sg207 mysql-cluster]# chown -R mysql:mysql *
Once the ndbd service is running on both data nodes, check the cluster status again from either the management node or a data node;
the output shows that ndbd is up on both data nodes:
################################################
[root@sg204 mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.30.9.206 (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0, Master)
id=3 @10.30.9.207 (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.30.9.204 (mysql-5.1.56 ndb-7.1.19)
[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 10.30.9.208)
id=5 (not connected, accepting connect from 10.30.9.211)
###############################################
IV. SQL Node Configuration
On both SQL nodes, perform steps 1, 2 and 4 from section III, then step 1 below.
1. Start the mysqld process
[root@sg208 mysql-cluster]# bin/mysqld_safe --defaults-file=/opt/mysql-cluster/etc/my.cnf --basedir=/opt/mysql-cluster --datadir=/opt/mysql-cluster/data --user=mysql &
If startup fails, check the error log /opt/mysql-cluster/data/sg208.err
Then the cluster status can again be checked from either the management node or a data node:
################################################
[root@sg204 mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.30.9.206 (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0, Master)
id=3 @10.30.9.207 (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.30.9.204 (mysql-5.1.56 ndb-7.1.19)
[mysqld(API)] 2 node(s)
id=4 @10.30.9.208 (mysql-5.1.56 ndb-7.1.19)
id=5 @10.30.9.211 (mysql-5.1.56 ndb-7.1.19)
################################################
All nodes are now connected and running normally.
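The same thing can be confirmed from an SQL node itself: the ndbcluster storage engine should be listed as supported, and its status should show a connection to the management node. A minimal check (run on 10.30.9.208 or 10.30.9.211; root has no password yet at this point):
bin/mysql -uroot -e "SHOW ENGINES\G" | grep -i -A 2 ndbcluster
bin/mysql -uroot -e "SHOW ENGINE NDBCLUSTER STATUS\G"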
V. Testing Data Synchronization on the Two SQL Nodes
Log in to the database on the SQL node 10.30.9.208:
[root@sg208 mysql-cluster]#bin/mysqladmin -uroot password '111111' #set the root password to 111111
[root@sg208 mysql-cluster]#bin/mysql -uroot -p #connect to the MySQL cluster
Enter password: #type the password 111111
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.56-ndb-7.1.19 Source distribution
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database orcl; #create the orcl database
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| ndbinfo |
| orcl |
| test |
+--------------------+
5 rows in set (0.00 sec)
The orcl database now exists on 10.30.9.208.
Log in to the database on 10.30.9.211 in the same way:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| ndbinfo |
| orcl |
| test |
+--------------------+
5 rows in set (0.00 sec)
The orcl database is also present on 10.30.9.211.
Create a table on 10.30.9.208 and insert some data:
use orcl;
create table name(
id int(4) auto_increment not null primary key,
xm char(8),
xb char(2),
csny date) engine=ndbcluster; # engine=ndbcluster is essential; without it the table data will not be synchronized
insert into name values(NULL,'jack','m','1900-01-01');
insert into name values(NULL,'rose','f','1904-01-01');
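If a table was created without engine=ndbcluster by mistake, it does not have to be dropped; ALTER TABLE can switch it to the NDB engine afterwards. A sketch run from the shell on 10.30.9.208 (assuming the root password set above):
bin/mysql -uroot -p111111 -e "ALTER TABLE orcl.name ENGINE=NDBCLUSTER;"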
Now check on 10.30.9.211:
mysql> use orcl
Database changed
mysql> select * from name;
+----+------+------+------------+
| id | xm | xb | csny |
+----+------+------+------------+
| 1 | jack | m | 1900-01-01 |
| 2 | rose | f | 1904-01-01 |
+----+------+------+------------+
2 rows in set (0.00 sec)
The data has been synchronized between the two SQL nodes.
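The synchronization works in both directions. A quick round-trip check from the shell (assuming the root password was set to 111111 on both SQL nodes as above):
#on 10.30.9.211: insert a new row
bin/mysql -uroot -p111111 -e "INSERT INTO orcl.name VALUES(NULL,'mike','m','1910-01-01');"
#on 10.30.9.208: the new row should be visible immediately
bin/mysql -uroot -p111111 -e "SELECT * FROM orcl.name;"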
Shutting down and starting the cluster:
Starting and stopping the NDB nodes
The nodes must be started and stopped in order: start the management node first, then the data nodes, and finally the MySQL (SQL) nodes. To shut down, stop the MySQL nodes first, then shut down the management and data nodes through the management node.
To start:
/usr/bin/ndb_mgmd -f /usr/local/mysql/mysql-cluster/config.ini (start the management node)
/usr/bin/ndbd --initial (start a data node; --initial is only needed on the very first start or after certain configuration changes)
/etc/rc.d/init.d/mysqld start (start the MySQL node)
To stop:
bin/mysqladmin -u root -p shutdown
ndb_mgm -e shutdown
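Putting the shutdown order together for this article's setup (a sketch; the mysqladmin command is run on each SQL node, the ndb_mgm command on the management node):
#1. on each SQL node (10.30.9.208 and 10.30.9.211): stop mysqld first
/opt/mysql-cluster/bin/mysqladmin -uroot -p111111 shutdown
#2. on the management node: shut down the management and data nodes in one go
/opt/mysql-cluster/bin/ndb_mgm -e shutdown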
Starting the cluster:
1. Start the management node:
bin/ndb_mgmd -f /opt/mysql-cluster/etc/config.ini --reload --configdir=/opt/mysql-cluster #after changing the configuration file, --reload is required for the changes to take effect
2. Start the data nodes:
bin/ndbd --connect-string="nodeid=2;host=10.30.9.204:1186" #each data node's nodeid can be seen with show on the management node
3. Start the SQL nodes:
bin/mysqld_safe --defaults-file=/opt/mysql-cluster/etc/my.cnf --basedir=/opt/mysql-cluster --datadir=/opt/mysql-cluster/data --user=mysql &
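To confirm the cluster is fully up after these three steps, ndb_waiter (shipped with MySQL Cluster) blocks until the data nodes report the started state, and a final show gives the overall picture; a sketch assuming the management node address used in this article:
bin/ndb_waiter --ndb-connectstring=10.30.9.204   #returns once all data nodes are started
bin/ndb_mgm --ndb-connectstring=10.30.9.204 -e show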
Appendix:
For a detailed description of the config.ini options, see: http://www.linuxidc.com/Linux/2010-06/26640p2.htm
#scp -P 4400 -r root@10.0.24.103:/home2/backup/ /home/mover00/shadow_bak/sites/
Copies the remote (10.0.24.103) directory /home2/backup/ to the local /home/mover00/shadow_bak/sites/
#scp -P 4400 -r /home2/backup/ root@10.0.24.99:/home/mover00/shadow_bak/sites/
Copies the local /home2/backup/ to the remote (10.0.24.99) /home/mover00/shadow_bak/sites/
Source: ITPUB blog, http://blog.itpub.net/15187685/viewspace-1322261/