MySQL Cluster 7: configuration steps for dynamically adding data nodes
1. Edit config.ini on the management node:
shell> vi /usr/local/mysql/mysql-cluster/config.ini
[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster
[ndbd]
Id = 1
HostName = 172.20.86.188
[ndbd]
Id = 2
HostName = 172.20.86.189
[mgm]
HostName = 172.20.86.185
Id = 10
[mysqld]
Id=20
HostName = 172.20.86.185
2. Run ndb_mgm on 185 and check the current cluster configuration:
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: 172.20.86.185:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=1 @172.20.86.188 (5.1.32-ndb-7.0.5, Nodegroup: 0, Master)
id=2 @172.20.86.189 (5.1.32-ndb-7.0.5, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=10 @172.20.86.185 (5.1.32-ndb-7.0.5)
[mysqld(API)] 1 node(s)
id=20 @172.20.86.185 (5.1.32-ndb-7.0.5)
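The same check can be run non-interactively from the shell, which is convenient in scripts; this is just a suggested one-liner, assuming ndb_mgm can reach the management node at 172.20.86.185:
shell> ndb_mgm -c 172.20.86.185 -e "SHOW"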
Add the two new data nodes (186 and 187) to config.ini:
[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster
[ndbd]
Id = 1
HostName = 172.20.86.188
[ndbd]
Id = 2
HostName = 172.20.86.189
[ndbd]
Id = 3
HostName = 172.20.86.186
[ndbd]
Id = 4
HostName = 172.20.86.187
[mgm]
HostName = 172.20.86.185
Id = 10
[mysqld]
Id=20
HostName = 172.20.86.185
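Before restarting the management server, it can be worth sanity-checking the edited file. As an optional step not in the original procedure, ndb_mgmd can parse the file, print the resulting configuration, and exit:
shell> ndb_mgmd -f config.ini --print-full-config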
Stop the management node:
ndb_mgm> 10 STOP
Node 10 has shut down.
Disconnecting to allow Management Server to shutdown
shell>
Reload the configuration file:
shell> ndb_mgmd -f config.ini --reload
2008-12-08 17:29:23 [MgmSrvr] INFO -- NDB Cluster Management Server. 5.1.34-ndb-7.0.7
2008-12-08 17:29:23 [MgmSrvr] INFO -- Reading cluster configuration from 'config.ini'
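In NDB 7.x the management server also keeps a binary configuration cache in its configuration directory. If ndb_mgmd is not started from that directory, --configdir may need to be passed as well; the path below is an assumption based on where config.ini lives in this setup:
shell> ndb_mgmd -f config.ini --reload --configdir=/usr/local/mysql/mysql-cluster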
Check the new cluster status:
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: 172.20.86.185:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1 @172.20.86.188 (5.1.32-ndb-7.0.5, Nodegroup: 0, Master)
id=2 @172.20.86.189 (5.1.32-ndb-7.0.5, Nodegroup: 0)
id=3 (not connected, accepting connect from 172.20.86.186)
id=4 (not connected, accepting connect from 172.20.86.187)
[ndb_mgmd(MGM)] 1 node(s)
id=10 @172.20.86.185 (5.1.32-ndb-7.0.5)
[mysqld(API)] 1 node(s)
id=20 @172.20.86.185 (5.1.32-ndb-7.0.5)
Perform a rolling restart of the existing data nodes:
ndb_mgm> 1 RESTART
Node 1: Node shutdown initiated
Node 1: Node shutdown completed, restarting, no start.
Node 1 is being restarted
ndb_mgm> Node 1: Start initiated (version 7.0.5)
Node 1: Started (version 7.0.5)
ndb_mgm> 2 RESTART
Node 2: Node shutdown initiated
Node 2: Node shutdown completed, restarting, no start.
Node 2 is being restarted
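Each data node should be allowed to finish starting before the next one is restarted. One way to confirm this (an optional check, not part of the original transcript) is the STATUS command in the management client:
ndb_mgm> 1 STATUS
ndb_mgm> 2 STATUS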
Perform a rolling restart of the SQL nodes.
This setup has only one SQL node:
shell> mysqladmin -uroot -ppassword shutdown
shell> mysqld_safe &
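Once the SQL node is back up, an optional sanity check (assuming the same root credentials as above) is to confirm from the mysql client that it has reconnected to the cluster:
mysql> SHOW ENGINE NDBCLUSTER STATUS\G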
Initialize the new data nodes.
Run the following on the newly added data nodes, 186 and 187:
shell> ndbmtd --initial
or
shell> ndbd --initial
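If my.cnf on the new hosts does not already point at the management server, the connect string can be given explicitly; the address below is the management node used throughout this example:
shell> ndbmtd --initial --ndb-connectstring=172.20.86.185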
Log in to the management node and check the cluster status:
Connected to Management Server at: 172.20.86.185:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1 @172.20.86.188 (5.1.32-ndb-7.0.5, Nodegroup: 0, Master)
id=2 @172.20.86.189 (5.1.32-ndb-7.0.5, Nodegroup: 0)
id=3 @172.20.86.186 (5.1.32-ndb-7.0.5, no nodegroup)
id=4 @172.20.86.187 (5.1.32-ndb-7.0.5, no nodegroup)
[ndb_mgmd(MGM)] 1 node(s)
id=10 @172.20.86.185 (5.1.32-ndb-7.0.5)
[mysqld(API)] 1 node(s)
id=20 @172.20.86.185 (5.1.32-ndb-7.0.5)
Create a node group from the new data nodes:
ndb_mgm> CREATE NODEGROUP 3,4
Nodegroup 1 created
ndb_mgm> SHOW
Connected to Management Server at: 172.20.86.185:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1 @172.20.86.188 (5.1.32-ndb-7.0.5, Nodegroup: 0, Master)
id=2 @172.20.86.189 (5.1.32-ndb-7.0.5, Nodegroup: 0)
id=3 @172.20.86.186 (5.1.32-ndb-7.0.5, Nodegroup: 1)
id=4 @172.20.86.187 (5.1.32-ndb-7.0.5, Nodegroup: 1)
[ndb_mgmd(MGM)] 1 node(s)
id=10 @172.20.86.185 (5.1.32-ndb-7.0.5)
[mysqld(API)] 1 node(s)
id=20 @172.20.86.185 (5.1.32-ndb-7.0.5)
Redistributing the data
Existing data is not moved to the new data nodes automatically, so frequently accessed and important tables in particular need to be redistributed.
Command to redistribute a table's data:
ALTER ONLINE TABLE ips REORGANIZE PARTITION;
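The statement has to be issued once per NDBCLUSTER table. As a hedged helper (the database name n is assumed from the -d n option used with ndb_desc below), the affected tables can be listed from INFORMATION_SCHEMA and reorganized one by one:
mysql> SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE ENGINE = 'ndbcluster';
mysql> ALTER ONLINE TABLE n.ips REORGANIZE PARTITION;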
Check how the data is distributed:
ndb_mgm> ALL REPORT MEMORY
Node 1: Data usage is 5%(177 32K pages of total 3200)
Node 1: Index usage is 0%(108 8K pages of total 12832)
Node 2: Data usage is 5%(177 32K pages of total 3200)
Node 2: Index usage is 0%(108 8K pages of total 12832)
Node 3: Data usage is 0%(0 32K pages of total 3200)
Node 3: Index usage is 0%(0 8K pages of total 12832)
Node 4: Data usage is 0%(0 32K pages of total 3200)
Node 4: Index usage is 0%(0 8K pages of total 12832)
Alternatively:
shell> ndb_desc -c 172.20.86.185 -d n ips -p
-- ips --
Version: 1
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 6
Number of primary keys: 1
Length of frm data: 340
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 2
TableStatus: Retrieved
-- Attributes --
id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
addresses Bigunsigned NULL AT=FIXED ST=MEMORY
date Bigunsigned NULL AT=FIXED ST=MEMORY
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
-- Per partition info --
Partition Row count Commit count Frag fixed memory Frag varsized memory
0 26086 26086 1572864 557056
1 26329 26329 1605632 557056
NDBT_ProgramExit: 0 - OK
You can cause the data to be redistributed among all of the data nodes by performing, for each NDBCLUSTER table, an ALTER ONLINE TABLE ... REORGANIZE PARTITION statement in the mysql client. After issuing ALTER ONLINE TABLE ips REORGANIZE PARTITION, ndb_desc shows that the data for this table is now stored using 4 partitions:
shell> ndb_desc -c 172.20.86.185 -d n ips -p
-- ips --
Version: 16777217
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 6
Number of primary keys: 1
Length of frm data: 341
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 4
TableStatus: Retrieved
-- Attributes --
id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
addresses Bigunsigned NULL AT=FIXED ST=MEMORY
date Bigunsigned NULL AT=FIXED ST=MEMORY
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
-- Per partition info --
Partition Row count Commit count Frag fixed memory Frag varsized memory
0 12981 52296 1572864 557056
1 13236 52515 1605632 557056
2 13105 13105 819200 294912
3 13093 13093 819200 294912
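REORGANIZE PARTITION moves rows onto the new node group, but the space those rows previously occupied on nodes 1 and 2 is not reclaimed automatically; running OPTIMIZE TABLE on each reorganized table is the usual way to free it (an optional follow-up, not shown in the original transcript):
mysql> OPTIMIZE TABLE ips;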