A Robust MongoDB Cluster: Sharding with Replica Sets

Posted by jx_yu on 2015-01-16

1.    MongoDB Sharding + Replica Sets

A robust cluster design

Multiple config servers, multiple mongos servers, every shard backed by a replica set, and the write concern (w) set correctly (a sketch of the last point follows below).
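For example, with every shard backed by a replica set, a client can ask for each write to be acknowledged by a majority of the set's members. A minimal sketch in the 2.6 mongo shell (the collection and values here are illustrative):

mongos> db.yujx.insert(

...     { "id" : 1 },

...     { writeConcern: { w: "majority", wtimeout: 5000 } }

... )

WriteResult({ "nInserted" : 1 })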

Architecture diagram (image not reproduced here)

Notes:

1.   This lab runs everything on a single machine, starting separate mongod instances with different ports and dbpath values.

2.   There are 9 mongod instances in total, organized into three replica sets (shard1, shard2, shard3), each with one primary and two secondaries.

3.   The number of mongos processes is not limited. It is recommended to run a mongos on each application server, so that every application server talks to its own local mongos; if one server fails, the other application servers keep talking to their own mongos instances unaffected.

4.   This lab simulates two application servers (two mongos services).

5.   In production, every shard should be a replica set, so that the failure of a single server does not take the shard offline.

Environment plan (reconstructed from the commands below; everything runs on 192.168.211.217):

shard1 replica set: ports 10001, 10002, 10003 (dbpath /data/mongodb/shard1/node{1,2,3})

shard2 replica set: ports 20001, 20002, 20003 (dbpath /data/mongodb/shard2/node{1,2,3})

shard3 replica set: ports 30001, 30002, 30003 (dbpath /data/mongodb/shard3/node{1,2,3})

config servers: ports 10000, 20000, 30000 (dbpath /data/mongodb/config/node{1,2,3})

mongos routers: ports 40000, 50000
Deployment environment

Create the directories

~]# mkdir -p /data/mongodb/shard{1,2,3}/node{1,2,3}

~]# mkdir -p /data/mongodb/shard{1,2,3}/logs

~]# ls /data/mongodb/shard*

/data/mongodb/shard1:

logs  node1  node2  node3

/data/mongodb/shard2:

logs  node1  node2  node3

/data/mongodb/shard3:

logs  node1  node2  node3

~]# mkdir -p /data/mongodb/config/logs

~]# mkdir -p /data/mongodb/config/node{1,2,3}

~]# ls /data/mongodb/config/

logs  node1  node2  node3

~]# mkdir -p /data/mongodb/mongos/logs

Start the config servers

Config server plan:

node1: dbpath /data/mongodb/config/node1, logpath /data/mongodb/config/logs/node1.log, port 10000

node2: dbpath /data/mongodb/config/node2, logpath /data/mongodb/config/logs/node2.log, port 20000

node3: dbpath /data/mongodb/config/node3, logpath /data/mongodb/config/logs/node3.log, port 30000

#Start the three config servers per the plan; this is the same as starting a single config server, just repeated three times

~]# mongod --dbpath /data/mongodb/config/node1 --logpath /data/mongodb/config/logs/node1.log --logappend --fork --port 10000

~]# mongod --dbpath /data/mongodb/config/node2 --logpath /data/mongodb/config/logs/node2.log --logappend --fork --port 20000

~]# mongod --dbpath /data/mongodb/config/node3 --logpath /data/mongodb/config/logs/node3.log --logappend --fork --port 30000

~]# ps -ef|grep mongod|grep -v grep

root      3983     1  0 11:10 ?        00:00:03 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node1 --logpath /data/mongodb/config/logs/node1.log --logappend --fork --port 10000

root      4063     1  0 11:13 ?        00:00:02 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node2 --logpath /data/mongodb/config/logs/node2.log --logappend --fork --port 20000

root      4182     1  0 11:17 ?        00:00:03 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node3 --logpath /data/mongodb/config/logs/node3.log --logappend --fork --port 30000
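An aside: these config servers are started as ordinary mongod processes. Production deployments normally add the --configsvr flag, which marks the instance as a config server and defaults its port to 27019. A sketch of that variant (not used in this lab):

~]# mongod --configsvr --dbpath /data/mongodb/config/node1 --logpath /data/mongodb/config/logs/node1.log --logappend --fork --port 10000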

Start the mongos routers

Mongos server plan (a mongos stores no data, so it needs no dbpath):

mongos1: logpath /data/mongodb/mongos/logs/mongos1.log, port 40000

mongos2: logpath /data/mongodb/mongos/logs/mongos2.log, port 50000

#The number of mongos processes is unlimited; typically each application server runs one local mongos

~]#mongos --port 40000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos1.log  --logappend --fork

~]#mongos --port 50000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos2.log  --logappend --fork

~]# ps -ef|grep mongos|grep -v grep

root      4421     1  0 11:29 ?        00:00:00 /usr/local/mongodb/bin/mongos --port 40000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos1.log --logappend --fork

root      4485     1  0 11:29 ?        00:00:00 /usr/local/mongodb/bin/mongos --port 50000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos2.log --logappend  --fork

Configure the replica sets

Per the plan, configure and start the shard1, shard2, and shard3 replica sets.

#shard1 is used below to illustrate the procedure

#Start three mongod processes

~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node1 --logpath /data/mongodb/shard1/logs/node1.log --logappend --fork --port 10001

~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node2 --logpath /data/mongodb/shard1/logs/node2.log --logappend --fork --port 10002

~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node3 --logpath /data/mongodb/shard1/logs/node3.log --logappend --fork --port 10003

#Initialize the replica set shard1

~]# /usr/local/mongodb/bin/mongo --port 10001

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:10001/test

> use admin

switched to db admin

> rsconf={

...   "_id" : "shard1",

...   "members" : [

...       {

...           "_id" : 0,

...           "host" : "192.168.211.217:10001"

...       }

...   ]

... }

{

        "_id" : "shard1",

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:10001"

                }

        ]

}

>  rs.initiate(rsconf)

{

        "info" : "Config now saved locally.  Should come online in about a minute.",

        "ok" : 1

}

> rs.add("192.168.211.217:10002")

{ "ok" : 1 }

shard1:PRIMARY> rs.add("192.168.211.217:10003")

{ "ok" : 1 }

shard1:PRIMARY>  rs.conf()

{

        "_id" : "shard1",

        "version" : 3,

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:10001"

                },

                {

                        "_id" : 1,

                        "host" : "192.168.211.217:10002"

                },

                {

                        "_id" : 2,

                        "host" : "192.168.211.217:10003"

                }

        ]

}
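As an aside, all three members could be listed in rsconf up front, so that a single rs.initiate() replaces the later rs.add() calls; a sketch:

> rsconf = {

...   "_id" : "shard1",

...   "members" : [

...       { "_id" : 0, "host" : "192.168.211.217:10001" },

...       { "_id" : 1, "host" : "192.168.211.217:10002" },

...       { "_id" : 2, "host" : "192.168.211.217:10003" }

...   ]

... }

> rs.initiate(rsconf)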

Configure shard2 and shard3 as replica sets the same way as shard1 (the shard2 startup commands are sketched just below).
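The shard2 startup commands follow the same pattern (shard3 is analogous on ports 30001-30003):

~]#mongod --replSet shard2 --dbpath /data/mongodb/shard2/node1 --logpath /data/mongodb/shard2/logs/node1.log --logappend --fork --port 20001

~]#mongod --replSet shard2 --dbpath /data/mongodb/shard2/node2 --logpath /data/mongodb/shard2/logs/node2.log --logappend --fork --port 20002

~]#mongod --replSet shard2 --dbpath /data/mongodb/shard2/node3 --logpath /data/mongodb/shard2/logs/node3.log --logappend --fork --port 20003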

#The final replica set configurations look like this:

shard3:PRIMARY> rs.conf()

{

        "_id" : "shard3",

        "version" : 3,

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:30001"

                },

                {

                        "_id" : 1,

                        "host" : "192.168.211.217:30002"

                },

                {

                        "_id" : 2,

                        "host" : "192.168.211.217:30003"

                }

        ]

}

~]# /usr/local/mongodb/bin/mongo --port 20001

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:20001/test

shard2:PRIMARY> rs.conf()

{

        "_id" : "shard2",

        "version" : 3,

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:20001"

                },

                {

                        "_id" : 1,

                        "host" : "192.168.211.217:20002"

                },

                {

                        "_id" : 2,

                        "host" : "192.168.211.217:20003"

                }

        ]

}

~]# /usr/local/mongodb/bin/mongo --port 10001

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:10001/test

shard1:PRIMARY> rs.conf()

{

        "_id" : "shard1",

        "version" : 3,

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:10001"

                },

                {

                        "_id" : 1,

                        "host" : "192.168.211.217:10002"

                },

                {

                        "_id" : 2,

                        "host" : "192.168.211.217:10003"

                }

        ]

}

The mongo-related processes and their ports now look like this (screenshot not reproduced; a quick check is sketched below):

#This matches the environment plan exactly
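A quick way to confirm the listening ports (expect 10000-10003, 20000-20003, 30000-30003, 40000 and 50000):

~]# netstat -lntp | grep mongo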

Add the (replica set) shards

#Connect to a mongos and switch to admin; you must go through the router here

~]# /usr/local/mongodb/bin/mongo --port 40000

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:40000/test

mongos> use admin

switched to db admin

mongos> db

admin

mongos> db.runCommand({"addShard":"shard1/192.168.211.217:10001"})

{ "shardAdded" : "shard1", "ok" : 1 }

mongos> db.runCommand({"addShard":"shard2/192.168.211.217:20001"})

{ "shardAdded" : "shard2", "ok" : 1 }

mongos> db.runCommand({"addShard":"shard3/192.168.211.217:30001"})

{ "shardAdded" : "shard3", "ok" : 1 }

mongos> db.runCommand({listshards:1})

{

        "shards" : [

                {

                        "_id" : "shard1",

                        "host" : "shard1/192.168.211.217:10001,192.168.211.217:10002,192.168.211.217:10003"

                },

                {

                        "_id" : "shard2",

                        "host" : "shard2/192.168.211.217:20001,192.168.211.217:20002,192.168.211.217:20003"

                },

                {

                        "_id" : "shard3",

                        "host" : "shard3/192.168.211.217:30001,192.168.211.217:30002,192.168.211.217:30003"

                }

        ],

        "ok" : 1

}
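The sh helper offers an equivalent shorthand for adding a shard:

mongos> sh.addShard("shard2/192.168.211.217:20001")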

Enable sharding for the database and its collections

Enable sharding on a database with this command:

> db.runCommand({ enablesharding : "<dbname>" })

Running this command allows the database to span shards; if you skip it, the database is stored on a single shard only.

Once database sharding is enabled, different collections within the database can be placed on different shards, but each individual collection still lives on a single shard. To spread one collection across shards, it must be sharded separately, as shown below.

#Example: enable sharding on the test database; connect to a mongos process

~]# /usr/local/mongodb/bin/mongo --port 50000

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:50000/test

mongos> use admin

switched to db admin

mongos> db.runCommand({"enablesharding":"test"})

{ "ok" : 1 }

To shard an individual collection, assign it a shard key using the following command:

> db.runCommand({ shardcollection : "<namespace>", key : <shard-key-pattern> })

Note:  a. The system automatically builds an index on the shard key of a sharded collection (the user may also create it in advance; see the sketch below).

         b. A sharded collection may have only one unique index, and it must be on the shard key; no other unique indexes are allowed.
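To illustrate note a: if the collection were to be sharded on its id field instead of _id, the supporting index could be built ahead of time (a sketch only; the lab below shards on _id):

mongos> use test

switched to db test

mongos> db.yujx.ensureIndex({ "id" : 1 })

mongos> use admin

switched to db admin

mongos> db.runCommand({ "shardcollection" : "test.yujx", "key" : { "id" : 1 } })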

#Shard the collection test.yujx

mongos> use admin

switched to db admin

mongos> db.runCommand({"shardcollection":"test.yujx","key":{"_id":1}})

{ "collectionsharded" : "test.yujx", "ok" : 1 }

Generate test data

mongos> use test

#Generate test data; this volume is only for demonstration, you need not insert this many documents

mongos> for(var i=1;i<=888888;i++) db.yujx.save({"id":i,"a":123456789,"b":888888888,"c":100000000})

mongos> db.yujx.count()

271814             #the number of test documents present in this run

View the collection's sharding

mongos> db.yujx.stats()

{

        "sharded" : true,

        "systemFlags" : 1,

        "userFlags" : 1,

        "ns" : "test.yujx",

        "count" : 271814,

        "numExtents" : 19,

        "size" : 30443168,

        "storageSize" : 51773440,

        "totalIndexSize" : 8862784,

        "indexSizes" : {

                "_id_" : 8862784

        },

        "avgObjSize" : 112,

        "nindexes" : 1,

        "nchunks" : 4,

        "shards" : {

                "shard1" : {

                        "ns" : "test.yujx",

                        "count" : 85563,

                        "size" : 9583056,

                        "avgObjSize" : 112,

                        "storageSize" : 11182080,

                        "numExtents" : 6,

                        "nindexes" : 1,

                        "lastExtentSize" : 8388608,

                        "paddingFactor" : 1,

                        "systemFlags" : 1,

                        "userFlags" : 1,

                        "totalIndexSize" : 2796192,

                        "indexSizes" : {

                                "_id_" : 2796192

                        },

                        "ok" : 1

                },

                "shard2" : {

                        "ns" : "test.yujx",

                        "count" : 180298,

                        "size" : 20193376,

                        "avgObjSize" : 112,

                        "storageSize" : 37797888,

                        "numExtents" : 8,

                        "nindexes" : 1,

                        "lastExtentSize" : 15290368,

                        "paddingFactor" : 1,

                        "systemFlags" : 1,

                        "userFlags" : 1,

                        "totalIndexSize" : 5862192,

                        "indexSizes" : {

                                "_id_" : 5862192

                        },

                        "ok" : 1

                },

                "shard3" : {

                        "ns" : "test.yujx",

                        "count" : 5953,

                        "size" : 666736,

                        "avgObjSize" : 112,

                        "storageSize" : 2793472,

                        "numExtents" : 5,

                        "nindexes" : 1,

                        "lastExtentSize" : 2097152,

                        "paddingFactor" : 1,

                        "systemFlags" : 1,

                        "userFlags" : 1,

                        "totalIndexSize" : 204400,

                        "indexSizes" : {

                                "_id_" : 204400

                        },

                        "ok" : 1

                }

        },

        "ok" : 1

}

#Now connect to each shard's primary and query the records each one holds (the original screenshot is not reproduced; a sketch follows):
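A sketch of the check against shard1's primary (the 85563 figure comes from the stats output above; shard2 and shard3 are checked the same way on ports 20001 and 30001):

~]# /usr/local/mongodb/bin/mongo --port 10001

shard1:PRIMARY> use test

switched to db test

shard1:PRIMARY> db.yujx.count()

85563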


The per-shard counts match the collection sharding stats shown above.

A further note: secondary nodes refuse queries by default; run rs.slaveOk() in the shell session first (see the sketch below).
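For example, against one of shard1's secondaries (port 10002 here), the first read fails until rs.slaveOk() is run:

~]# /usr/local/mongodb/bin/mongo --port 10002

shard1:SECONDARY> db.getSiblingDB("test").yujx.findOne()

Error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

shard1:SECONDARY> rs.slaveOk()

shard1:SECONDARY> db.getSiblingDB("test").yujx.findOne()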

View the database's sharding

mongos> db.printShardingStatus()

#(output not reproduced; it lists the shards, which databases are partitioned, and the chunk distribution)
#Or query the config database through mongos directly

mongos> use config

switched to db config

mongos> db.shards.find()

{ "_id" : "shard1", "host" : "shard1/192.168.211.217:10001,192.168.211.217:10002,192.168.211.217:10003" }

{ "_id" : "shard2", "host" : "shard2/192.168.211.217:20001,192.168.211.217:20002,192.168.211.217:20003" }

{ "_id" : "shard3", "host" : "shard3/192.168.211.217:30001,192.168.211.217:30002,192.168.211.217:30003" }

mongos> db.databases.find()

{ "_id" : "admin", "partitioned" : false, "primary" : "config" }

{ "_id" : "test", "partitioned" : true, "primary" : "shard3" }

mongos> db.chunks.find()

{ "_id" : "test.yujx-_id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : ObjectId("54b8b58ddb2797bbb973a718") }, "shard" : "shard1" }

{ "_id" : "test.yujx-_id_ObjectId('54b8b58ddb2797bbb973a718')", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b58ddb2797bbb973a718") }, "max" : { "_id" : ObjectId("54b8b599db2797bbb973be59") }, "shard" : "shard3" }

{ "_id" : "test.yujx-_id_ObjectId('54b8b599db2797bbb973be59')", "lastmod" : Timestamp(4, 1), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b599db2797bbb973be59") }, "max" : { "_id" : ObjectId("54b8b6cfdb2797bbb9767ea2") }, "shard" : "shard2" }

{ "_id" : "test.yujx-_id_ObjectId('54b8b6cfdb2797bbb9767ea2')", "lastmod" : Timestamp(4, 0), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b6cfdb2797bbb9767ea2") }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard1" }

Single-point-of-failure analysis

Since this lab was done to get acquainted with mongodb, and simulating every failure scenario is time-consuming, the scenarios are not walked through one by one here. For a failure-scenario analysis, see:
http://blog.itpub.net/27000195/viewspace-1404402/

From the "ITPUB blog"; original link: http://blog.itpub.net/27000195/viewspace-1404405/. Please credit the source when reposting; otherwise legal responsibility may be pursued.
