MongoDB Sharding

Posted by stonebox1122 on 2017-08-24
1. Environment
Operating system information:

IP              OS           MongoDB
10.163.97.15    RHEL6.5_x64  mongodb-linux-x86_64-rhel62-3.4.7.tgz
10.163.97.16    RHEL6.5_x64  mongodb-linux-x86_64-rhel62-3.4.7.tgz
10.163.97.17    RHEL6.5_x64  mongodb-linux-x86_64-rhel62-3.4.7.tgz


Server plan:

10.163.97.15              10.163.97.16              10.163.97.17              Port
mongos                    mongos                    mongos                    20000
config server             config server             config server             21000
shard server1 primary     shard server1 secondary   shard server1 arbiter     27001
shard server2 arbiter     shard server2 primary     shard server2 secondary   27002
shard server3 secondary   shard server3 arbiter     shard server3 primary     27003


As the table above shows, there are four components: mongos, config server, shard, and replica set.
mongos: the entry point for all requests to the cluster. Every request is coordinated through mongos, so the application does not need its own routing layer; mongos itself acts as a request dispatcher, forwarding each operation to the appropriate shard. In production there are usually multiple mongos instances serving as entry points, so that the failure of one does not leave the whole cluster unreachable.
config server: as the name suggests, the configuration servers store all of the cluster metadata (routing and shard configuration). mongos has no persistent copy of the shard and routing information, only an in-memory cache; the config servers are where that data actually lives. When mongos starts for the first time, or restarts, it loads its configuration from the config servers, and when the configuration later changes the config servers notify every mongos to update its state, so routing stays accurate. In production there are multiple config servers (they must be configured as 1 or 3), because they hold the sharding metadata and losing it would be disastrous.
shard: sharding is the process of splitting a database and spreading it across different machines. By distributing the data over multiple machines, you can store more data and handle more load without a single powerful server. The basic idea is to cut collections into small chunks, spread those chunks over several shards so that each shard holds only part of the total data, and then let a balancer keep the shards balanced (by migrating chunks).
replica set: a replica set is essentially a backup of a shard, protecting against data loss if a shard member goes down. Replication provides redundant copies of the data on multiple servers, improving availability and keeping the data safe.
Arbiter: an arbiter is a MongoDB instance in a replica set that stores no data. Arbiter nodes use minimal resources. Do not deploy an arbiter on the same node as a data-bearing member; it can run on an application server, a monitoring host, or a separate VM. Arbiters are added so that the replica set has an odd number of voting members (including the primary); without that, a new primary cannot be elected automatically when the current primary fails.
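If a replica set is already running without an arbiter, one can also be added afterwards with the rs.addArb() shell helper; a minimal sketch, using the shard1 arbiter host from the plan above:
> rs.addArb("10.163.97.17:27001")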
With that overview, we can summarize: applications send all reads and writes through mongos; the config servers store the cluster metadata and keep the mongos instances in sync; the data itself ends up on the shards, with a copy kept in each shard's replica set to guard against loss; and the arbiter votes in elections so that a new primary can be chosen when one fails.
In total the plan calls for 3 mongos, 3 config servers, and the data split into 3 shards, where each shard also has one secondary and one arbiter (3 * 2 = 6), for 15 instances overall. These instances could be deployed on dedicated machines, but since test resources are limited here, only 3 machines are available; instances on the same machine just need different ports.
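Once the cluster is up, an application would normally list all three mongos instances in its driver connection string, so that losing one entry point is transparent to the client; a sketch of the standard URI form (the database name testdb is just an example):
mongodb://10.163.97.15:20000,10.163.97.16:20000,10.163.97.17:20000/testdb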


2. Install MongoDB
Install MongoDB on each of the three machines:
[root@D2-POMS15 ~]# tar -xvzf mongodb-linux-x86_64-rhel62-3.4.7.tgz -C /usr/local/
[root@D2-POMS15 ~]# mv /usr/local/mongodb-linux-x86_64-rhel62-3.4.7/ /usr/local/mongodb

Configure the environment variables:
[root@D2-POMS15 ~]# vim .bash_profile
export PATH=$PATH:/usr/local/mongodb/bin/
[root@D2-POMS15 ~]# source .bash_profile


On each machine, create six directories: conf, mongos, config, shard1, shard2, shard3. Since mongos stores no data, it only needs a log directory.
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/conf
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/mongos/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/config/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard1/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard2/log
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/data
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/shard3/log
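
The same directory tree can also be created in one command with shell brace expansion, if you prefer:
[root@D2-POMS15 ~]# mkdir -p /usr/local/mongodb/{conf,mongos/log,config/{data,log},shard1/{data,log},shard2/{data,log},shard3/{data,log}}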


3. Config servers
From MongoDB 3.4 onward, the config servers must themselves be deployed as a replica set; otherwise the cluster setup will fail.
Add the configuration file on all three servers:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/config.conf
## config file contents
pidfilepath = /usr/local/mongodb/config/log/configsrv.pid
dbpath = /usr/local/mongodb/config/data
logpath = /usr/local/mongodb/config/log/configsrv.log
logappend = true

bind_ip = 0.0.0.0
port = 21000
fork = true

#declare this is a config db of a cluster;
configsvr = true

#replica set name
replSet=configs

#maximum number of connections
maxConns=20000
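
As a side note, mongod also accepts the newer YAML configuration format; a rough YAML equivalent of the file above (a sketch, not tested here) would be:
systemLog:
  destination: file
  path: /usr/local/mongodb/config/log/configsrv.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: /usr/local/mongodb/config/log/configsrv.pid
net:
  bindIp: 0.0.0.0
  port: 21000
  maxIncomingConnections: 20000
storage:
  dbPath: /usr/local/mongodb/config/data
sharding:
  clusterRole: configsvr
replication:
  replSetName: configs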

Start the config server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15368
child process started successfully, parent exiting

Log in to any one of the config servers and initialize the config replica set:
[root@D2-POMS15 ~]# mongo --port 21000
> config = {
...    _id : "configs",
...     members : [
...         {_id : 0, host : "10.163.97.15:21000" },
...         {_id : 1, host : "10.163.97.16:21000" },
...         {_id : 2, host : "10.163.97.17:21000" }
...     ]
... }
{
        "_id" : "configs",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:21000"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:21000"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:21000"
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

其中,"_id" : "configs"應與配置檔案中配置的replSet一致,"members" 中的 "host" 為三個節點的 ip 和 port。

4. Configure the shard replica sets (on all three machines)
Set up the first shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard1.conf
#config file contents
pidfilepath = /usr/local/mongodb/shard1/log/shard1.pid
dbpath = /usr/local/mongodb/shard1/data
logpath = /usr/local/mongodb/shard1/log/shard1.log
logappend = true

bind_ip = 0.0.0.0
port = 27001
fork = true

#enable the HTTP status interface
httpinterface=true
rest=true

#replica set name
replSet=shard1

#declare this is a shard db of a cluster;
shardsvr = true

#maximum number of connections
maxConns=20000

Start the shard1 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15497
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:
[root@D2-POMS15 ~]# mongo --port 27001
#switch to the admin database
> use admin
switched to db admin
#define the replica set configuration; "arbiterOnly": true on the third node marks it as the arbiter
> config = {
...    _id : "shard1",
...     members : [
...         {_id : 0, host : "10.163.97.15:27001" },
...         {_id : 1, host : "10.163.97.16:27001" },
...         {_id : 2, host : "10.163.97.17:27001" , arbiterOnly: true }
...     ]
... }
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27001"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27001"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27001",
                        "arbiterOnly" : true
                }
        ]
}
#initialize the replica set
> rs.initiate(config);
{ "ok" : 1 }


Set up the second shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard2.conf
#config file contents
pidfilepath = /usr/local/mongodb/shard2/log/shard2.pid
dbpath = /usr/local/mongodb/shard2/data
logpath = /usr/local/mongodb/shard2/log/shard2.log
logappend = true

bind_ip = 0.0.0.0
port = 27002
fork = true

#enable the HTTP status interface
httpinterface=true
rest=true

#replica set name
replSet=shard2

#declare this is a shard db of a cluster;
shardsvr = true

#maximum number of connections
maxConns=20000

Start the shard2 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15622
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:
[root@D2-POMS15 ~]# mongo --port 27002
> use admin
switched to db admin
> config = {
...    _id : "shard2",
...     members : [
...         {_id : 0, host : "10.163.97.15:27002"  , arbiterOnly: true },
...         {_id : 1, host : "10.163.97.16:27002" },
...         {_id : 2, host : "10.163.97.17:27002" }
...     ]
... }
{
        "_id" : "shard2",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27002",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27002"
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27002"
                }
        ]
}
> rs.initiate(config);
{ "ok" : 1 }

Set up the third shard replica set.
Add the configuration file:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/shard3.conf
#config file contents
pidfilepath = /usr/local/mongodb/shard3/log/shard3.pid
dbpath = /usr/local/mongodb/shard3/data
logpath = /usr/local/mongodb/shard3/log/shard3.log
logappend = true

bind_ip = 0.0.0.0
port = 27003
fork = true

#enable the HTTP status interface
httpinterface=true
rest=true

#replica set name
replSet=shard3

#declare this is a shard db of a cluster;
shardsvr = true

#maximum number of connections
maxConns=20000

Start the shard3 server on all three machines:
[root@D2-POMS15 ~]# mongod -f /usr/local/mongodb/conf/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 15742
child process started successfully, parent exiting

Log in to one of the servers (not the arbiter node) and initialize the replica set:
[root@D2-POMS15 ~]# mongo --port 27003
> use admin
switched to db admin
> config = {
...    _id : "shard3",
...     members : [
...         {_id : 0, host : "10.163.97.15:27003" },
...         {_id : 1, host : "10.163.97.16:27003" , arbiterOnly: true},
...         {_id : 2, host : "10.163.97.17:27003" }
...     ]
... }
{
        "_id" : "shard3",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.163.97.15:27003"
                },
                {
                        "_id" : 1,
                        "host" : "10.163.97.16:27003",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 2,
                        "host" : "10.163.97.17:27003"
                }
        ]
}
> rs.initiate(config);
{ "ok" : 1 }

At this point the config server and the shard servers are all up and running:
[root@D2-POMS15 ~]# ps -ef | grep mongo | grep -v grep
root     15368     1  0 15:52 ?        00:00:07 mongod -f /usr/local/mongodb/conf/config.conf
root     15497     1  0 16:00 ?        00:00:04 mongod -f /usr/local/mongodb/conf/shard1.conf
root     15622     1  0 16:06 ?        00:00:02 mongod -f /usr/local/mongodb/conf/shard2.conf
root     15742     1  0 16:21 ?        00:00:00 mongod -f /usr/local/mongodb/conf/shard3.conf


5. Configure the mongos routers
Add the configuration file on all three servers:
[root@D2-POMS15 ~]# vi /usr/local/mongodb/conf/mongos.conf
#config file contents
pidfilepath = /usr/local/mongodb/mongos/log/mongos.pid
logpath = /usr/local/mongodb/mongos/log/mongos.log
logappend = true

bind_ip = 0.0.0.0
port = 20000
fork = true

#the config servers to connect to (there can only be 1 or 3); configs is the name of the config server replica set
configdb = configs/10.163.97.15:21000,10.163.97.16:21000,10.163.97.17:21000

#maximum number of connections
maxConns=20000

Start the mongos server on all three machines:
[root@D2-POMS15 ~]# mongos -f /usr/local/mongodb/conf/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 20563
child process started successfully, parent exiting

[root@D2-POMS15 ~]# mongo --port 20000
mongos> db.stats()
{
        "raw" : {
                "shard1/10.163.97.15:27001,10.163.97.16:27001" : {
                        "db" : "admin",
                        "collections" : 1,
                        "views" : 0,
                        "objects" : 3,
                        "avgObjSize" : 146.66666666666666,
                        "dataSize" : 440,
                        "storageSize" : 36864,
                        "numExtents" : 0,
                        "indexes" : 2,
                        "indexSize" : 65536,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("7fffffff0000000000000001")
                        }
                },
                "shard2/10.163.97.16:27002,10.163.97.17:27002" : {
                        "db" : "admin",
                        "collections" : 1,
                        "views" : 0,
                        "objects" : 2,
                        "avgObjSize" : 114,
                        "dataSize" : 228,
                        "storageSize" : 16384,
                        "numExtents" : 0,
                        "indexes" : 2,
                        "indexSize" : 32768,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("7fffffff0000000000000001")
                        }
                },
                "shard3/10.163.97.15:27003,10.163.97.17:27003" : {
                        "db" : "admin",
                        "collections" : 1,
                        "views" : 0,
                        "objects" : 2,
                        "avgObjSize" : 114,
                        "dataSize" : 228,
                        "storageSize" : 16384,
                        "numExtents" : 0,
                        "indexes" : 2,
                        "indexSize" : 32768,
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("7fffffff0000000000000002")
                        }
                }
        },
        "objects" : 7,
        "avgObjSize" : 127.71428571428571,
        "dataSize" : 896,
        "storageSize" : 69632,
        "numExtents" : 0,
        "indexes" : 6,
        "indexSize" : 131072,
        "fileSize" : 0,
        "extentFreeList" : {
                "num" : 0,
                "totalSize" : 0
        },
        "ok" : 1
}


6. Enable sharding
The config servers, the mongos routers, and the shard servers are all running at this point, but an application connecting to mongos cannot take advantage of sharding yet; the shard configuration still has to be set up on the router for sharding to take effect.
Log in to any mongos:
[root@D2-POMS15 ~]# mongo --port 20000
#switch to the admin database
mongos> use admin
switched to db admin
#register the shard replica sets with the router
mongos> sh.addShard("shard1/10.163.97.15:27001,10.163.97.16:27001,10.163.97.17:27001")
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/10.163.97.15:27002,10.163.97.16:27002,10.163.97.17:27002")
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.addShard("shard3/10.163.97.15:27003,10.163.97.16:27003,10.163.97.17:27003")
{ "shardAdded" : "shard3", "ok" : 1 }
#check the cluster status
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
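
Besides sh.status(), the registered shards can be listed directly with an admin command, which is handy in scripts:
mongos> db.adminCommand({ listShards: 1 })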


7. Testing
The config service, routing service, shard services, and replica sets are now all wired together, but the goal is for inserted data to be sharded automatically. Connect to mongos and enable sharding for a specific database and a specific collection.
[root@D2-POMS15 ~]# mongo --port 20000
mongos> use admin
switched to db admin
#enable sharding for the testdb database
mongos> db.runCommand({enablesharding :"testdb"});
{ "ok" : 1 }
#specify the collection to shard and its shard key
mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                4 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)

The above configures the table1 collection in testdb to be sharded on id across shard1, shard2, and shard3. It has to be configured explicitly like this because not every MongoDB database and collection needs to be sharded.
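Incidentally, the two runCommand calls above have shell helper equivalents that do the same thing and read a little more clearly:
mongos> sh.enableSharding("testdb")
mongos> sh.shardCollection("testdb.table1", { id: 1 })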

Test the sharding:
#connect to a mongos server
[root@D2-POMS15 ~]# mongo --port 20000
#switch to testdb
mongos> use testdb
switched to db testdb
#insert test data
mongos> for(var i=1;i<=100000;i++){db.table1.insert({id:i,"test1":"testval1"})}
WriteResult({ "nInserted" : 1 })
#check how the data was sharded
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                6 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                                shard2  1
                                shard3  1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(2, 0)
                        { "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(3, 0)
                        { "id" : 20 } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 1)

You can see the distribution is very uneven: the chunks split at id 2 and 20, so almost all of the documents land on shard1. The reason is that the default chunkSize is 64MB and this data set never came close to 64MB. To make testing easier, reduce the chunkSize:
[root@D2-POMS15 ~]# mongo --port 20000
mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 1 } )
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> db.settings.find();
{ "_id" : "balancer", "stopped" : false, "mode" : "full" }
{ "_id" : "chunksize", "value" : 1 }

Test again after the change:
mongos> use testdb
switched to db testdb
mongos> db.table1.drop();
true
mongos> use admin
switched to db admin
mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb
switched to db testdb
mongos> for(var i=1;i<=100000;i++){db.table1.insert({id:i,"test1":"testval1"})}
WriteResult({ "nInserted" : 1 })
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("599d34bf612249caec3fc9fe")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.163.97.15:27001,10.163.97.16:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.163.97.16:27002,10.163.97.17:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.163.97.15:27003,10.163.97.17:27003",  "state" : 1 }
  active mongoses:
        "3.4.7" : 1
 autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
                Balancer lock taken at Wed Aug 23 2017 15:54:40 GMT+0800 (CST) by ConfigServer:Balancer
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                14 : Success
  databases:
        {  "_id" : "testdb",  "primary" : "shard1",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  4
                                shard2  4
                                shard3  3
                        { "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : shard2 Timestamp(5, 1)
                        { "id" : 2 } -->> { "id" : 20 } on : shard3 Timestamp(6, 1)
                        { "id" : 20 } -->> { "id" : 9729 } on : shard1 Timestamp(7, 1)
                        { "id" : 9729 } -->> { "id" : 21643 } on : shard1 Timestamp(3, 3)
                        { "id" : 21643 } -->> { "id" : 31352 } on : shard2 Timestamp(4, 2)
                        { "id" : 31352 } -->> { "id" : 43021 } on : shard2 Timestamp(4, 3)
                        { "id" : 43021 } -->> { "id" : 52730 } on : shard3 Timestamp(5, 2)
                        { "id" : 52730 } -->> { "id" : 64695 } on : shard3 Timestamp(5, 3)
                        { "id" : 64695 } -->> { "id" : 74404 } on : shard1 Timestamp(6, 2)
                        { "id" : 74404 } -->> { "id" : 87088 } on : shard1 Timestamp(6, 3)
                        { "id" : 87088 } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(7, 0)

mongos> db.table1.stats()
{
        "sharded" : true,
        "capped" : false,
        "ns" : "testdb.table1",
        "count" : 100000,
        "size" : 5400000,
        "storageSize" : 1736704,
        "totalIndexSize" : 2191360,
        "indexSizes" : {
                "_id_" : 946176,
                "id_1" : 1245184
        },
        "avgObjSize" : 54,
        "nindexes" : 2,
        "nchunks" : 11,
        "shards" : {
                "shard1" : {
                        "ns" : "testdb.table1",
                        "size" : 2376864,
                        "count" : 44016,
                        "avgObjSize" : 54,
                        "storageSize" : 753664,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 933888,
                        "indexSizes" : {
                                "_id_" : 405504,
                                "id_1" : 528384
                        },
                        "ok" : 1
                },
                "shard2" : {
                        "ns" : "testdb.table1",
                        "size" : 1851768,
                        "count" : 34292,
                        "avgObjSize" : 54,
                        "storageSize" : 606208,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 774144,
                        "indexSizes" : {
                                "_id_" : 335872,
                                "id_1" : 438272
                        },
                        "ok" : 1
                },
                "shard3" : {
                        "ns" : "testdb.table1",
                        "size" : 1171368,
                        "count" : 21692,
                        "avgObjSize" : 54,
                        "storageSize" : 376832,
                        "capped" : false,
                        "nindexes" : 2,
                        "totalIndexSize" : 483328,
                        "indexSizes" : {
                                "_id_" : 204800,
                                "id_1" : 278528
                        },
                        "ok" : 1
                }
        },
        "ok" : 1
}

You can see that the data distribution is now much more even.
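For a per-shard summary that is easier to read than the full stats output, the getShardDistribution() helper prints each shard's document count and percentage:
mongos> use testdb
mongos> db.table1.getShardDistribution()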


8. Day-to-day operations
Startup and shutdown
The startup order is: config servers first, then the shards, and finally mongos:
mongod -f /usr/local/mongodb/conf/config.conf
mongod -f /usr/local/mongodb/conf/shard1.conf
mongod -f /usr/local/mongodb/conf/shard2.conf
mongod -f /usr/local/mongodb/conf/shard3.conf
mongos -f /usr/local/mongodb/conf/mongos.conf
To shut down, simply kill all the processes with killall:
killall mongod
killall mongos
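
killall is fine for a test environment, but a cleaner option is a graceful shutdown, either from the shell or (for mongod only; mongos has no such flag) with --shutdown against the same config file; a sketch for one node:
#stop an instance from the mongo shell
mongo --port 27001 admin --eval "db.shutdownServer()"
#or reuse the config file (mongod processes only)
mongod -f /usr/local/mongodb/conf/shard1.conf --shutdown
When stopping the whole cluster, the usual practice is the reverse of the startup order: mongos first, then the shards, then the config servers.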

Reference:


http://www.cnblogs.com/ityouknow/p/7344005.html

