MongoDB 3.2.7 for RHEL 6.4: replica set + sharded cluster deployment
Today a colleague reported a problem while deploying a MongoDB replica-set/sharded cluster: he believed shard initialization had to use hostnames (i.e., it required something equivalent to DNS resolution), which would make the cluster depend on a single DNS server and so introduce a single point of failure. To verify this, I deployed a MongoDB 3.2.7 replica-set/sharded cluster on RHEL 6.4 using IP addresses only. The result: if you initialize the replica sets and the shards with IP addresses, use IPs consistently everywhere; if you use hosts-file or DNS resolution, use hostnames consistently everywhere. Either way, the cluster deploys successfully. My own recommendation is to use DNS or hosts-file resolution, because hostnames can stay fixed while a host's IP address is quite likely to change.
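If you take the hosts-file route, the name resolution the cluster needs can be provided locally on every node, with no external DNS server involved. For this lab's three machines the entries would be (illustrative, matching the IPs and hostnames used in this lab):

```
192.168.144.111    arbiter
192.168.144.120    mongo1
192.168.144.130    mongo2
```

With identical entries in /etc/hosts on all three nodes, replica-set and shard initialization can then use hostnames throughout.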
The deployment of a MongoDB 3.2.7 replica-set/sharded cluster on RHEL 6.4 proceeds as follows.
First, make sure the RHEL 6.4 environment supports installing 3.2.7; the single-instance installation procedure and the problems you may hit are covered in:
MongoDB 3.2 for RHEL6.4 installation (http://blog.itpub.net/29357786/viewspace-2119891/)
This experiment involves three servers:
Role: replica-set arbiter / sharded-cluster config server (192.168.144.111)
[root@arbiter ~]# hostname
arbiter
[root@arbiter ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@arbiter ~]#
Role: primary of replica set firstset / shard 1 of the cluster (192.168.144.130)
[root@mongo2 ~]# hostname
mongo2
[root@mongo2 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@mongo2 ~]#
Role: primary of replica set secondset / shard 2 of the cluster (192.168.144.120)
[root@mongo1 ~]# hostname
mongo1
[root@mongo1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@mongo1 ~]#
Directories to create on the arbiter / config server (192.168.144.111)
#Data directories
/opt/mongo/data/dns_arbiter1
/opt/mongo/data/dns_arbiter2
/opt/mongo/data/dns_sdconfig1
/opt/mongo/data/dns_sdconfig2
#Log directories
/opt/mongo/logs/dns_aribter1
/opt/mongo/logs/dns_aribter2
/opt/mongo/logs/dns_config1
/opt/mongo/logs/dns_config2
Directories to create on the firstset primary / shard 1 (192.168.144.130)
#Data directories
/opt/mongo/data/dns_repset1
/opt/mongo/data/dns_repset2
/opt/mongo/data/dns_shard2
#Log directories
/opt/mongo/logs/dns_sd2
/opt/mongo/logs/firstset
/opt/mongo/logs/secondset
Directories to create on the secondset primary / shard 2 (192.168.144.120)
#Data directories
/opt/mongo/data/dns_repset1
/opt/mongo/data/dns_repset2
/opt/mongo/data/dns_shard1
#Log directories
/opt/mongo/logs/dns_sd1
/opt/mongo/logs/firstset
/opt/mongo/logs/secondset
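The directory trees above can be created in one pass. A minimal sketch, shown for the arbiter/config-server node (the other two nodes follow the same pattern with their own names); the base path is shortened to a sandbox directory so the sketch runs unprivileged, whereas on the real nodes the base is /opt/mongo and the dns_aribter spelling follows the log paths used in the commands below:

```python
import os

# Sandbox base so the sketch runs unprivileged; on the real node this is /opt/mongo.
BASE = "/tmp/mongo_dirs_demo"

# Directories for the arbiter / config-server node (192.168.144.111).
data_dirs = ["dns_arbiter1", "dns_arbiter2", "dns_sdconfig1", "dns_sdconfig2"]
log_dirs = ["dns_aribter1", "dns_aribter2", "dns_config1", "dns_config2"]

for name in data_dirs:
    os.makedirs(os.path.join(BASE, "data", name), exist_ok=True)
for name in log_dirs:
    os.makedirs(os.path.join(BASE, "logs", name), exist_ok=True)

print(sorted(os.listdir(os.path.join(BASE, "data"))))
```

On the real nodes, the equivalent mkdir -p commands should be run as the user that will run mongod.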
Step 1: initialize replica set 1 (firstset)
#Start the mongod instances for replica set 1; commands to run on the three nodes
Command on arbiter
mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb
Command on mongo1
mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
Command on mongo2
mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
#Logs from running the commands on the three nodes
[mongo@arbiter logs]$ mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:14.653-0800 I CONTROL [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:14.653-0800 I CONTROL [main] ** enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 6566
child process started successfully, parent exiting
[mongo@arbiter logs]$
[mongo@mongo1 logs]$ mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:26.838-0800 I CONTROL [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:26.838-0800 I CONTROL [main] ** enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 10478
child process started successfully, parent exiting
[mongo@mongo1 logs]$
[mongo@mongo2 logs]$ mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:43.808-0800 I CONTROL [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:43.808-0800 I CONTROL [main] ** enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 6374
child process started successfully, parent exiting
[mongo@mongo2 logs]$
#Initialize replica set 1 (firstset)
#Commands to run on mongo2
config={_id:"firstset",members:[]}
config.members.push({_id:0,host:"192.168.144.120:10001"})
config.members.push({_id:1,host:"192.168.144.130:10001"})
config.members.push({_id:2,host:"192.168.144.111:10001",arbiterOnly:true})
rs.initiate(config);
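The four statements above just assemble a plain JSON document before handing it to rs.initiate. Outside the mongo shell, the same document can be sketched in Python (illustrative only; the actual deployment uses the mongo shell statements above):

```python
import json

# Assemble the same replica-set configuration document that the
# config.members.push(...) statements build in the mongo shell.
config = {"_id": "firstset", "members": []}
config["members"].append({"_id": 0, "host": "192.168.144.120:10001"})
config["members"].append({"_id": 1, "host": "192.168.144.130:10001"})
config["members"].append({"_id": 2, "host": "192.168.144.111:10001", "arbiterOnly": True})

# rs.initiate(config) receives exactly this document.
print(json.dumps(config, indent=2))
```

A hosts/DNS-based deployment would pass the same document with hostnames in place of the IP addresses.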
#Log of the commands run on mongo2
[mongo@mongo2 logs]$ mongo --port 10001
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:10001/test
Server has startup warnings:
2016-11-14T18:53:43.808-0800 I CONTROL [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:43.808-0800 I CONTROL [main] ** enabling http interface
2016-11-14T18:53:43.921-0800 I CONTROL [initandlisten]
2016-11-14T18:53:43.921-0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-11-14T18:53:43.921-0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-11-14T18:53:43.921-0800 I CONTROL [initandlisten]
2016-11-14T18:53:43.921-0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-11-14T18:53:43.921-0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-11-14T18:53:43.921-0800 I CONTROL [initandlisten]
> config={_id:"firstset",members:[]}
{ "_id" : "firstset", "members" : [ ] }
> config.members.push({_id:0,host:"192.168.144.120:10001"})
1
> config.members.push({_id:1,host:"192.168.144.130:10001"})
2
> config.members.push({_id:2,host:"192.168.144.111:10001",arbiterOnly:true})
3
> rs.initiate(config);
{ "ok" : 1 }
firstset:OTHER> use dns_testdb
switched to db dns_testdb
firstset:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.144.120:10001",
"192.168.144.130:10001"
],
"arbiters" : [
"192.168.144.111:10001"
],
"setName" : "firstset",
"setVersion" : 1,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.144.130:10001",
"me" : "192.168.144.130:10001",
"electionId" : ObjectId("7fffffff0000000000000001"),
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2016-11-15T03:07:06.392Z"),
"maxWireVersion" : 4,
"minWireVersion" : 0,
"ok" : 1
}
#Insert 100,000 documents into the firstset replica set via mongo2
firstset:OTHER> use dns_testdb
switched to db dns_testdb
firstset:PRIMARY> animal = ["dog", "tiger", "cat", "lion", "elephant", "bird", "horse", "pig", "rabbit", "cow", "dragon", "snake"];
[
"dog",
"tiger",
"cat",
"lion",
"elephant",
"bird",
"horse",
"pig",
"rabbit",
"cow",
"dragon",
"snake"
]
firstset:PRIMARY> for(var i=0; i<100000; i++){
... name = animal[Math.floor(Math.random()*animal.length)];
... user_id = i;
... boolean = [true, false][Math.floor(Math.random()*2)];
... added_at = new Date();
... number = Math.floor(Math.random()*10001);
... db.test_collection.save({"name":name, "user_id":user_id, "boolean": boolean, "added_at":added_at, "number":number });
... }
WriteResult({ "nInserted" : 1 })
firstset:PRIMARY> show collections
test_collection
firstset:PRIMARY> db.test_collection.findOne();
{
"_id" : ObjectId("582a7c095490e553bc98919e"),
"name" : "snake",
"user_id" : 0,
"boolean" : false,
"added_at" : ISODate("2016-11-15T03:07:53.561Z"),
"number" : 746
}
firstset:PRIMARY> db.test_collection.count();
100000
firstset:PRIMARY> show dbs
dns_testdb 0.004GB
local 0.006GB
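The insert loop above builds each document from a random animal name, a random boolean, a timestamp, and a random number. The same generator can be sketched in Python without a MongoDB connection (count reduced to 10 documents; field names match the collection above):

```python
import random
from datetime import datetime

animal = ["dog", "tiger", "cat", "lion", "elephant", "bird",
          "horse", "pig", "rabbit", "cow", "dragon", "snake"]

docs = []
for i in range(10):  # the walkthrough inserts 100000
    docs.append({
        "name": random.choice(animal),
        "user_id": i,
        "boolean": random.choice([True, False]),
        "added_at": datetime.now(),
        "number": random.randint(0, 10000),  # matches Math.floor(Math.random()*10001)
    })

print(len(docs), docs[0]["user_id"], docs[-1]["user_id"])
```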
Step 2: initialize the sharded cluster
#Start the config server processes; commands to run on the three nodes
arbiter
mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend
mongo1
mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend
mongo2
mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend
#Enable sharding on dns_testdb and shard test_collection from a mongos (the mongos startup and addShard steps are logged in full below)
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> sh.enableSharding("dns_testdb");
{ "ok" : 1 }
mongos> db.runCommand({"shardcollection":"dns_testdb.test_collection","key":{"_id":1}});
{ "collectionsharded" : "dns_testdb.test_collection", "ok" : 1 }
#View the replica set / sharded cluster information
mongos> use config
switched to db config
mongos> db.shards.find();
{ "_id" : "firstset", "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{ "_id" : "secondset", "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
mongos>
mongos> use config
switched to db config
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("582a888bd617c6f7926f8843")
}
shards:
{ "_id" : "firstset", "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{ "_id" : "secondset", "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
active mongoses:
"3.2.7" : 2
balancer:
Currently enabled: yes
Currently running: yes
Balancer lock taken at Mon Nov 14 2016 21:34:45 GMT-0800 (PST) by mongo2:27017:1479182487:368039610:Balancer:970925433
Collections with active migrations:
dns_testdb.test_collection started at Mon Nov 14 2016 21:34:45 GMT-0800 (PST)
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
6 : Success
databases:
{ "_id" : "dns_testdb", "primary" : "firstset", "partitioned" : true }
dns_testdb.test_collection
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
firstset 13
secondset 6
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("582a7c0c5490e553bc98a683") } on : secondset Timestamp(2, 0)
{ "_id" : ObjectId("582a7c0c5490e553bc98a683") } -->> { "_id" : ObjectId("582a7c0e5490e553bc98bb69") } on : secondset Timestamp(3, 0)
{ "_id" : ObjectId("582a7c0e5490e553bc98bb69") } -->> { "_id" : ObjectId("582a7c115490e553bc98d04f") } on : secondset Timestamp(4, 0)
{ "_id" : ObjectId("582a7c115490e553bc98d04f") } -->> { "_id" : ObjectId("582a7c135490e553bc98e535") } on : secondset Timestamp(5, 0)
{ "_id" : ObjectId("582a7c135490e553bc98e535") } -->> { "_id" : ObjectId("582a7c155490e553bc98fa1b") } on : secondset Timestamp(6, 0)
{ "_id" : ObjectId("582a7c155490e553bc98fa1b") } -->> { "_id" : ObjectId("582a7c175490e553bc990f01") } on : secondset Timestamp(7, 0)
{ "_id" : ObjectId("582a7c175490e553bc990f01") } -->> { "_id" : ObjectId("582a7c1a5490e553bc9923e7") } on : firstset Timestamp(7, 1)
{ "_id" : ObjectId("582a7c1a5490e553bc9923e7") } -->> { "_id" : ObjectId("582a7c1c5490e553bc9938cd") } on : firstset Timestamp(1, 7)
{ "_id" : ObjectId("582a7c1c5490e553bc9938cd") } -->> { "_id" : ObjectId("582a7c1e5490e553bc994db3") } on : firstset Timestamp(1, 8)
{ "_id" : ObjectId("582a7c1e5490e553bc994db3") } -->> { "_id" : ObjectId("582a7c215490e553bc996299") } on : firstset Timestamp(1, 9)
{ "_id" : ObjectId("582a7c215490e553bc996299") } -->> { "_id" : ObjectId("582a7c235490e553bc99777f") } on : firstset Timestamp(1, 10)
{ "_id" : ObjectId("582a7c235490e553bc99777f") } -->> { "_id" : ObjectId("582a7c255490e553bc998c65") } on : firstset Timestamp(1, 11)
{ "_id" : ObjectId("582a7c255490e553bc998c65") } -->> { "_id" : ObjectId("582a7c275490e553bc99a14b") } on : firstset Timestamp(1, 12)
{ "_id" : ObjectId("582a7c275490e553bc99a14b") } -->> { "_id" : ObjectId("582a7c2a5490e553bc99b631") } on : firstset Timestamp(1, 13)
{ "_id" : ObjectId("582a7c2a5490e553bc99b631") } -->> { "_id" : ObjectId("582a7c2c5490e553bc99cb17") } on : firstset Timestamp(1, 14)
{ "_id" : ObjectId("582a7c2c5490e553bc99cb17") } -->> { "_id" : ObjectId("582a7c2e5490e553bc99dffd") } on : firstset Timestamp(1, 15)
{ "_id" : ObjectId("582a7c2e5490e553bc99dffd") } -->> { "_id" : ObjectId("582a7c305490e553bc99f4e3") } on : firstset Timestamp(1, 16)
{ "_id" : ObjectId("582a7c305490e553bc99f4e3") } -->> { "_id" : ObjectId("582a7c335490e553bc9a09c9") } on : firstset Timestamp(1, 17)
{ "_id" : ObjectId("582a7c335490e553bc9a09c9") } -->> { "_id" : { "$maxKey" : 1 } } on : firstset Timestamp(1, 18)
mongos>
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("582a888bd617c6f7926f8843")
}
shards:
{ "_id" : "firstset", "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{ "_id" : "secondset", "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
active mongoses:
"3.2.7" : 2
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
9 : Success
databases:
{ "_id" : "dns_testdb", "primary" : "firstset", "partitioned" : true }
dns_testdb.test_collection
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
firstset 10
secondset 9
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("582a7c0c5490e553bc98a683") } on : secondset Timestamp(2, 0)
{ "_id" : ObjectId("582a7c0c5490e553bc98a683") } -->> { "_id" : ObjectId("582a7c0e5490e553bc98bb69") } on : secondset Timestamp(3, 0)
{ "_id" : ObjectId("582a7c0e5490e553bc98bb69") } -->> { "_id" : ObjectId("582a7c115490e553bc98d04f") } on : secondset Timestamp(4, 0)
{ "_id" : ObjectId("582a7c115490e553bc98d04f") } -->> { "_id" : ObjectId("582a7c135490e553bc98e535") } on : secondset Timestamp(5, 0)
{ "_id" : ObjectId("582a7c135490e553bc98e535") } -->> { "_id" : ObjectId("582a7c155490e553bc98fa1b") } on : secondset Timestamp(6, 0)
{ "_id" : ObjectId("582a7c155490e553bc98fa1b") } -->> { "_id" : ObjectId("582a7c175490e553bc990f01") } on : secondset Timestamp(7, 0)
{ "_id" : ObjectId("582a7c175490e553bc990f01") } -->> { "_id" : ObjectId("582a7c1a5490e553bc9923e7") } on : secondset Timestamp(8, 0)
{ "_id" : ObjectId("582a7c1a5490e553bc9923e7") } -->> { "_id" : ObjectId("582a7c1c5490e553bc9938cd") } on : secondset Timestamp(9, 0)
{ "_id" : ObjectId("582a7c1c5490e553bc9938cd") } -->> { "_id" : ObjectId("582a7c1e5490e553bc994db3") } on : secondset Timestamp(10, 0)
{ "_id" : ObjectId("582a7c1e5490e553bc994db3") } -->> { "_id" : ObjectId("582a7c215490e553bc996299") } on : firstset Timestamp(10, 1)
{ "_id" : ObjectId("582a7c215490e553bc996299") } -->> { "_id" : ObjectId("582a7c235490e553bc99777f") } on : firstset Timestamp(1, 10)
{ "_id" : ObjectId("582a7c235490e553bc99777f") } -->> { "_id" : ObjectId("582a7c255490e553bc998c65") } on : firstset Timestamp(1, 11)
{ "_id" : ObjectId("582a7c255490e553bc998c65") } -->> { "_id" : ObjectId("582a7c275490e553bc99a14b") } on : firstset Timestamp(1, 12)
{ "_id" : ObjectId("582a7c275490e553bc99a14b") } -->> { "_id" : ObjectId("582a7c2a5490e553bc99b631") } on : firstset Timestamp(1, 13)
{ "_id" : ObjectId("582a7c2a5490e553bc99b631") } -->> { "_id" : ObjectId("582a7c2c5490e553bc99cb17") } on : firstset Timestamp(1, 14)
{ "_id" : ObjectId("582a7c2c5490e553bc99cb17") } -->> { "_id" : ObjectId("582a7c2e5490e553bc99dffd") } on : firstset Timestamp(1, 15)
{ "_id" : ObjectId("582a7c2e5490e553bc99dffd") } -->> { "_id" : ObjectId("582a7c305490e553bc99f4e3") } on : firstset Timestamp(1, 16)
{ "_id" : ObjectId("582a7c305490e553bc99f4e3") } -->> { "_id" : ObjectId("582a7c335490e553bc9a09c9") } on : firstset Timestamp(1, 17)
{ "_id" : ObjectId("582a7c335490e553bc9a09c9") } -->> { "_id" : { "$maxKey" : 1 } } on : firstset Timestamp(1, 18)
mongos>
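A note on the chunk ranges above: each boundary is an ObjectId, and the first 4 bytes of an ObjectId encode a big-endian Unix timestamp, so splitting on _id effectively splits the collection in insertion order. Decoding two boundary values copied from the output above (pure Python, no driver needed):

```python
from datetime import datetime, timezone

def objectid_time(oid_hex: str) -> datetime:
    # The first 8 hex characters of an ObjectId are a big-endian Unix timestamp.
    return datetime.fromtimestamp(int(oid_hex[:8], 16), tz=timezone.utc)

# Two boundary ObjectIds taken from the chunk list above.
t1 = objectid_time("582a7c0c5490e553bc98a683")
t2 = objectid_time("582a7c335490e553bc9a09c9")

print(t1.isoformat(), t2.isoformat())
print(t1 <= t2)  # boundaries increase with insertion time
```

This is also why an ascending shard key like _id tends to direct all new inserts at the chunk holding $maxKey; for write-heavy workloads a hashed or better-distributed key is usually preferred.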
#Logs from starting the config servers on the three nodes
[mongo@arbiter dns_config1]$ mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 7038
child process started successfully, parent exiting
[mongo@arbiter dns_config1]$
[mongo@mongo1 dns_shard1]$ mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 11566
child process started successfully, parent exiting
[mongo@mongo1 dns_shard1]$
[mongo@mongo2 logs]$ mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 6670
child process started successfully, parent exiting
[mongo@mongo2 logs]$
#Start the mongos processes on nodes mongo1 and mongo2; command to run on both nodes
mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
#Logs from running the command on the two nodes
[mongo@mongo1 logs]$ mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 14689
child process started successfully, parent exiting
[mongo@mongo1 logs]$
[mongo@mongo2 logs]$ mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 7093
child process started successfully, parent exiting
[mongo@mongo2 logs]$
#Configure sharding from mongo1: add firstset as a shard
Commands to run on mongo1
mongo --port 27017
use admin
db.runCommand( { addShard : "firstset/192.168.144.120:10001,192.168.144.130:10001,192.168.144.111:10001" } )
#Log of the commands run on mongo1
[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> db.runCommand( { addShard : "firstset/192.168.144.120:10001,192.168.144.130:10001,192.168.144.111:10001" } )
{ "shardAdded" : "firstset", "ok" : 1 }
mongos>
Step 3: initialize replica set 2 (secondset)
#Start the mongod instances for replica set 2; commands to run on the three nodes
Command on arbiter
mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb
Command on mongo1
mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
Command on mongo2
mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
#Logs from running the commands on the three nodes
[mongo@arbiter dns_aribter2]$ mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:29.478-0800 I CONTROL [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:29.478-0800 I CONTROL [main] ** enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 10192
child process started successfully, parent exiting
[mongo@arbiter dns_aribter2]$
[mongo@mongo1 secondset]$ mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:38.786-0800 I CONTROL [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:38.786-0800 I CONTROL [main] ** enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 15770
child process started successfully, parent exiting
[mongo@mongo1 secondset]$
[mongo@mongo2 secondset]$ mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:53.327-0800 I CONTROL [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:53.327-0800 I CONTROL [main] ** enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 7344
child process started successfully, parent exiting
[mongo@mongo2 secondset]$
#Initialize replica set 2 (secondset)
Commands to run on mongo1
mongo 192.168.144.120:30001/admin
config={_id:"secondset",members:[]}
config.members.push({_id:0,host:"192.168.144.120:30001"})
config.members.push({_id:1,host:"192.168.144.130:30001"})
config.members.push({_id:2,host:"192.168.144.111:30001",arbiterOnly:true})
rs.initiate(config);
#Log of the commands run on mongo1 (the stray 'firstset:PRIMARY>' fragments in the prompts below are puzzling, but they did not affect the operation)
[mongo@mongo1 secondset]$ mongo 192.168.144.120:30001/admin
MongoDB shell version: 3.2.7
connecting to: 192.168.144.120:30001/admin
Server has startup warnings:
2016-11-14T20:32:38.786-0800 I CONTROL [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:38.786-0800 I CONTROL [main] ** enabling http interface
2016-11-14T20:32:38.858-0800 I CONTROL [initandlisten]
2016-11-14T20:32:38.859-0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-11-14T20:32:38.859-0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-11-14T20:32:38.859-0800 I CONTROL [initandlisten]
2016-11-14T20:32:38.859-0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-11-14T20:32:38.859-0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-11-14T20:32:38.859-0800 I CONTROL [initandlisten]
> config={_id:"secondset",members:[]}
{ "_id" : "secondset", "members" : [ ] }
> config.members.push({_id:0,host:"192.168.144.120:30001"})
1
> config.members.push({_id:1,host:"192.168.144.130:30001"})
2
> config.members.push({_id:2,host:"192.168.144.111:30001",arbiterOnly:true})
3
> rs.initiate(config);
{ "ok" : 1 }
secondset:OTHER> firstset:PRIMARY> rs.isMaster()
2016-11-14T20:33:32.215-0800 E QUERY [thread1] ReferenceError: PRIMARY is not defined :
@(shell):1:10
secondset:SECONDARY> firstset:PRIMARY> rs.isMaster();
2016-11-14T20:33:40.892-0800 E QUERY [thread1] ReferenceError: PRIMARY is not defined :
@(shell):1:10
secondset:PRIMARY> rs.isMaster();
{
"hosts" : [
"192.168.144.120:30001",
"192.168.144.130:30001"
],
"arbiters" : [
"192.168.144.111:30001"
],
"setName" : "secondset",
"setVersion" : 1,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.144.120:30001",
"me" : "192.168.144.120:30001",
"electionId" : ObjectId("7fffffff0000000000000001"),
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2016-11-15T04:34:35.210Z"),
"maxWireVersion" : 4,
"minWireVersion" : 0,
"ok" : 1
}
secondset:PRIMARY>
Step 4: Add secondset to the sharded cluster
# Commands to run on mongo1
mongo --port 27017
use admin
db.runCommand( { addShard : "secondset/192.168.144.120:30001,192.168.144.130:30001,192.168.144.111:30001" } )
# Log of the commands run on mongo1
[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
mongos> db.runCommand( { addShard : "secondset/192.168.144.120:30001,192.168.144.130:30001,192.168.144.111:30001" } )
{ "shardAdded" : "secondset", "ok" : 1 }
mongos> db.runCommand({listShards:1})
{
"shards" : [
{
"_id" : "firstset",
"host" : "firstset/192.168.144.120:10001,192.168.144.130:10001"
},
{
"_id" : "secondset",
"host" : "secondset/192.168.144.120:30001,192.168.144.130:30001"
}
],
"ok" : 1
}
mongos>
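Note that `addShard` was given all three members, arbiter included, yet the `listShards` output above stores only `192.168.144.120:30001,192.168.144.130:30001`: the cluster records just the data-bearing members of the set. A small sketch of that filtering, using the seed-list format `setName/host1,host2,...` (the helper name is mine, not a MongoDB API):

```javascript
// Build the "setName/host1,host2" string that the cluster ends up storing
// for a shard backed by a replica set: arbiters are dropped because they
// hold no data and can never serve reads or writes.
function shardHostString(setName, members) {
  const dataBearing = members.filter(m => !m.arbiterOnly).map(m => m.host);
  return setName + "/" + dataBearing.join(",");
}

console.log(shardHostString("secondset", [
  { host: "192.168.144.120:30001" },
  { host: "192.168.144.130:30001" },
  { host: "192.168.144.111:30001", arbiterOnly: true }
]));
// -> secondset/192.168.144.120:30001,192.168.144.130:30001
```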
Step 5: Enable sharding on the test database dns_testdb and shard a collection
# Commands to run on mongo1
mongo --port 27017
use admin
sh.enableSharding("dns_testdb");
db.runCommand({"shardcollection":"dns_testdb.test_collection","key":{"_id":1}});
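The two `runCommand` calls have `sh.*` helper equivalents in the 3.2 mongo shell; run against the same mongos connection, this fragment does the same thing:

```javascript
// Shell-helper equivalents of the runCommand calls above (run on mongos):
sh.enableSharding("dns_testdb");
sh.shardCollection("dns_testdb.test_collection", { _id: 1 });
```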
# Log of the commands run on mongo1
[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> sh.enableSharding("dns_testdb");
{ "ok" : 1 }
mongos> db.runCommand({"shardcollection":"dns_testdb.test_collection","key":{"_id":1}});
{ "collectionsharded" : "dns_testdb.test_collection", "ok" : 1 }
# View replica set / sharded cluster information
mongos> use config
switched to db config
mongos> db.shards.find();
{ "_id" : "firstset", "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{ "_id" : "secondset", "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
mongos>
mongos> use config
switched to db config
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("582a888bd617c6f7926f8843")
}
shards:
{ "_id" : "firstset", "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{ "_id" : "secondset", "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
active mongoses:
"3.2.7" : 2
balancer:
Currently enabled: yes
Currently running: yes
Balancer lock taken at Mon Nov 14 2016 21:34:45 GMT-0800 (PST) by mongo2:27017:1479182487:368039610:Balancer:970925433
Collections with active migrations:
dns_testdb.test_collection started at Mon Nov 14 2016 21:34:45 GMT-0800 (PST)
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
6 : Success
databases:
{ "_id" : "dns_testdb", "primary" : "firstset", "partitioned" : true }
dns_testdb.test_collection
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
firstset 13
secondset 6
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("582a7c0c5490e553bc98a683") } on : secondset Timestamp(2, 0)
{ "_id" : ObjectId("582a7c0c5490e553bc98a683") } -->> { "_id" : ObjectId("582a7c0e5490e553bc98bb69") } on : secondset Timestamp(3, 0)
{ "_id" : ObjectId("582a7c0e5490e553bc98bb69") } -->> { "_id" : ObjectId("582a7c115490e553bc98d04f") } on : secondset Timestamp(4, 0)
{ "_id" : ObjectId("582a7c115490e553bc98d04f") } -->> { "_id" : ObjectId("582a7c135490e553bc98e535") } on : secondset Timestamp(5, 0)
{ "_id" : ObjectId("582a7c135490e553bc98e535") } -->> { "_id" : ObjectId("582a7c155490e553bc98fa1b") } on : secondset Timestamp(6, 0)
{ "_id" : ObjectId("582a7c155490e553bc98fa1b") } -->> { "_id" : ObjectId("582a7c175490e553bc990f01") } on : secondset Timestamp(7, 0)
{ "_id" : ObjectId("582a7c175490e553bc990f01") } -->> { "_id" : ObjectId("582a7c1a5490e553bc9923e7") } on : firstset Timestamp(7, 1)
{ "_id" : ObjectId("582a7c1a5490e553bc9923e7") } -->> { "_id" : ObjectId("582a7c1c5490e553bc9938cd") } on : firstset Timestamp(1, 7)
{ "_id" : ObjectId("582a7c1c5490e553bc9938cd") } -->> { "_id" : ObjectId("582a7c1e5490e553bc994db3") } on : firstset Timestamp(1, 8)
{ "_id" : ObjectId("582a7c1e5490e553bc994db3") } -->> { "_id" : ObjectId("582a7c215490e553bc996299") } on : firstset Timestamp(1, 9)
{ "_id" : ObjectId("582a7c215490e553bc996299") } -->> { "_id" : ObjectId("582a7c235490e553bc99777f") } on : firstset Timestamp(1, 10)
{ "_id" : ObjectId("582a7c235490e553bc99777f") } -->> { "_id" : ObjectId("582a7c255490e553bc998c65") } on : firstset Timestamp(1, 11)
{ "_id" : ObjectId("582a7c255490e553bc998c65") } -->> { "_id" : ObjectId("582a7c275490e553bc99a14b") } on : firstset Timestamp(1, 12)
{ "_id" : ObjectId("582a7c275490e553bc99a14b") } -->> { "_id" : ObjectId("582a7c2a5490e553bc99b631") } on : firstset Timestamp(1, 13)
{ "_id" : ObjectId("582a7c2a5490e553bc99b631") } -->> { "_id" : ObjectId("582a7c2c5490e553bc99cb17") } on : firstset Timestamp(1, 14)
{ "_id" : ObjectId("582a7c2c5490e553bc99cb17") } -->> { "_id" : ObjectId("582a7c2e5490e553bc99dffd") } on : firstset Timestamp(1, 15)
{ "_id" : ObjectId("582a7c2e5490e553bc99dffd") } -->> { "_id" : ObjectId("582a7c305490e553bc99f4e3") } on : firstset Timestamp(1, 16)
{ "_id" : ObjectId("582a7c305490e553bc99f4e3") } -->> { "_id" : ObjectId("582a7c335490e553bc9a09c9") } on : firstset Timestamp(1, 17)
{ "_id" : ObjectId("582a7c335490e553bc9a09c9") } -->> { "_id" : { "$maxKey" : 1 } } on : firstset Timestamp(1, 18)
mongos>
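The chunk listing above shows how a ranged shard key on `_id` partitions the keyspace: each chunk owns a half-open range `[min, max)` from `$minKey` to `$maxKey`, and mongos routes a document to the single chunk whose range contains its key. A sketch of that lookup, with small integer boundaries standing in for the ObjectId boundaries in the output above (hypothetical values, for illustration only):

```javascript
// Route a shard-key value to its chunk: each chunk owns the half-open
// range [min, max); null stands in for $minKey / $maxKey.
function findChunk(chunks, key) {
  return chunks.find(c =>
    (c.min === null || key >= c.min) && (c.max === null || key < c.max));
}

const chunks = [
  { min: null, max: 100, shard: "secondset" }, // $minKey -> 100
  { min: 100,  max: 200, shard: "secondset" },
  { min: 200,  max: 300, shard: "firstset" },
  { min: 300,  max: null, shard: "firstset" }  // 300 -> $maxKey
];

console.log(findChunk(chunks, 150).shard); // -> secondset
console.log(findChunk(chunks, 250).shard); // -> firstset
```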
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("582a888bd617c6f7926f8843")
}
shards:
{ "_id" : "firstset", "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{ "_id" : "secondset", "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
active mongoses:
"3.2.7" : 2
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
9 : Success
databases:
{ "_id" : "dns_testdb", "primary" : "firstset", "partitioned" : true }
dns_testdb.test_collection
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
firstset 10
secondset 9
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("582a7c0c5490e553bc98a683") } on : secondset Timestamp(2, 0)
{ "_id" : ObjectId("582a7c0c5490e553bc98a683") } -->> { "_id" : ObjectId("582a7c0e5490e553bc98bb69") } on : secondset Timestamp(3, 0)
{ "_id" : ObjectId("582a7c0e5490e553bc98bb69") } -->> { "_id" : ObjectId("582a7c115490e553bc98d04f") } on : secondset Timestamp(4, 0)
{ "_id" : ObjectId("582a7c115490e553bc98d04f") } -->> { "_id" : ObjectId("582a7c135490e553bc98e535") } on : secondset Timestamp(5, 0)
{ "_id" : ObjectId("582a7c135490e553bc98e535") } -->> { "_id" : ObjectId("582a7c155490e553bc98fa1b") } on : secondset Timestamp(6, 0)
{ "_id" : ObjectId("582a7c155490e553bc98fa1b") } -->> { "_id" : ObjectId("582a7c175490e553bc990f01") } on : secondset Timestamp(7, 0)
{ "_id" : ObjectId("582a7c175490e553bc990f01") } -->> { "_id" : ObjectId("582a7c1a5490e553bc9923e7") } on : secondset Timestamp(8, 0)
{ "_id" : ObjectId("582a7c1a5490e553bc9923e7") } -->> { "_id" : ObjectId("582a7c1c5490e553bc9938cd") } on : secondset Timestamp(9, 0)
{ "_id" : ObjectId("582a7c1c5490e553bc9938cd") } -->> { "_id" : ObjectId("582a7c1e5490e553bc994db3") } on : secondset Timestamp(10, 0)
{ "_id" : ObjectId("582a7c1e5490e553bc994db3") } -->> { "_id" : ObjectId("582a7c215490e553bc996299") } on : firstset Timestamp(10, 1)
{ "_id" : ObjectId("582a7c215490e553bc996299") } -->> { "_id" : ObjectId("582a7c235490e553bc99777f") } on : firstset Timestamp(1, 10)
{ "_id" : ObjectId("582a7c235490e553bc99777f") } -->> { "_id" : ObjectId("582a7c255490e553bc998c65") } on : firstset Timestamp(1, 11)
{ "_id" : ObjectId("582a7c255490e553bc998c65") } -->> { "_id" : ObjectId("582a7c275490e553bc99a14b") } on : firstset Timestamp(1, 12)
{ "_id" : ObjectId("582a7c275490e553bc99a14b") } -->> { "_id" : ObjectId("582a7c2a5490e553bc99b631") } on : firstset Timestamp(1, 13)
{ "_id" : ObjectId("582a7c2a5490e553bc99b631") } -->> { "_id" : ObjectId("582a7c2c5490e553bc99cb17") } on : firstset Timestamp(1, 14)
{ "_id" : ObjectId("582a7c2c5490e553bc99cb17") } -->> { "_id" : ObjectId("582a7c2e5490e553bc99dffd") } on : firstset Timestamp(1, 15)
{ "_id" : ObjectId("582a7c2e5490e553bc99dffd") } -->> { "_id" : ObjectId("582a7c305490e553bc99f4e3") } on : firstset Timestamp(1, 16)
{ "_id" : ObjectId("582a7c305490e553bc99f4e3") } -->> { "_id" : ObjectId("582a7c335490e553bc9a09c9") } on : firstset Timestamp(1, 17)
{ "_id" : ObjectId("582a7c335490e553bc9a09c9") } -->> { "_id" : { "$maxKey" : 1 } } on : firstset Timestamp(1, 18)
mongos>
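Between the two `printShardingStatus` runs the balancer moved three more chunks (migration count 6 to 9), taking the distribution from 13/6 to 10/9. The 3.2 balancer migrates one chunk per round from the most-loaded shard to the least-loaded one until the difference drops below a migration threshold; assuming the documented threshold of 2 for collections with fewer than 20 chunks (an assumption worth checking against your version's docs), a simulation reproduces exactly this log:

```javascript
// Simulate the balancer loop: move one chunk per round from the shard
// with the most chunks to the one with the fewest, stopping once the
// difference is below the migration threshold (assumed 2 for <20 chunks).
function balance(counts, threshold) {
  let migrations = 0;
  for (;;) {
    const shards = Object.keys(counts);
    const max = shards.reduce((a, b) => (counts[a] >= counts[b] ? a : b));
    const min = shards.reduce((a, b) => (counts[a] <= counts[b] ? a : b));
    if (counts[max] - counts[min] < threshold) return { counts, migrations };
    counts[max]--; counts[min]++; migrations++;
  }
}

const result = balance({ firstset: 13, secondset: 6 }, 2);
console.log(result.counts, result.migrations); // firstset: 10, secondset: 9 after 3 migrations
```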
This completes the verification. Summary: a replica set + sharded cluster can be deployed using IP addresses only, but note that if IPs are used, both replica set initialization and shard registration must use IPs consistently.
Appendix: the mongo processes on the 3 nodes after the replica set + sharded cluster deployment is complete:
[mongo@arbiter ~]$ ps -ef|grep mongo
root 6125 5922 0 18:17 pts/5 00:00:00 su - mongo
mongo 6126 6125 0 18:17 pts/5 00:00:00 -bash
mongo 6566 1 0 18:53 ? 00:01:55 mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb
mongo 7038 1 0 19:22 ? 00:01:47 mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend
mongo 10192 1 0 20:32 ? 00:00:56 mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb
mongo 12503 6126 7 23:03 pts/5 00:00:00 ps -ef
mongo 12504 6126 0 23:03 pts/5 00:00:00 grep mongo
[mongo@arbiter ~]$
[root@mongo1 ~]# ps -ef|grep mongo
root 9467 9040 0 18:20 pts/4 00:00:00 su - mongo
mongo 9468 9467 0 18:20 pts/4 00:00:00 -bash
mongo 10478 1 1 18:53 ? 00:04:20 mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
mongo 11566 1 0 19:24 ? 00:01:29 mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend
mongo 14689 1 0 20:01 ? 00:00:35 mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
mongo 14793 9468 0 20:02 pts/4 00:00:00 mongo --port 27017
root 15383 15365 0 20:21 pts/0 00:00:00 su - mongo
mongo 15384 15383 0 20:21 pts/0 00:00:00 -bash
mongo 15770 1 1 20:32 ? 00:01:55 mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
mongo 15796 15384 0 20:32 pts/0 00:00:00 mongo 192.168.144.120:30001/admin
root 20392 18350 0 23:03 pts/1 00:00:00 grep mongo
[root@mongo1 ~]#
[root@mongo2 ~]# ps -ef|grep mongo
root 6187 3834 0 18:18 pts/1 00:00:00 su - mongo
mongo 6188 6187 0 18:18 pts/1 00:00:00 -bash
mongo 6374 1 1 18:53 ? 00:04:26 mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
mongo 6401 6188 0 18:55 pts/1 00:00:28 mongo --port 10001
root 6589 6534 0 19:16 pts/2 00:00:00 su - mongo
mongo 6590 6589 0 19:16 pts/2 00:00:00 -bash
mongo 6670 1 0 19:25 ? 00:01:26 mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend
mongo 7093 1 0 20:01 ? 00:00:34 mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
mongo 7344 1 1 20:32 ? 00:01:53 mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
mongo 7666 6590 0 21:11 pts/2 00:00:00 mongo --port 27017
root 8253 7909 0 23:04 pts/0 00:00:00 grep mongo
[root@mongo2 ~]#
From "ITPUB Blog", link: http://blog.itpub.net/29357786/viewspace-2128515/. Please credit the source when reprinting; otherwise legal responsibility will be pursued.