MongoDB Replica Set Practice
MongoDB Replica Sets
A replica set (Replica Set) is master-slave replication with built-in automatic failover and recovery.
MongoDB no longer recommends the legacy master-slave mode; the recommended replacement is the replica set.
The officially recommended minimum replica set is a three-member set: one primary and two secondaries.
The main differences between a replica set and master-slave replication:
1. The cluster has no fixed primary database.
2. When the primary fails (e.g. the host goes down), the cluster elects a new primary, providing automatic failover.
Create the dbpath and logpath directories
~]# mkdir -p /data/mongodb/node{1,2,3}
~]# mkdir -p /data/mongodb/logs
Start three mongod processes
# Ports are 31234, 32345 and 33456
# Define the replica set name (here yujx) with the --replSet option
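# --httpinterface and --rest enable the HTTP status page on each port + 1000 (e.g. 32234 for node1); they are optional for this exercise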
~]#mongod --fork --dbpath /data/mongodb/node1 --logpath /data/mongodb/logs/node1.log --rest --httpinterface --replSet yujx --port 31234
~]#mongod --fork --dbpath /data/mongodb/node2 --logpath /data/mongodb/logs/node2.log --rest --httpinterface --replSet yujx --port 32345
~]#mongod --fork --dbpath /data/mongodb/node3 --logpath /data/mongodb/logs/node3.log --rest --httpinterface --replSet yujx --port 33456
Initialize a Replica Set
# First, build a replica set configuration object
~]# /usr/local/mongodb/bin/mongo --port 31234
MongoDB shell version: 2.6.6
connecting to: 127.0.0.1:31234/test
> use admin
switched to db admin
> rsconf={
... "_id" : "yujx",
... "members" : [
... {
... "_id" : 0,
... "host" : "192.168.211.217:31234"
... }
... ]
... }
{
"_id" : "yujx",
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234"
}
]
}
# Initialize with rs.initiate()
> rs.initiate(rsconf)
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
Add the other mongod instances to the replica set
# Use rs.add() to add the other two mongod instances to the replica set
> rs.add("192.168.211.217:32345")
{ "ok" : 1 }
yujx:PRIMARY> rs.add("192.168.211.217:33456")
{ "ok" : 1 }
# Note that the mongod on 31234 has already become the PRIMARY
View the replica set configuration
# rs.conf() shows the current configuration
yujx:PRIMARY> rs.conf()
{
"_id" : "yujx",
"version" : 3,
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234"
},
{
"_id" : 1,
"host" : "192.168.211.217:32345"
},
{
"_id" : 2,
"host" : "192.168.211.217:33456"
}
]
}
View the replica set status
# rs.status() shows the state of every member
yujx:PRIMARY> rs.status()
{
"set" : "yujx",
"date" : ISODate("2015-01-14T08:07:50Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.211.217:31234",
"health" : 1, #1表明狀態是正常,0表明異常
"state" : 1, #1表明是primary,2表明是slave
"stateStr" : "PRIMARY",
"uptime" : 2261,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"electionTime" : Timestamp(1421222174, 1),
"electionDate" : ISODate("2015-01-14T07:56:14Z"),
"self" : true
},
{
"_id" : 1,
"name" : "192.168.211.217:32345",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 607,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:07:49Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:07:50Z"),
"pingMs" : 0,
"syncingTo" : "192.168.211.217:31234"
},
{
"_id" : 2,
"name" : "192.168.211.217:33456",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 596,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:07:50Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:07:50Z"),
"pingMs" : 0,
"syncingTo" : "192.168.211.217:31234"
}
],
"ok" : 1
}
Failover test
# Stop node1 (31234), then check the cluster status: node3 has been elected the new PRIMARY
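# One way to take node1 down for the test (a sketch; killing the mongod process works just as well):
~]# /usr/local/mongodb/bin/mongo --port 31234
yujx:PRIMARY> use admin
yujx:PRIMARY> db.shutdownServer()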
yujx:SECONDARY> rs.status()
{
"set" : "yujx",
"date" : ISODate("2015-01-14T08:13:37Z"),
"myState" : 2,
"syncingTo" : "192.168.211.217:33456",
"members" : [
{
"_id" : 0,
"name" : "192.168.211.217:31234",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:13:36Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:13:25Z"),
"pingMs" : 0
},
{
"_id" : 1,
"name" : "192.168.211.217:32345",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2497,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"infoMessage" : "syncing to: 192.168.211.217:33456",
"self" : true
},
{
"_id" : 2,
"name" : "192.168.211.217:33456",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 943,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:13:36Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:13:36Z"),
"pingMs" : 0,
"electionTime" : Timestamp(1421223211, 1),
"electionDate" : ISODate("2015-01-14T08:13:31Z")
}
],
"ok" : 1
# Start node1 again; it rejoins as a SECONDARY
yujx:SECONDARY> rs.status()
{
"set" : "yujx",
"date" : ISODate("2015-01-14T08:16:27Z"),
"myState" : 2,
"syncingTo" : "192.168.211.217:33456",
"members" : [
{
"_id" : 0,
"name" : "192.168.211.217:31234",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 20,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"infoMessage" : "syncing to: 192.168.211.217:33456",
"self" : true
......
Check secondary replication lag
yujx:PRIMARY> db.printSlaveReplicationInfo()
source: 192.168.211.217:31234
syncedTo: Wed Jan 14 2015 15:57:54 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 192.168.211.217:32345
syncedTo: Wed Jan 14 2015 15:57:54 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
Change member priority
# For example, raise the priority of node1 (31234) so that node1 becomes the primary
yujx:PRIMARY> rs.conf()
yujx:PRIMARY> cfg=rs.conf();
yujx:PRIMARY> cfg.members[0].priority=10;
yujx:PRIMARY> rs.reconfig(cfg); # Note: changing member priority must be done on the current PRIMARY
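# rs.reconfig() triggers a new election: node1 (priority 10) becomes PRIMARY and the member this shell is connected to steps down, which is why the prompt below shows SECONDARY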
yujx:SECONDARY> rs.status()
{
"set" : "yujx",
"date" : ISODate("2015-01-14T08:51:20Z"),
"myState" : 2,
"syncingTo" : "192.168.211.217:31234",
"members" : [
{
"_id" : 0,
"name" : "192.168.211.217:31234",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 11,
"optime" : Timestamp(1421225469, 1),
"optimeDate" : ISODate("2015-01-14T08:51:09Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:51:19Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:51:18Z"),
"pingMs" : 0,
"electionTime" : Timestamp(1421225470, 1),
"electionDate" : ISODate("2015-01-14T08:51:10Z")
}
Add and remove nodes
MongoDB can add nodes seamlessly without downtime. The commands are simple; two steps are enough:
1. Start the new mongod, specifying the replica set name
2. Add the new node to the replica set with rs.add()
~]# mkdir /data/mongodb/node4
~]#mongod --fork --dbpath /data/mongodb/node4 --logpath /data/mongodb/logs/node4.log --rest --httpinterface --replSet yujx --port 34567
yujx:PRIMARY> rs.add("192.168.211.217:34567")
{ "ok" : 1 }
yujx:PRIMARY> rs.conf()
{
"_id" : "yujx",
"version" : 5,
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234",
"priority" : 10
},
{
"_id" : 1,
"host" : "192.168.211.217:32345"
},
{
"_id" : 2,
"host" : "192.168.211.217:33456"
},
{
"_id" : 3,
"host" : "192.168.211.217:34567"
}
]
}
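# Remove the node again with rs.remove(); the reconfig resets the shell's connection, so the error and reconnect messages below are expected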
yujx:PRIMARY> rs.remove("192.168.211.217:34567")
2015-01-14T18:04:34.255+0800 DBClientCursor::init call() failed
2015-01-14T18:04:34.256+0800 Error: error doing query: failed at src/mongo/shell/query.js:81
2015-01-14T18:04:34.269+0800 trying reconnect to 127.0.0.1:31234 (127.0.0.1) failed
2015-01-14T18:04:34.269+0800 reconnect 127.0.0.1:31234 (127.0.0.1) ok
yujx:PRIMARY> rs.conf()
{
"_id" : "yujx",
"version" : 6,
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234",
"priority" : 10
},
{
"_id" : 1,
"host" : "192.168.211.217:32345"
},
{
"_id" : 2,
"host" : "192.168.211.217:33456"
}
]
}
Add an arbiter node
An arbiter stores no data and only votes in elections, so it places very little load on its server.
Once a node joins the set as an arbiter it can only ever be an arbiter; an arbiter cannot be reconfigured into a data-bearing member, and vice versa.
Also, a replica set should use at most one arbiter: extra arbiters slow the election of a new primary and do not provide any additional data safety.
# When initializing the set, an arbiter can be configured like this:
config = { _id:"mvbox",
members:[
{_id:0,host:"192.168.1.1:27017"},
{_id:1,host:"192.168.1.2:27017",arbiterOnly:true},
{_id:2,host:"192.168.1.3:27017"}]
}
Arbiters are mainly used when a replica set needs an odd number of voting members but there are not enough servers. When servers are plentiful, arbiter nodes should not be used.
# Add an arbiter online
yujx:PRIMARY> rs.addArb("192.168.211.217:34567")
{ "down" : [ "192.168.211.217:34567" ], "ok" : 1 }
yujx:PRIMARY> rs.conf()
{
"_id" : "yujx",
"version" : 9,
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234",
"priority" : 10
},
{
"_id" : 1,
"host" : "192.168.211.217:32345"
},
{
"_id" : 2,
"host" : "192.168.211.217:33456"
},
{
"_id" : 3,
"host" : "192.168.211.217:34567",
"arbiterOnly" : true
}
]
}
yujx:PRIMARY> rs.status()
......
{
"_id" : 3,
"name" : "192.168.211.217:34567",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 11,
"lastHeartbeat" : ISODate("2015-01-14T10:12:56Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T10:12:56Z"),
"pingMs" : 0
}
],
"ok" : 1
}
Read/write separation
By default a SECONDARY accepts neither reads nor writes. In read-heavy applications a replica set can be used for read/write separation: by enabling secondary reads on the connection (slaveOk), the secondaries take part of the read load while the primary handles all writes.
# Enable reads on the secondary itself
# using rs.slaveOk() or db.getMongo().setSlaveOk()
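# A minimal sketch of reading from a secondary (the collection name "foo" is only for illustration):
~]# /usr/local/mongodb/bin/mongo --port 32345
yujx:SECONDARY> rs.slaveOk()
yujx:SECONDARY> db.foo.find()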
Question:
The replica set handles automatic failover by itself, but how does the application automatically switch to the new primary's IP?
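A common answer, sketched here with the hosts from this post: let the driver do the switching. Connect with a seed list of several members plus the replica set name; the driver monitors the set and re-routes operations to whichever member is currently PRIMARY, so the application never hard-codes the primary's IP.
# Standard connection URI form accepted by the official drivers:
# mongodb://192.168.211.217:31234,192.168.211.217:32345,192.168.211.217:33456/test?replicaSet=yujx
# The mongo shell accepts the equivalent "setname/host1,host2,..." seed-list form:
~]# /usr/local/mongodb/bin/mongo --host yujx/192.168.211.217:31234,192.168.211.217:32345,192.168.211.217:33456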