MongoDB cluster configuration
master-slave
Master config file /Users/olifer/middle/mongo/master-slave/master/mongod.conf
bind_ip = 127.0.0.1
port = 27017
dbpath = /Users/olifer/middle/mongo/master-slave/master/data/
master = true
Slave config file /Users/olifer/middle/mongo/master-slave/slave/mongod.conf
bind_ip = 127.0.0.1
port = 27018
dbpath = /Users/olifer/middle/mongo/master-slave/slave/data/
slave = true
source = 127.0.0.1:27017
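For reference, the same pair can be started without config files; a minimal sketch using the equivalent command-line flags (same paths as above):
mongod --master --bind_ip 127.0.0.1 --port 27017 --dbpath /Users/olifer/middle/mongo/master-slave/master/data/
mongod --slave --source 127.0.0.1:27017 --bind_ip 127.0.0.1 --port 27018 --dbpath /Users/olifer/middle/mongo/master-slave/slave/data/
Keep in mind that master-slave replication is a legacy mode; it was removed in MongoDB 4.0 in favor of replica sets, which are covered below.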
Start the master server; log output like the following indicates the configuration took effect:
mongod -f ~/middle/mongo/master-slave/master/mongod.conf
2017-12-05T15:35:22.479+0800 I JOURNAL [initandlisten] journal dir=/Users/olifer/middle/mongo/master-slave/master/data/journal
2017-12-05T15:35:22.479+0800 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2017-12-05T15:35:22.500+0800 I JOURNAL [durability] Durability thread started
2017-12-05T15:35:22.500+0800 I JOURNAL [journal writer] Journal writer thread started
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] MongoDB starting : pid=6368 port=27017 dbpath=/Users/olifer/middle/mongo/master-slave/master/data/ master=1 64-bit host=oliferdeMacBook-Pro.local
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] db version v3.0.7
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] git version: nogitversion
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] build info: Darwin yosemitevm.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] allocator: system
2017-12-05T15:35:22.501+0800 I CONTROL [initandlisten] options: { config: "/Users/olifer/middle/mongo/master-slave/master/mongod.conf", master: true, net: { bindIp: "127.0.0.1", port: 27017 }, storage: { dbPath: "/Users/olifer/middle/mongo/master-slave/master/data/" } }
2017-12-05T15:35:22.505+0800 I INDEX [initandlisten] allocating new ns file /Users/olifer/middle/mongo/master-slave/master/data/local.ns, filling with zeroes...
2017-12-05T15:35:22.570+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/master/data/local.0, filling with zeroes...
2017-12-05T15:35:22.570+0800 I STORAGE [FileAllocator] creating directory /Users/olifer/middle/mongo/master-slave/master/data/_tmp
2017-12-05T15:35:22.786+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/master/data/local.0, size: 64MB, took 0.215 secs
2017-12-05T15:35:22.972+0800 I REPL [initandlisten] ******
2017-12-05T15:35:22.972+0800 I REPL [initandlisten] creating replication oplog of size: 192MB...
2017-12-05T15:35:22.972+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/master/data/local.1, filling with zeroes...
2017-12-05T15:35:24.061+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/master/data/local.1, size: 256MB, took 1.088 secs
2017-12-05T15:35:24.117+0800 I REPL [initandlisten] ******
2017-12-05T15:35:24.119+0800 I NETWORK [initandlisten] waiting for connections on port 27017
Start the slave server; again, log output like the following indicates success:
mongod -f ~/middle/mongo/master-slave/slave/mongod.conf
2017-12-05T15:37:11.518+0800 I JOURNAL [initandlisten] journal dir=/Users/olifer/middle/mongo/master-slave/slave/data/journal
2017-12-05T15:37:11.518+0800 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2017-12-05T15:37:11.535+0800 I JOURNAL [durability] Durability thread started
2017-12-05T15:37:11.535+0800 I JOURNAL [journal writer] Journal writer thread started
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] MongoDB starting : pid=7304 port=27018 dbpath=/Users/olifer/middle/mongo/master-slave/slave/data/ slave=1 64-bit host=oliferdeMacBook-Pro.local
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] db version v3.0.7
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] git version: nogitversion
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] build info: Darwin yosemitevm.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] allocator: system
2017-12-05T15:37:11.535+0800 I CONTROL [initandlisten] options: { config: "/Users/olifer/middle/mongo/master-slave/slave/mongod.conf", net: { bindIp: "127.0.0.1", port: 27018 }, slave: true, source: "127.0.0.1:27017", storage: { dbPath: "/Users/olifer/middle/mongo/master-slave/slave/data/" } }
2017-12-05T15:37:11.536+0800 I INDEX [initandlisten] allocating new ns file /Users/olifer/middle/mongo/master-slave/slave/data/local.ns, filling with zeroes...
2017-12-05T15:37:11.605+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/slave/data/local.0, filling with zeroes...
2017-12-05T15:37:11.605+0800 I STORAGE [FileAllocator] creating directory /Users/olifer/middle/mongo/master-slave/slave/data/_tmp
2017-12-05T15:37:11.824+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/slave/data/local.0, size: 64MB, took 0.218 secs
2017-12-05T15:37:11.890+0800 I NETWORK [initandlisten] waiting for connections on port 27018
2017-12-05T15:37:12.894+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:17.930+0800 I REPL [replslave] repl: sleep 2 sec before next pass
2017-12-05T15:37:19.935+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:24.958+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:33.188+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:43.193+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
2017-12-05T15:37:53.198+0800 I REPL [replslave] repl: syncing from host:127.0.0.1:27017
As the slave's log shows, it keeps polling the master to stay in sync.
Now connect to both servers with the mongo client:
mongo 127.0.0.1:27017 #master
mongo 127.0.0.1:27018 #slave
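Once connected, the shell's replication helpers give a quick view of sync state; a sketch (exact output varies by version):
> db.printReplicationInfo()      # on the master: oplog size and time range
> db.printSlaveReplicationInfo() # on the master: each slave's syncedTo time and estimated lag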
On the master:
> db.test.insert({"name":"linyang"})
WriteResult({ "nInserted" : 1 })
> db.test.find({});
{ "_id" : ObjectId("5a264e0ecb6f7d3713c516a7"), "name" : "linyang" }
We inserted one record; the slave's log now shows the corresponding sync activity:
2017-12-05T15:43:10.387+0800 I INDEX [replslave] allocating new ns file /Users/olifer/middle/mongo/master-slave/slave/data/test.ns, filling with zeroes...
2017-12-05T15:43:10.450+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/slave/data/test.0, filling with zeroes...
2017-12-05T15:43:10.646+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/slave/data/test.0, size: 64MB, took 0.196 secs
2017-12-05T15:43:10.702+0800 I REPL [replslave] resync: dropping database test
2017-12-05T15:43:10.709+0800 I JOURNAL [replslave] journalCleanup...
2017-12-05T15:43:10.710+0800 I JOURNAL [replslave] removeJournalFiles
2017-12-05T15:43:10.713+0800 I JOURNAL [replslave] journalCleanup...
2017-12-05T15:43:10.713+0800 I JOURNAL [replslave] removeJournalFiles
2017-12-05T15:43:10.715+0800 I REPL [replslave] resync: cloning database test to get an initial copy
2017-12-05T15:43:10.718+0800 I INDEX [replslave] allocating new ns file /Users/olifer/middle/mongo/master-slave/slave/data/test.ns, filling with zeroes...
2017-12-05T15:43:10.803+0800 I STORAGE [FileAllocator] allocating new datafile /Users/olifer/middle/mongo/master-slave/slave/data/test.0, filling with zeroes...
2017-12-05T15:43:11.056+0800 I STORAGE [FileAllocator] done allocating datafile /Users/olifer/middle/mongo/master-slave/slave/data/test.0, size: 64MB, took 0.252 secs
2017-12-05T15:43:11.107+0800 I INDEX [replslave] build index on: test.test properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "test.test" }
2017-12-05T15:43:11.107+0800 I INDEX [replslave] building index using bulk method
2017-12-05T15:43:11.107+0800 I INDEX [replslave] build index done. scanned 1 total records. 0 secs
2017-12-05T15:43:11.107+0800 I STORAGE [replslave] copying indexes for: { name: "test", options: {} }
2017-12-05T15:43:11.108+0800 I REPL [replslave] resync: done with initial clone for db: test
Now check from the slave's client whether the data arrived:
mongo 127.0.0.1:27018
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:27018/test
> db.test.find({});
{ "_id" : ObjectId("5a264e0ecb6f7d3713c516a7"), "name" : "linyang" }
>
The record has indeed been replicated. Next, try inserting on the slave:
> db.test.insert({"age":34});
WriteResult({ "writeError" : { "code" : undefined, "errmsg" : "not master" } })
>
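The "not master" error is the slave refusing the write. A client can check ahead of time whether the node it is connected to accepts writes; db.isMaster() is the standard shell helper:
> db.isMaster().ismaster
false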
So writes on the slave are rejected. If the master dies and the slave cannot accept writes, is the whole cluster dead? Of course not: MongoDB also offers replica sets.
Replica set
We use three mongod instances to simulate one.
First instance's config file /Users/olifer/middle/mongo/replica/a/mongod.conf
dbpath = /Users/olifer/middle/mongo/replica/a/data
port = 8001
bind_ip = 127.0.0.1
replSet = child/127.0.0.1:8002
Second instance's config file /Users/olifer/middle/mongo/replica/b/mongod.conf
dbpath = /Users/olifer/middle/mongo/replica/b/data
port = 8002
bind_ip = 127.0.0.1
replSet = child/127.0.0.1:8003
Third instance's config file /Users/olifer/middle/mongo/replica/c/mongod.conf
dbpath = /Users/olifer/middle/mongo/replica/c/data
port = 8003
bind_ip = 127.0.0.1
replSet = child/127.0.0.1:8001
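A note on the replSet syntax: the part after the slash is only a seed address used to locate other members; the set name before the slash is what matters, and once the set is initiated the seed is unnecessary. A minimal command-line equivalent for the first node (options taken from the config above):
mongod --replSet child --bind_ip 127.0.0.1 --port 8001 --dbpath /Users/olifer/middle/mongo/replica/a/data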
Start the three servers:
mongod -f ~/middle/mongo/replica/a/mongod.conf
mongod -f ~/middle/mongo/replica/b/mongod.conf
mongod -f ~/middle/mongo/replica/c/mongod.conf
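If you would rather not keep three terminals open, mongod can daemonize itself; a sketch with an assumed log path (--fork requires a log destination such as --logpath):
mongod -f ~/middle/mongo/replica/a/mongod.conf --fork --logpath ~/middle/mongo/replica/a/mongod.log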
Once they are up, open a client against any one of the three:
mongo 127.0.0.1:8002
> config = {_id: 'child', members: [
{
"_id":1,
"host":"127.0.0.1:8001"
},
{
"_id":2,
"host":"127.0.0.1:8002"
},
{
"_id":3,
"host":"127.0.0.1:8003"
}
]
}
{
"_id" : "child",
"members" : [
{
"_id" : 1,
"host" : "127.0.0.1:8001"
},
{
"_id" : 2,
"host" : "127.0.0.1:8002"
},
{
"_id" : 3,
"host" : "127.0.0.1:8003"
}
]
}
> rs.initiate(config);
{ "ok" : 1 }
child:SECONDARY>
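Before moving on: rs.conf() prints the configuration the set is currently running with, and members can be added later from the primary; the fourth host below is hypothetical:
child:PRIMARY> rs.conf()                // current replica set configuration
child:PRIMARY> rs.add("127.0.0.1:8004") // add a (hypothetical) fourth member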
After initiation the shell prompt changes. Connect to the other two nodes:
mongo 127.0.0.1:8001
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:8001/test
child:PRIMARY>
mongo 127.0.0.1:8003
MongoDB shell version: 3.0.7
connecting to: 127.0.0.1:8003/test
child:SECONDARY>
The child:PRIMARY> prompt marks the active (primary) node; the others are secondaries (backups). Note that by default only the primary serves queries; running them on a secondary raises an error unless secondary reads are enabled (see the rs.slaveOk() note below). Run rs.status() in any client to inspect every member's state:
child:PRIMARY> rs.status()
{
"set" : "child",
"date" : ISODate("2017-12-05T08:56:44.523Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "127.0.0.1:8001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 620,
"optime" : Timestamp(1512463818, 1),
"optimeDate" : ISODate("2017-12-05T08:50:18Z"),
"electionTime" : Timestamp(1512463821, 1),
"electionDate" : ISODate("2017-12-05T08:50:21Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 2,
"name" : "127.0.0.1:8002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 387,
"optime" : Timestamp(1512463818, 1),
"optimeDate" : ISODate("2017-12-05T08:50:18Z"),
"lastHeartbeat" : ISODate("2017-12-05T08:56:43.756Z"),
"lastHeartbeatRecv" : ISODate("2017-12-05T08:56:43.756Z"),
"pingMs" : 0,
"configVersion" : 1
},
{
"_id" : 3,
"name" : "127.0.0.1:8003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 387,
"optime" : Timestamp(1512463818, 1),
"optimeDate" : ISODate("2017-12-05T08:50:18Z"),
"lastHeartbeat" : ISODate("2017-12-05T08:56:43.756Z"),
"lastHeartbeatRecv" : ISODate("2017-12-05T08:56:43.756Z"),
"pingMs" : 0,
"configVersion" : 1
}
],
"ok" : 1
}
With the set up and running, let's verify it.
Insert data on the primary, then read it back from a secondary:
child:PRIMARY> db.repl.insert({"name":"123"});
WriteResult({ "nInserted" : 1 })
child:SECONDARY> db.repl.find();
{ "_id" : ObjectId("5a26602f5cefb1fdb377843b"), "name" : "123" }
Works as expected.
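If the find() on a secondary instead fails with "not master and slaveOk=false" (the default behavior when secondary reads have not been enabled in that shell session), allow it first:
child:SECONDARY> rs.slaveOk()   // permit reads on this secondary for this session
child:SECONDARY> db.repl.find()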
Secondaries cannot accept writes:
child:SECONDARY> db.repl.insert({"age":444});
WriteResult({ "writeError" : { "code" : undefined, "errmsg" : "not master" } })
Shut down the primary, and the remaining members elect a new one.
After I shut down the original primary on 8001, an internal election made 8002 the new primary, which the prompts also show:
child:SECONDARY>
child:PRIMARY>
Now look at the latest status:
child:PRIMARY> rs.status();
{
"set" : "child",
"date" : ISODate("2017-12-05T09:09:31.666Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "127.0.0.1:8001",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-12-05T09:09:31.193Z"),
"lastHeartbeatRecv" : ISODate("2017-12-05T09:07:12.773Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "Failed attempt to connect to 127.0.0.1:8001; couldn't connect to server 127.0.0.1:8001 (127.0.0.1), connection attempt failed",
"configVersion" : -1
},
{
"_id" : 2,
"name" : "127.0.0.1:8002",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1308,
"optime" : Timestamp(1512464432, 2),
"optimeDate" : ISODate("2017-12-05T09:00:32Z"),
"electionTime" : Timestamp(1512464835, 1),
"electionDate" : ISODate("2017-12-05T09:07:15Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 3,
"name" : "127.0.0.1:8003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1152,
"optime" : Timestamp(1512464432, 2),
"optimeDate" : ISODate("2017-12-05T09:00:32Z"),
"lastHeartbeat" : ISODate("2017-12-05T09:09:30.963Z"),
"lastHeartbeatRecv" : ISODate("2017-12-05T09:09:30.963Z"),
"pingMs" : 0,
"configVersion" : 1
}
],
"ok" : 1
}
This confirms the internal failover worked. That leaves sharding, which we will cover next time.
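One last tip: you do not need to kill a process to exercise failover; asking the primary to step down also forces an election:
child:PRIMARY> rs.stepDown()    // primary steps down (for 60s by default) and an election follows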