Building a highly available sharded cluster with MongoDB replica sets: the Replica Sets + Sharding setup process
Reference: http://mongodb.blog.51cto.com/1071559/740131
Thanks to Mr.Sharp, who gave me many useful suggestions.
Concept overview
A sharded cluster has the following components: shards, query routers and config servers.
Shards: A shard is a MongoDB instance that holds a subset of a collection's data. Each shard is either a single mongod instance or a replica set. In production, all shards are replica sets.
Config Servers: Each config server is a mongod instance that holds metadata about the cluster. The metadata maps chunks to shards.
Routing Instances: Each router is a mongos instance that routes the reads and writes from applications to the shards. Applications do not access the shards directly.
Sharding is the only solution for some classes of deployments. Use sharded clusters if:
1 your data set approaches or exceeds the storage capacity of a single MongoDB instance.
2 the size of your system's active working set will soon exceed the capacity of your system's maximum RAM.
3 a single MongoDB instance cannot meet the demands of your write operations, and all other approaches have not reduced contention.
Deployment
Deploy a Sharded Cluster
1 Start the Config Server Database Instances
1.1 create data directories for each of the three config server instances (one per instance, matching the dbpath values in the config files below)
mkdir -p /opt/data/configdb1
mkdir -p /opt/data/configdb2
mkdir -p /opt/data/configdb3
1.2 Start the three config server instances. Start each by issuing a command using the following syntax (we skip the bare command-line form here and use config files, set up in 1.3 and 1.4, instead).
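For reference, a minimal sketch of that bare command-line form, assuming the same ports and data directories as the config files below (the log file names here are just placeholders):
/db/mongodb/bin/mongod --configsvr --dbpath /opt/data/configdb1 --port 37017 --fork --logpath /var/log/mongodb-configsvr1.log
/db/mongodb/bin/mongod --configsvr --dbpath /opt/data/configdb2 --port 37018 --fork --logpath /var/log/mongodb-configsvr2.log
/db/mongodb/bin/mongod --configsvr --dbpath /opt/data/configdb3 --port 37019 --fork --logpath /var/log/mongodb-configsvr3.log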
1.3 config files
[root@472322 configdb]# vim /etc/mongodb/37018.conf
# This is an example config file for MongoDB
dbpath = /opt/data/configdb2
port = 37018
rest = true
fork = true
logpath = /var/log/mongodb12.log
logappend = true
directoryperdb = true
configsvr = true
[root@472322 configdb]# vim /etc/mongodb/37017.conf
# This is an example config file for MongoDB
dbpath = /opt/data/configdb1
port = 37017
rest = true
fork = true
logpath = /var/log/mongodb11.log
logappend = true
directoryperdb = true
configsvr = true
[root@472322 configdb]# vim /etc/mongodb/37019.conf
# This is an example config file for MongoDB
dbpath = /opt/data/configdb3
port = 37019
rest = true
fork = true
logpath = /var/log/mongodb13.log
logappend = true
directoryperdb = true
configsvr = true
1.4 start the three config server instances
/db/mongodb/bin/mongod -f /etc/mongodb/37019.conf
/db/mongodb/bin/mongod -f /etc/mongodb/37018.conf
/db/mongodb/bin/mongod -f /etc/mongodb/37017.conf
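Before starting mongos, it is worth checking that all three config servers are up and answering; a quick sanity check (a sketch, adjust ports and paths if yours differ):
ps -ef | grep mongod
/db/mongodb/bin/mongo --port 37017 --eval "printjson(db.adminCommand({ping:1}))"
/db/mongodb/bin/mongo --port 37018 --eval "printjson(db.adminCommand({ping:1}))"
/db/mongodb/bin/mongo --port 37019 --eval "printjson(db.adminCommand({ping:1}))"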
2 Start the mongos Instances
The mongos instances are lightweight and do not require data directories. You can run a mongos instance on a system that runs other cluster components, such as on an application server or a server running a mongod process. By default, a mongos instance runs on port 27017.
/db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019
or run "nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 &"
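As an alternative (a hedged sketch, not part of the original walkthrough), mongos also accepts a config file with the same options, which is easier to manage than a long command line; a hypothetical /etc/mongodb/mongos27017.conf could look like this and be started with /db/mongodb/bin/mongos -f /etc/mongodb/mongos27017.conf:
configdb = 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019
port = 27017
fork = true
logpath = /var/log/mongos1.log
logappend = true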
2.1 start mongos on default port 27017
[root@472322 ~]# nohup /db/mongodb/bin/mongos --configdb
20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 &
Tue Aug 6 07:40:09 /db/mongodb/bin/mongos db version v2.0.1, pdfile
version 4.5 starting (--help for usage)
Tue Aug 6 07:40:09 git version:
3a5cf0e2134a830d38d2d1aae7e88cac31bdd684
Tue Aug 6 07:40:09 build info: Linux bs-linux64.10gen.cc
2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64
BOOST_LIB_VERSION=1_41
Tue Aug 6 07:40:09 SyncClusterConnection connecting to
[20.10x.91.119:37017]
Tue Aug 6 07:40:09 SyncClusterConnection connecting to
[20.10x.91.119:37018]
Tue Aug 6 07:40:09 SyncClusterConnection connecting to
[20.10x.91.119:37019]
Tue Aug 6 07:40:09 [Balancer] about to contact config servers and
shards
Tue Aug 6 07:40:09 [websvr] admin web console waiting for
connections on port 28017
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37017]
Tue Aug 6 07:40:09 [mongosMain] waiting for connections on port
27017
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37018]
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37019]
Tue Aug 6 07:40:09 [Balancer] config servers and shards contacted
successfully
Tue Aug 6 07:40:09 [Balancer] balancer id: 472322.ea.com:27017
started at Aug 6 07:40:09
Tue Aug 6 07:40:09 [Balancer] created new distributed lock for
balancer on 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 ( lock
timeout : 900000, ping interval : 30000, process : 0 )
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37017]
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37018]
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37019]
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37017]
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37018]
Tue Aug 6 07:40:09 [Balancer] SyncClusterConnection connecting to
[20.10x.91.119:37019]
Tue Aug 6 07:40:09 [LockPinger] creating distributed lock ping
thread for 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 and
process 472322.ea.com:27017:1375774809:1804289383 (sleeping for 30000ms)
Tue Aug 6 07:40:10 [Balancer] distributed lock
'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8598caa3e21e9888bd3
Tue Aug 6 07:40:10 [Balancer] distributed lock
'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.
Tue Aug 6 07:40:20 [Balancer] distributed lock
'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8648caa3e21e9888bd4
Tue Aug 6 07:40:20 [Balancer] distributed lock
'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.
Tue Aug 6 07:40:30 [Balancer] distributed lock
'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a86e8caa3e21e9888bd5
Tue Aug 6 07:40:30 [Balancer] distributed lock
'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.
Tue Aug 6 07:40:40 [Balancer] distributed lock
'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8788caa3e21e9888bd6
Tue Aug 6 07:40:40 [Balancer] distributed lock
'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.
2.2 start the other mongos instances on the designated ports 27018 and 27019
nohup /db/mongodb/bin/mongos --configdb
20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 --port 27018
--chunkSize 1 --logpath /var/log/mongos2.log --fork &
nohup /db/mongodb/bin/mongos --configdb
20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 --port 27019
--chunkSize 1 --logpath /var/log/mongos3.log --fork &
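Note that --chunkSize is a cluster-wide setting stored on the config servers (here 1 MB, which is handy for forcing chunk splits while testing); you can confirm the value that took effect from any mongos shell:
mongos> use config
mongos> db.settings.find()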
3 Connect to a mongos instance
[root@472322 ~]# /db/mongodb/bin/mongo --host 20.10x.91.119 --port 27017
The replica sets have not been created yet, so we set them up next.
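At this point the cluster has no shards yet, which you can confirm from the mongos shell before moving on (listShards should return an empty shards array):
mongos> use admin
mongos> db.runCommand( { listShards: 1 } )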
4 prepare the replica sets
4.1 the first set
(Note: the data directories and config files shown below are for the members on ports 27037-27039, while the set initiated in 4.1.5, rpl1, runs on ports 27027-27029 from analogous config files; in every member's config file the replSet value must match the set name passed to rs.initiate.)
4.1.1 create the data directories
mkdir -p /opt/db/mongodb-data/db37
mkdir -p /opt/db/mongodb-data/db38
mkdir -p /opt/db/mongodb-data/db39
4.1.2 set the config file 27037.conf
[root@472322 mongodb]# vi 27037.conf
dbpath =/opt/db/mongodb-data/db37
port = 27037
rest = true
fork = true
logpath = /var/log/mongodb37.log
logappend = true
replSet = rpl
#diaglog = 3
profile = 2    # profiling level must be 0, 1, or 2
slowms=50
oplogSize=4000
4.1.3 set the config file 27039.conf
[root@472322 mongodb]# vi 27039.conf
# This is an example config file for MongoDB
dbpath =/opt/db/mongodb-data/db39
port = 27039
rest = true
fork = true
logpath = /var/log/mongodb39.log
logappend = true
replSet = rpl
#diaglog = 3
profile = 2    # profiling level must be 0, 1, or 2
slowms=50
oplogSize=4000
4.1.4 set the config file 27038.conf
[root@472322 mongodb]# vi 27038.conf
# This is an example config file for MongoDB
dbpath =/opt/db/mongodb-data/db38
port = 27038
rest = true
fork = true
logpath = /var/log/mongodb38.log
logappend = true
replSet = rpl
#diaglog = 3
profile = 2    # profiling level must be 0, 1, or 2
slowms=50
oplogSize=4000
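Before running rs.initiate in 4.1.5, the member mongod processes must be running and you need a mongo shell on one of them. A minimal sketch, assuming the config files above are saved under /etc/mongodb/ (the original post does not show this step; the rpl1 members on ports 27027-27029 are started the same way from their own config files):
/db/mongodb/bin/mongod -f /etc/mongodb/27037.conf
/db/mongodb/bin/mongod -f /etc/mongodb/27038.conf
/db/mongodb/bin/mongod -f /etc/mongodb/27039.conf
/db/mongodb/bin/mongo --port 27027
Then run the rs.initiate shown below from that shell.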
4.1.5 initiate the first replica set (rpl1)
config = {_id: 'rpl1', members: [
{_id: 0, host: '127.0.0.1:27027'},
{_id: 1, host: '127.0.0.1:27028'},
{_id: 3, host: '127.0.0.1:27029', arbiterOnly: true}
]};
> config = {_id: 'rpl1', members: [
... {_id: 0, host: '127.0.0.1:27027'},
... {_id: 1, host: '127.0.0.1:27028'},
... {_id: 3, host: '127.0.0.1:27029', arbiterOnly: true}
... ]};
{
"_id" : "rpl1",
"members" : [
{
"_id" : 0,
"host" :
"127.0.0.1:27027"
},
{
"_id" : 1,
"host" :
"127.0.0.1:27028"
},
{
"_id" : 3,
"host" :
"127.0.0.1:27029",
"arbiterOnly" : true
}
]
}
rs.initiate(config);
> rs.initiate(config);
{
"info" : "Config now saved
locally. Should come online in about a minute.",
"ok" : 1
}
rs.status();
PRIMARY> rs.status();
{
"set" : "rpl1",
"date" :
ISODate("2013-08-06T09:18:39Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" :
"127.0.0.1:27027",
"health" : 1,
"state" : 1,
"stateStr" :
"PRIMARY",
"optime" : {
"t" :
1375780672000,
"i" : 1
},
"optimeDate" :
ISODate("2013-08-06T09:17:52Z"),
"self" : true
},
{
"_id" : 1,
"name" :
"127.0.0.1:27028",
"health" : 1,
"state" : 2,
"stateStr" :
"SECONDARY",
"uptime" : 29,
"optime" : {
"t" :
1375780672000,
"i" : 1
},
"optimeDate" :
ISODate("2013-08-06T09:17:52Z"),
"lastHeartbeat" :
ISODate("2013-08-06T09:18:38Z"),
"pingMs" : 0
},
{
"_id" : 3,
"name" :
"127.0.0.1:27029",
"health" : 1,
"state" : 7,
"stateStr" :
"ARBITER",
"uptime" : 31,
"optime" : {
"t" : 0,
"i" : 0
},
"optimeDate" :
ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" :
ISODate("2013-08-06T09:18:38Z"),
"pingMs" : 0
}
],
"ok" : 1
}
4.2 the second set
Do the same as in 4.1 for the members on ports 27037, 27038 and 27039, this time with set name rpl2.
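For the second set, connect to one of its members (port 27037 is assumed here) and run the same initiate and status commands:
/db/mongodb/bin/mongo --port 27037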
config = {_id: 'rpl2', members: [
{_id: 0, host: '127.0.0.1:27037'},
{_id: 1, host: '127.0.0.1:27038'},
{_id: 3, host: '127.0.0.1:27039', arbiterOnly: true}
]};
rs.initiate(config);
rs.status();
5 Add Shards to the Cluster
A shard can be a standalone mongod or a replica set. In a production environment, each shard should be a replica set.
5.1 From a mongo shell, connect to the mongos instance. Issue a command using the following syntax:
[root@472322 mongodb]# /db/mongodb/bin/mongo --host 20.10x.91.119 --port
27017
MongoDB shell version: 2.0.1
connecting to: 20.10x.91.119:27017/test
mongos>
5.2 add the replica set rpl1 as a shard
sh.addShard( "rpl1/127.0.0.1:27027" );
mongos> sh.addShard( "rpl1/127.0.0.1:27027" );
command failed: {
"ok" : 0,
"errmsg" : "can't use localhost as a shard
since all shards need to communicate. either use all shards and configdbs in
localhost or all in actual IPs host: 127.0.0.1:27027 isLocalHost:1"
}
Tue Aug 6 09:41:16 uncaught exception: error { "$err" :
"can't find a shard to put new db on", "code" : 10185 }
mongos>
We can't use localhost or 127.0.0.1 here, so switch to the real IP address, 20.10x.91.119.
mongos> sh.addShard( "rpl1/20.10x.91.119:27027" );
command failed: {
"ok" : 0,
"errmsg" : "in seed list
rpl1/20.10x.91.119:27027, host 20.10x.91.119:27027 does not belong to replica
set rpl1"
}
Tue Aug 6 09:42:26 uncaught exception: error { "$err" :
"can't find a shard to put new db on", "code" : 10185 }
mongos>
sh.addShard(
"rpl1/20.10x.91.119:27027,20.10x.91.119:27028,20.10x.91.119:27029" );
So the replica set was configured with 127.0.0.1 member addresses, and 20.10x.91.119 is not recognized as a member of the set. That configuration is wrong; I have to configure the sets again with the real IP addresses.
5.2.1 delete all the db files, restart the mongod instances, then configure the replica sets again
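A hedged sketch of that cleanup (the rpl2 data directories are the ones created earlier; the rpl1 data directories were not shown above, so clear those the same way): stop each replica-set member, wipe its data directory, start it again from its config file, then re-run the rs.initiate commands below.
# stop each replica-set member first (e.g. kill its mongod PID, or run db.shutdownServer() against its admin database)
rm -rf /opt/db/mongodb-data/db37/* /opt/db/mongodb-data/db38/* /opt/db/mongodb-data/db39/*
/db/mongodb/bin/mongod -f /etc/mongodb/27037.conf
/db/mongodb/bin/mongod -f /etc/mongodb/27038.conf
/db/mongodb/bin/mongod -f /etc/mongodb/27039.conf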
config = {_id: 'rpl1', members: [
{_id: 0, host: '20.10x.91.119:27027'},
{_id: 1, host: '20.10x.91.119:27028'},
{_id: 3, host: '20.10x.91.119:27029', arbiterOnly: true}
]};
rs.initiate(config);
rs.status();
config = {_id: 'rpl2', members: [
{_id: 0, host: '20.10x.91.119:27037'},
{_id: 1, host: '20.10x.91.119:27038'},
{_id: 3, host: '20.10x.91.119:27039', arbiterOnly: true}
]};
rs.initiate(config);
rs.status();
5.2.2 add the replica sets as shards again; this time it works.
[root@472322 ~]# /db/mongodb/bin/mongo --host 20.10x.91.119 --port
27017
MongoDB shell version: 2.0.1
connecting to: 20.10x.91.119:27017/test
mongos> sh.addShard( "rpl1/20.10x.91.119:27027,20.10x.91.119:27028,20.10x.91.119:27029"
);
mongos>
6 Enable Sharding for a Database
Before you can shard a collection, you must enable sharding for the collection's database. Enabling sharding for a database does not redistribute data, but makes it possible to shard the collections in that database.
6.1 From a mongo shell, connect to the mongos instance. Issue a command using the following syntax:
sh.enableSharding("<database>")
6.2 Optionally, you can enable sharding for a database using the enableSharding command, which uses the following syntax:
db.runCommand( { enableSharding: "<database>" } )
7 Convert a Replica Set to a Replicated Sharded Cluster
Overview: the procedure, from a high level, is as follows:
7.1 Create or select a 3-member replica set and insert some data into a collection. (done)
7.2 Start the config databases and create a cluster with a single shard. (done)
7.3 Create a second replica set with three new mongod instances. (done)
7.4 Add the second replica set as a shard in the cluster. (not done yet)
7.5 Enable sharding on the desired collection or collections. (not done yet)
Now for the remaining steps:
7.2 Add the second replica set as a shard in the cluster.
sh.addShard( "rpl2/20.10x.91.119:27037,20.10x.91.119:27038,20.10x.91.119:27039" );
It succeeds.
7.3 Verify that both shards are
properly configured by running the listShards command.
mongos> use admin
switched to db admin
mongos> db.runCommand({listShards:1})
{
"shards" : [
{
"_id" : "rpl1",
"host" :
"rpl1/20.10x.91.119:27027,20.10x.91.119:27028,20.10x.91.119:27029"
},
{
"_id" : "rpl2",
"host" :
"rpl2/20.10x.91.119:27037,20.10x.91.119:27038,20.10x.91.119:27039"
}
],
"ok" : 1
}
mongos>
7.4 Insert 1,000,000 documents into test.tickets and shard the tickets collection.
mongos> db.runCommand( { enableSharding : "test" } );
{ "ok" : 1 }
mongos> use admin
mongos> db.runCommand( { shardCollection : "test.tickets",
key : {"number":1} });
{
"proposedKey" : {
"number" : 1
},
"curIndexes" : [
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" :
"test.tickets",
"name" : "_id_"
}
],
"ok" : 0,
"errmsg" : "please create an index over the
sharding key before sharding."
}
mongos>
To shard on a key, an index on that key must exist first; create an index on ip and shard on that key instead:
db.tickets.ensureIndex({ip:1});
db.runCommand( { shardCollection : "test.tickets", key :
{"ip":1} })
mongos> db.runCommand( { shardCollection : "test.tickets",
key : {"ip":1} })
{ "collectionsharded" : "test.tickets",
"ok" : 1 }
mongos>
The collection tickets is now sharded!
7.5 check in mongos
use test;
db.stats();
db.printShardingStatus();
There are no data chunks on rpl2, so it looks like the shardCollection didn't work as expected.
7.6 I decided to re-add the second shard: first remove it, then add it again.
7.6.1 Use the listShards command, as in the
following:
db.adminCommand( { listShards: 1 } );
to get the shard name.
7.6.2 Remove Chunks from the Shard
db.runCommand( { removeShard: "rpl2" } );
mongos> use admin
switched to db admin
mongos> db.runCommand( { removeShard: "rpl2" } );
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "rpl2",
"ok" : 1
}
mongos>
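A hedged note not shown in this transcript: removeShard only starts an asynchronous drain. Re-running the same command reports the remaining chunks, and the shard is only gone once the returned state reaches "completed"; if the shard is the primary shard of a database, that database also has to be moved with movePrimary before the final removal.
mongos> use admin
mongos> db.runCommand( { removeShard: "rpl2" } )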
check
db.adminCommand( { listShards: 1 } );
It looks fine. But when I insert data into tickets directly on the mongod at 27027 (the first shard's primary), the data never migrates to the second shard.
Why? I asked HYG, and he explained that data must be inserted through the mongos shell; writes made directly against a shard's mongod bypass the sharding layer, so mongos never sees them, chunks are not split, and nothing gets migrated. So I will try inserting through mongos next.
7.7 insert data in mongos command window
db.tc2.ensureIndex({xx:1});
db.runCommand( { shardCollection : "test.tc2", key :
{"xx":1} })
for( var i = 1; i < 2000000; i++ ) db.tc2.insert({ "xx":i,
"fapp" : "84eb9fb556074d6481e31915ac2427f0",
"dne" :
"ueeDEhIB6tmP4cfY43NwWvAenzKWx19znmbheAuBl4j39U8uFXS1QGi2GCMHO7L21szgeF6Iquqmnw8kfJbvZUs/11RyxcoRm+otbUJyPPxFkevzv4SrI3kGxczG6Lsd19NBpyskaElCTtVKxwvQyBNgciXYq6cO/8ntV2C6cwQ=",
"eml" : "q5x68h3qyVBqp3ollJrY3XEkXECjPEncXhbJjga+3hYoa4zYNhrNmBN91meL3o7jsBI/N6qe2bb2BOnOJNAnBMDzhNmPqJKG/ZVLfT9jpNkUD/pQ4oJENMv72L2GZoiyym2IFT+oT3N0KFhcv08b9ke9tm2EHTGcBsGg1R40Ah+Y/5z89OI4ERmI/48qjvaw",
"uid" : "sQt92NUPr3CpCVnbpotU2lqRNVfZD6k/9TGW62UT7ExZYF8Dp1cWIVQoYNQVyFRLkxjmCoa8m6DiLiL/fPdG1k7WYGUH4ueXXK2yfVn/AGUk3pQbIuh7nFbqZCrAQtEY7gU0aIGC4sotAE8kghvCa5qWnSX0SWTViAE/esaWORo=",
"agt" : "PHP/eaSSOPlugin", "sd" :
"S15345853557133877", "pt" : 3795, "ip"
: "211.223.160.34", "av" : "http://secure.download.dm.origin.com/production/avatar/prod/1/599/40x40.JPEG",
"nex" : ISODate("2013-01-18T04:00:32.41Z"),
"exy" : ISODate("2013-01-18T01:16:32.015Z"),
"chk" : "sso4648609868740971", "aid" :
"Ciyvab0tregdVsBtboIpeChe4G6uzC1v5_-SIxmvSLLINJClUkXJhNvWkSUnajzi8xsv1DGYm0D3V46LLFI-61TVro9-HrZwyRwyTf9NwYIyentrgAY_qhs8fh7unyfB",
"tid" : "rUqFhONysi0yA==13583853927872",
"date" : ISODate("2012-02-17T01:16:32.787Z"),
"v" : "2.0.0", "scope" : [],
"rug" : false, "schk" : true, "fjs" :
false, "sites" : [{
"name" : "Origin.com",
"id" : "nb09xrt8384147bba27c2f12e112o8k9",
"last" :
ISODate("2013-01-17T01:16:32.787Z"),
"_id" :
ObjectId("50f750f06a56028661000f20") }]});
Watch the sharding status with the db.printShardingStatus() command:
mongos> db.printShardingStatus();
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "rpl1",
"host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
{ "_id" : "rpl2",
"host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
databases:
{ "_id" : "admin",
"partitioned" : false, "primary" : "config"
}
{ "_id" : "test",
"partitioned" : true, "primary" : "rpl1" }
test.tc chunks:
rpl1 1
{ "ip" : { $minKey : 1 } }
-->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" :
1000, "i" : 0 }
test.tc2 chunks:
rpl1 62
rpl2 61
too many chunks to
print, use verbose if you want to force print
test.tickets chunks:
rpl1 1
{ "ip" : { $minKey : 1 } }
-->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" :
1000, "i" : 0 }
{ "_id" : "adin",
"partitioned" : false, "primary" : "rpl1" }
mongos>
To see the detail behind "too many chunks to print, use verbose if you want to force print", pass a verbose argument:
db.printShardingStatus("vvvv");
mongos> db.printShardingStatus("vvvv")
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "rpl1",
"host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
{ "_id" : "rpl2",
"host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
databases:
{ "_id" : "admin",
"partitioned" : false, "primary" : "config"
}
{ "_id" : "test",
"partitioned" : true, "primary" : "rpl1" }
test.tc chunks:
rpl1 1
{ "ip" : { $minKey : 1 } }
-->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" :
1000, "i" : 0 }
test.tc2 chunks:
rpl1 62
rpl2 61
{ "xx" : { $minKey : 1 } }
-->> { "xx" : 1 } on : rpl1 { "t" : 59000,
"i" : 1 }
{ "xx" : 1 } -->> {
"xx" : 2291 } on : rpl1 { "t" : 2000, "i" : 2 }
{ "xx" : 2291 } -->>
{ "xx" : 4582 } on : rpl1 { "t" : 2000, "i" : 4 }
{ "xx" : 4582 } -->>
{ "xx" : 6377 } on : rpl1 { "t" : 2000, "i" : 6 }
{ "xx" : 6377 } -->>
{ "xx" : 8095 } on : rpl1 { "t" : 2000, "i" : 8 }
{ "xx" : 8095 } -->>
{ "xx" : 9813 } on : rpl1 { "t" : 2000, "i" : 10
}
{ "xx" : 9813 } -->>
{ "xx" : 11919 } on : rpl1 { "t" : 2000, "i" : 12
}
{ "xx" : 11919 } -->>
{ "xx" : 14210 } on : rpl1 { "t" : 2000, "i" : 14
}
{ "xx" : 14210 } -->>
{ "xx" : 18563 } on : rpl1 { "t" : 2000, "i" : 16
}
{ "xx" : 18563 } -->>
{ "xx" : 23146 } on : rpl1 { "t" : 2000, "i" : 18
}
{ "xx" : 23146 } -->>
{ "xx" : 27187 } on : rpl1 { "t" : 2000, "i" : 20
}
{ "xx" : 27187 } -->>
{ "xx" : 31770 } on : rpl1 { "t" : 2000, "i" : 22
}
{ "xx" : 31770 } -->>
{ "xx" : 35246 } on : rpl1 { "t" : 2000, "i" : 24
}
{ "xx" : 35246 } -->>
{ "xx" : 38683 } on : rpl1 { "t" : 2000, "i" : 26
}
{ "xx" : 38683 } -->>
{ "xx" : 42120 } on : rpl1 { "t" : 2000, "i" : 28
}
{ "xx" : 42120 } -->>
{ "xx" : 45557 } on : rpl1 { "t" : 2000, "i" : 30
}
{ "xx" : 45557 } -->>
{ "xx" : 48994 } on : rpl1 { "t" : 2000, "i" : 32
}
{ "xx" : 48994 } -->>
{ "xx" : 53242 } on : rpl1 { "t" : 2000, "i" : 34
}
{ "xx" : 53242 } -->>
{ "xx" : 62409 } on : rpl1 { "t" : 2000, "i" : 36
}
{ "xx" : 62409 } -->>
{ "xx" : 71576 } on : rpl1 { "t" : 2000, "i" : 38
}
{ "xx" : 71576 } -->>
{ "xx" : 80743 } on : rpl1 { "t" : 2000, "i" : 40
}
{ "xx" : 80743 } -->>
{ "xx" : 89910 } on : rpl1 { "t" : 2000, "i" : 42
}
{ "xx" : 89910 } -->>
{ "xx" : 99077 } on : rpl1 { "t" : 2000, "i" : 44
}
{ "xx" : 99077 } -->>
{ "xx" : 119382 } on : rpl1 { "t" : 2000, "i" :
46 }
{ "xx" : 119382 } -->>
{ "xx" : 133133 } on : rpl1 { "t" : 2000, "i" :
48 }
{ "xx" : 133133 }
-->> { "xx" : 146884 } on : rpl1 { "t" : 2000,
"i" : 50 }
{ "xx" : 146884 }
-->> { "xx" : 160635 } on : rpl1 { "t" : 2000,
"i" : 52 }
{ "xx" : 160635 }
-->> { "xx" : 178457 } on : rpl1 { "t" : 2000,
"i" : 54 }
{ "xx" : 178457 }
-->> { "xx" : 200508 } on : rpl1 { "t" : 2000,
"i" : 56 }
{ "xx" : 200508 }
-->> { "xx" : 214259 } on : rpl1 { "t" : 2000,
"i" : 58 }
{ "xx" : 214259 }
-->> { "xx" : 228010 } on : rpl1 { "t" : 2000,
"i" : 60 }
{ "xx" : 228010 }
-->> { "xx" : 241761 } on : rpl1 { "t" : 2000,
"i" : 62 }
{ "xx" : 241761 }
-->> { "xx" : 255512 } on : rpl1 { "t" : 2000,
"i" : 64 }
{ "xx" : 255512 }
-->> { "xx" : 279630 } on : rpl1 { "t" : 2000,
"i" : 66 }
{ "xx" : 279630 }
-->> { "xx" : 301723 } on : rpl1 { "t" : 2000,
"i" : 68 }
{ "xx" : 301723 }
-->> { "xx" : 317196 } on : rpl1 { "t" : 2000,
"i" : 70 }
{ "xx" : 317196 }
-->> { "xx" : 336533 } on : rpl1 { "t" : 2000,
"i" : 72 }
{ "xx" : 336533 }
-->> { "xx" : 359500 } on : rpl1 { "t" : 2000,
"i" : 74 }
{ "xx" : 359500 }
-->> { "xx" : 385354 } on : rpl1 { "t" : 2000,
"i" : 76 }
{ "xx" : 385354 }
-->> { "xx" : 400837 } on : rpl1 { "t" : 2000,
"i" : 78 }
{ "xx" : 400837 }
-->> { "xx" : 422259 } on : rpl1 { "t" : 2000,
"i" : 80 }
{ "xx" : 422259 }
-->> { "xx" : 444847 } on : rpl1 { "t" : 2000,
"i" : 82 }
{ "xx" : 444847 }
-->> { "xx" : 472084 } on : rpl1 { "t" : 2000,
"i" : 84 }
{ "xx" : 472084 }
-->> { "xx" : 490796 } on : rpl1 { "t" : 2000,
"i" : 86 }
{ "xx" : 490796 }
-->> { "xx" : 509498 } on : rpl1 { "t" : 2000,
"i" : 88 }
{ "xx" : 509498 }
-->> { "xx" : 534670 } on : rpl1 { "t" : 2000,
"i" : 90 }
{ "xx" : 534670 }
-->> { "xx" : 561927 } on : rpl1 { "t" : 2000,
"i" : 92 }
{ "xx" : 561927 }
-->> { "xx" : 586650 } on : rpl1 { "t" : 2000,
"i" : 94 }
{ "xx" : 586650 }
-->> { "xx" : 606316 } on : rpl1 { "t" : 2000,
"i" : 96 }
{ "xx" : 606316 }
-->> { "xx" : 632292 } on : rpl1 { "t" : 2000,
"i" : 98 }
{ "xx" : 632292 }
-->> { "xx" : 650179 } on : rpl1 { "t" : 2000,
"i" : 100 }
{ "xx" : 650179 }
-->> { "xx" : 670483 } on : rpl1 { "t" : 2000,
"i" : 102 }
{ "xx" : 670483 }
-->> { "xx" : 695898 } on : rpl1 { "t" : 2000,
"i" : 104 }
{ "xx" : 695898 }
-->> { "xx" : 719853 } on : rpl1 { "t" : 2000,
"i" : 106 }
{ "xx" : 719853 }
-->> { "xx" : 734751 } on : rpl1 { "t" : 2000,
"i" : 108 }
{ "xx" : 734751 }
-->> { "xx" : 757143 } on : rpl1 { "t" : 2000,
"i" : 110 }
{ "xx" : 757143 }
-->> { "xx" : 773800 } on : rpl1 { "t" : 2000,
"i" : 112 }
{ "xx" : 773800 }
-->> { "xx" : 796919 } on : rpl1 { "t" : 2000,
"i" : 114 }
{ "xx" : 796919 }
-->> { "xx" : 814262 } on : rpl1 { "t" : 2000,
"i" : 116 }
{ "xx" : 814262 }
-->> { "xx" : 837215 } on : rpl1 { "t" : 2000,
"i" : 118 }
{ "xx" : 837215 }
-->> { "xx" : 855766 } on : rpl1 { "t" : 2000,
"i" : 120 }
{ "xx" : 855766 }
-->> { "xx" : 869517 } on : rpl1 { "t" : 2000,
"i" : 122 }
{ "xx" : 869517 }
-->> { "xx" : 883268 } on : rpl2 { "t" : 59000,
"i" : 0 }
{ "xx" : 883268 }
-->> { "xx" : 897019 } on : rpl2 { "t" : 58000,
"i" : 0 }
{ "xx" : 897019 }
-->> { "xx" : 919595 } on : rpl2 { "t" : 57000,
"i" : 0 }
{ "xx" : 919595 }
-->> { "xx" : 946611 } on : rpl2 { "t" : 56000,
"i" : 0 }
{ "xx" : 946611 }
-->> { "xx" : 966850 } on : rpl2 { "t" : 55000,
"i" : 0 }
{ "xx" : 966850 }
-->> { "xx" : 989291 } on : rpl2 { "t" : 54000,
"i" : 0 }
{ "xx" : 989291 }
-->> { "xx" : 1008580 } on : rpl2 { "t" : 53000,
"i" : 0 }
{ "xx" : 1008580 }
-->> { "xx" : 1022331 } on : rpl2 { "t" : 52000,
"i" : 0 }
{ "xx" : 1022331 }
-->> { "xx" : 1036082 } on : rpl2 { "t" : 51000,
"i" : 0 }
{ "xx" : 1036082 }
-->> { "xx" : 1060888 } on : rpl2 { "t" : 50000,
"i" : 0 }
{ "xx" : 1060888 }
-->> { "xx" : 1088121 } on : rpl2 { "t" : 49000,
"i" : 0 }
{ "xx" : 1088121 }
-->> { "xx" : 1101872 } on : rpl2 { "t" : 48000,
"i" : 0 }
{ "xx" : 1101872 }
-->> { "xx" : 1122160 } on : rpl2 { "t" : 47000,
"i" : 0 }
{ "xx" : 1122160 }
-->> { "xx" : 1143537 } on : rpl2 { "t" : 46000,
"i" : 0 }
{ "xx" : 1143537 }
-->> { "xx" : 1168372 } on : rpl2 { "t" : 45000,
"i" : 0 }
{ "xx" : 1168372 }
-->> { "xx" : 1182123 } on : rpl2 { "t" : 44000,
"i" : 0 }
{ "xx" : 1182123 }
-->> { "xx" : 1201952 } on : rpl2 { "t" : 43000,
"i" : 0 }
{ "xx" : 1201952 }
-->> { "xx" : 1219149 } on : rpl2 { "t" : 42000,
"i" : 0 }
{ "xx" : 1219149 }
-->> { "xx" : 1232900 } on : rpl2 { "t" : 41000,
"i" : 0 }
{ "xx" : 1232900 }
-->> { "xx" : 1247184 } on : rpl2 { "t" : 40000,
"i" : 0 }
{ "xx" : 1247184 }
-->> { "xx" : 1270801 } on : rpl2 { "t" : 39000,
"i" : 0 }
{ "xx" : 1270801 }
-->> { "xx" : 1294343 } on : rpl2 { "t" : 38000,
"i" : 0 }
{ "xx" : 1294343 }
-->> { "xx" : 1313250 } on : rpl2 { "t" : 37000,
"i" : 0 }
{ "xx" : 1313250 }
-->> { "xx" : 1336332 } on : rpl2 { "t" : 36000,
"i" : 0 }
{ "xx" : 1336332 }
-->> { "xx" : 1358840 } on : rpl2 { "t" : 35000,
"i" : 0 }
{ "xx" : 1358840 }
-->> { "xx" : 1372591 } on : rpl2 { "t" : 34000,
"i" : 0 }
{ "xx" : 1372591 }
-->> { "xx" : 1386342 } on : rpl2 { "t" : 33000,
"i" : 0 }
{ "xx" : 1386342 }
-->> { "xx" : 1400093 } on : rpl2 { "t" : 32000,
"i" : 0 }
{ "xx" : 1400093 }
-->> { "xx" : 1418372 } on : rpl2 { "t" : 31000,
"i" : 0 }
{ "xx" : 1418372 }
-->> { "xx" : 1440590 } on : rpl2 { "t" : 30000,
"i" : 0 }
{ "xx" : 1440590 }
-->> { "xx" : 1461034 } on : rpl2 { "t" : 29000,
"i" : 0 }
{ "xx" : 1461034 }
-->> { "xx" : 1488305 } on : rpl2 { "t" : 28000,
"i" : 0 }
{ "xx" : 1488305 }
-->> { "xx" : 1510326 } on : rpl2 { "t" : 27000,
"i" : 0 }
{ "xx" : 1510326 }
-->> { "xx" : 1531986 } on : rpl2 { "t" : 26000,
"i" : 0 }
{ "xx" : 1531986 }
-->> { "xx" : 1545737 } on : rpl2 { "t" : 25000,
"i" : 0 }
{ "xx" : 1545737 }
-->> { "xx" : 1559488 } on : rpl2 { "t" : 24000,
"i" : 0 }
{ "xx" : 1559488 }
-->> { "xx" : 1576755 } on : rpl2 { "t" : 23000,
"i" : 0 }
{ "xx" : 1576755 }
-->> { "xx" : 1596977 } on : rpl2 { "t" : 22000,
"i" : 0 }
{ "xx" : 1596977 }
-->> { "xx" : 1619863 } on : rpl2 { "t" : 21000,
"i" : 0 }
{ "xx" : 1619863 }
-->> { "xx" : 1633614 } on : rpl2 { "t" : 20000,
"i" : 0 }
{ "xx" : 1633614 }
-->> { "xx" : 1647365 } on : rpl2 { "t" : 19000,
"i" : 0 }
{ "xx" : 1647365 }
-->> { "xx" : 1668372 } on : rpl2 { "t" : 18000,
"i" : 0 }
{ "xx" : 1668372 }
-->> { "xx" : 1682123 } on : rpl2 { "t" : 17000,
"i" : 0 }
{ "xx" : 1682123 }
-->> { "xx" : 1695874 } on : rpl2 { "t" : 16000,
"i" : 0 }
{ "xx" : 1695874 }
-->> { "xx" : 1711478 } on : rpl2 { "t" : 15000,
"i" : 0 }
{ "xx" : 1711478 }
-->> { "xx" : 1738684 } on : rpl2 { "t" : 14000,
"i" : 0 }
{ "xx" : 1738684 }
-->> { "xx" : 1758083 } on : rpl2 { "t" : 13000,
"i" : 0 }
{ "xx" : 1758083 }
-->> { "xx" : 1773229 } on : rpl2 { "t" : 12000,
"i" : 0 }
{ "xx" : 1773229 }
-->> { "xx" : 1794767 } on : rpl2 { "t" : 11000,
"i" : 0 }
{ "xx" : 1794767 }
-->> { "xx" : 1808518 } on : rpl2 { "t" : 10000,
"i" : 0 }
{ "xx" : 1808518 }
-->> { "xx" : 1834892 } on : rpl2 { "t" : 9000,
"i" : 0 }
{ "xx" : 1834892 }
-->> { "xx" : 1848643 } on : rpl2 { "t" : 8000,
"i" : 0 }
{ "xx" : 1848643 }
-->> { "xx" : 1873297 } on : rpl2 { "t" : 7000,
"i" : 0 }
{ "xx" : 1873297 }
-->> { "xx" : 1887048 } on : rpl2 { "t" : 6000,
"i" : 2 }
{ "xx" : 1887048 }
-->> { "xx" : 1911702 } on : rpl2 { "t" : 6000,
"i" : 3 }
{ "xx" : 1911702 }
-->> { "xx" : 1918372 } on : rpl2 { "t" : 5000,
"i" : 0 }
{ "xx" : 1918372 }
-->> { "xx" : 1932123 } on : rpl2 { "t" : 7000,
"i" : 2 }
{ "xx" : 1932123 }
-->> { "xx" : 1959185 } on : rpl2 { "t" : 7000,
"i" : 3 }
{ "xx" : 1959185 }
-->> { "xx" : 1972936 } on : rpl2 { "t" : 9000,
"i" : 2 }
{ "xx" : 1972936 }
-->> { "xx" : 1999999 } on : rpl2 { "t" : 9000,
"i" : 3 }
{ "xx" : 1999999 }
-->> { "xx" : { $maxKey : 1 } } on : rpl2 { "t" :
2000, "i" : 0 }
test.tickets chunks:
rpl1 1
{ "ip" : { $minKey : 1 } }
-->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" :
1000, "i" : 0 }
{ "_id" : "adin",
"partitioned" : false, "primary" : "rpl1" }
mongos>
7.7.2 verify that we are talking to a sharded cluster
[root@472322 ~]# /db/mongodb/bin/mongo --port 27027
MongoDB shell version: 2.0.1
connecting to: 127.0.0.1:27027/test
PRIMARY> db.runCommand({ isdbgrid:1 });
{
"errmsg" : "no such cmd: isdbgrid",
"bad cmd" : {
"isdbgrid" : 1
},
"ok" : 0
}
PRIMARY>
There is an error message because we ran it in the wrong shell; it should be run in the mongos shell.
[root@472322 ~]# /db/mongodb/bin/mongo --port 27017
MongoDB shell version: 2.0.1
connecting to: 127.0.0.1:27017/test
mongos> db.runCommand({ isdbgrid:1 });
{ "isdbgrid" : 1, "hostname" :
"472322.ea.com", "ok" : 1 }
mongos>
ok, "isdbgrid" : 1 means
it is a sharding.
7.8 Shard an existing unsharded collection
7.8.1 Prepare a collection in the mongo shell.
for( var i = 1; i < 2000000; i++ ) db.tc3.insert({
"xx":i, "fapp" :
"84eb9fb556074d6481e31915ac2427f0", "dne" :
"ueeDEhIB6tmP4cfY43NwWvAenzKWx19znmbheAuBl4j39U8uFXS1QGi2GCMHO7L21szgeF6Iquqmnw8kfJbvZUs/11RyxcoRm+otbUJyPPxFkevzv4SrI3kGxczG6Lsd19NBpyskaElCTtVKxwvQyBNgciXYq6cO/8ntV2C6cwQ=",
"eml" :
"q5x68h3qyVBqp3ollJrY3XEkXECjPEncXhbJjga+3hYoa4zYNhrNmBN91meL3o7jsBI/N6qe2bb2BOnOJNAnBMDzhNmPqJKG/ZVLfT9jpNkUD/pQ4oJENMv72L2GZoiyym2IFT+oT3N0KFhcv08b9ke9tm2EHTGcBsGg1R40Ah+Y/5z89OI4ERmI/48qjvaw",
"uid" :
"sQt92NUPr3CpCVnbpotU2lqRNVfZD6k/9TGW62UT7ExZYF8Dp1cWIVQoYNQVyFRLkxjmCoa8m6DiLiL/fPdG1k7WYGUH4ueXXK2yfVn/AGUk3pQbIuh7nFbqZCrAQtEY7gU0aIGC4sotAE8kghvCa5qWnSX0SWTViAE/esaWORo=",
"agt" : "PHP/eaSSOPlugin", "sd" :
"S15345853557133877", "pt" : 3795,
"ip" : "211.223.160.34", "av" : "http://secure.download.dm.origin.com/production/avatar/prod/1/599/40x40.JPEG",
"nex" : ISODate("2013-01-18T04:00:32.41Z"),
"exy" : ISODate("2013-01-18T01:16:32.015Z"),
"chk" : "sso4648609868740971", "aid" :
"Ciyvab0tregdVsBtboIpeChe4G6uzC1v5_-SIxmvSLLINJClUkXJhNvWkSUnajzi8xsv1DGYm0D3V46LLFI-61TVro9-HrZwyRwyTf9NwYIyentrgAY_qhs8fh7unyfB",
"tid" : "rUqFhONysi0yA==13583853927872",
"date" : ISODate("2012-02-17T01:16:32.787Z"),
"v" : "2.0.0", "scope" : [],
"rug" : false, "schk" : true, "fjs" :
false, "sites" : [{
"name" : "Origin.com",
"id" :
"nb09xrt8384147bba27c2f12e112o8k9",
"last" : ISODate("2013-01-17T01:16:32.787Z"),
"_id" :
ObjectId("50f750f06a56028661000f20") }]});
db.tc3.ensureIndex({xx:1});
See the collection's sharding status:
db.tc3.stats();
mongos> db.tc3.stats();
{
"sharded" : false,
"primary" : "rpl1",
"ns" : "test.tc3",
"count" : 1999999,
"size" : 2439998940,
"avgObjSize" : 1220.00008000004,
"storageSize" : 2480209920,
"numExtents" : 29,
"nindexes" : 2,
"lastExtentSize" : 417480704,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" : 115232544,
"indexSizes" : {
"_id_" : 64909264,
"xx_1" : 50323280
},
"ok" : 1
}
mongos>
7.8.2 Enable sharding: try to shard the collection using a hashed shard key on "xx":
sh.shardCollection( "test.tc3", { xx: "hashed" } ) or db.runCommand( { shardCollection : "test.tc3", key : {"xx":1} })
mongos> sh.shardCollection( "test.tc3", { xx:
"hashed" } ) ;
command failed: { "ok" : 0, "errmsg" : "shard
keys must all be ascending" }
shard keys must all be ascending
mongos>
Why did it fail? I checked the MongoDB documentation and found: "New in version 2.4: Use the form {field: "hashed"} to create a hashed shard key. Hashed shard keys may not be compound indexes." The current version here is 2.0.1, so I have to use the other (ascending-key) form instead.
mongos> db.runCommand( { shardCollection :
"test.tc3", key : {"xx":1} })
{ "ok" : 0, "errmsg" : "access denied - use
admin db" }
mongos> use admin
switched to db admin
mongos> db.runCommand( { shardCollection : "test.tc3", key :
{"xx":1} })
{ "collectionsharded" : "test.tc3", "ok" :
1 }
mongos>
Nice, sharding was enabled successfully.
7.8.3 Check the sharding status again
mongos> use test;
switched to db test
mongos> db.tc3.stats();
{
"sharded" : true, -- nice, it is a sharding
now
"flags" : 1,
"ns" : "test.tc3",
"count" : 2005360,
"numExtents" : 48,
"size" : 2446539448,
"storageSize" : 2979864576,
"totalIndexSize" : 116573408,
"indexSizes" : {
"_id_" : 65105488,
"xx_1" : 51467920
},
"avgObjSize" : 1220.0001236685682,
"nindexes" : 2,
"nchunks" : 73,
"shards" : {
"rpl1" : {
"ns" :
"test.tc3",
"count" : 1647809,
"size" : 2010326980,
"avgObjSize" : 1220,
"storageSize" : 2480209920,
"numExtents" : 29,
"nindexes" : 2,
"lastExtentSize" :
417480704,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" :
94956064,
"indexSizes" : {
"_id_" :
53487392,
"xx_1" :
41468672
},
"ok" : 1
},
"rpl2" : {
"ns" :
"test.tc3",
"count" : 357551,
"size" : 436212468,
"avgObjSize" :
1220.0006936073455,
"storageSize" : 499654656,
"numExtents" : 19,
"nindexes" : 2,
"lastExtentSize" :
89817088,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" :
21617344,
"indexSizes" : {
"_id_" :
11618096,
"xx_1" :
9999248
},
"ok" : 1
}
},
"ok" : 1
}
mongos>
mongos> db.printShardingStatus();
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "rpl1",
"host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
{ "_id" : "rpl2",
"host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
databases:
{ "_id" : "admin",
"partitioned" : false, "primary" : "config"
}
{ "_id" : "test",
"partitioned" : true, "primary" : "rpl1" }
test.tc chunks:
rpl1 1
{ "ip" : { $minKey : 1 } }
-->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000,
"i" : 0 }
test.tc2 chunks:
rpl1 62
rpl2 61
too many chunks to print, use verbose
if you want to force print
test.tc3 chunks:
rpl2 19
rpl1 54
too many chunks to print, use verbose
if you want to force print
-- Note: there are already 19 chunks on rpl2, so the balancer is migrating data to rpl2 now.
test.tickets chunks:
rpl1 1
{ "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
Source: ITPUB blog, http://blog.itpub.net/26230597/viewspace-1098147/ (please credit the source when reposting).