【MongoDB】Sharding: Migrating Chunks Manually
MongoDB data is stored on mongod servers, while mongos routes each write to the right shard according to the shard key; the whole process is completely transparent to the client. Chunk movement is normally decided by the balancer, but if chunks end up unevenly distributed we can also move them by hand:
db.runCommand( { moveChunk : "<database>.<collection>" ,
find : { <query matching a document in the chunk> } ,
to : "<destination shard>" } )
Notes:
moveChunk: the full name of the collection, including the database, e.g. test.yql
find: a query; from the data (that is, the chunk) in the given collection that matches the query, the system works out the source (from) shard automatically
to: the destination shard for the chunk
The command returns as soon as the source shard and the destination shard agree that the destination will take over the specified chunk. Migrating a chunk is a fairly involved process built on two internal communication protocols:
1 Copy the data, including any changes made to it while the copy is in progress
2 Make sure every participant in the migration (the destination shard, the source shard, and the config servers) agrees that the migration has completed!
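Each step of a completed migration is also recorded in the changelog collection of the config database, which gives a simple way to verify that the commit really happened. A quick check through mongos (a sketch; the what field holds values such as "moveChunk.start" and "moveChunk.commit"):
mongos> use config
mongos> db.changelog.find({ what : /moveChunk/ }).sort({ time : -1 }).limit(5)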
For example, let's migrate the part of test.momo with id > 82842 from shard0000 to shard0002:
mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "10.250.7.225:27018" }
      { "_id" : "shard0001", "host" : "10.250.7.249:27019" }
      { "_id" : "shard0002", "host" : "10.250.7.241:27020" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
          test.momo chunks:
              shard0001  3
              shard0002  3
              shard0000  6
          { "id" : { $minKey : 1 } } -->> { "id" : 0 } on : shard0001 { "t" : 2000, "i" : 0 }
          { "id" : 0 } -->> { "id" : 11595 } on : shard0002 { "t" : 3000, "i" : 0 }
          { "id" : 11595 } -->> { "id" : 23191 } on : shard0001 { "t" : 4000, "i" : 0 }
          { "id" : 23191 } -->> { "id" : 31929 } on : shard0002 { "t" : 5000, "i" : 0 }
          { "id" : 31929 } -->> { "id" : 42392 } on : shard0001 { "t" : 6000, "i" : 0 }
          { "id" : 42392 } -->> { "id" : 62952 } on : shard0002 { "t" : 7000, "i" : 0 }
          { "id" : 62952 } -->> { "id" : 82842 } on : shard0000 { "t" : 7000, "i" : 1 }
          { "id" : 82842 } -->> { "id" : 102100 } on : shard0000 { "t" : 1000, "i" : 11 }
          { "id" : 102100 } -->> { "id" : 120602 } on : shard0000 { "t" : 1000, "i" : 13 }
          { "id" : 120602 } -->> { "id" : 287873 } on : shard0000 { "t" : 2000, "i" : 2 }
          { "id" : 287873 } -->> { "id" : 305812 } on : shard0000 { "t" : 2000, "i" : 6 }
          { "id" : 305812 } -->> { "id" : { $maxKey : 1 } } on : shard0000 { "t" : 2000, "i" : 7 }
          test.yql chunks:
              shard0001  2
              shard0000  1
              shard0002  1
          { "_id" : { $minKey : 1 } } -->> { "_id" : ObjectId("4eb298b3adbd9673afee95e3") } on : shard0001 { "t" : 4000, "i" : 0 }
          { "_id" : ObjectId("4eb298b3adbd9673afee95e3") } -->> { "_id" : ObjectId("4eb2a64640643e5bb60072f7") } on : shard0000 { "t" : 4000, "i" : 1 }
          { "_id" : ObjectId("4eb2a64640643e5bb60072f7") } -->> { "_id" : ObjectId("4eb2a65340643e5bb600e084") } on : shard0002 { "t" : 3000, "i" : 1 }
          { "_id" : ObjectId("4eb2a65340643e5bb600e084") } -->> { "_id" : { $maxKey : 1 } } on : shard0001 { "t" : 3000, "i" : 0 }
      { "_id" : "mongos", "partitioned" : false, "primary" : "shard0000" }
Run the migration command:
mongos> db.adminCommand({moveChunk : "test.momo", find : {id:{$gt:82842}}, to : "shard0002"});
{ "millis" : 1474, "ok" : 1 }
Check again:
mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "10.250.7.225:27018" }
      { "_id" : "shard0001", "host" : "10.250.7.249:27019" }
      { "_id" : "shard0002", "host" : "10.250.7.241:27020" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
          test.momo chunks:
              shard0001  4
              shard0000  4
              shard0002  4
          { "id" : { $minKey : 1 } } -->> { "id" : 0 } on : shard0001 { "t" : 2000, "i" : 0 }
          { "id" : 0 } -->> { "id" : 11595 } on : shard0000 { "t" : 11000, "i" : 0 }
          { "id" : 11595 } -->> { "id" : 23191 } on : shard0001 { "t" : 4000, "i" : 0 }
          { "id" : 23191 } -->> { "id" : 31929 } on : shard0002 { "t" : 11000, "i" : 1 }
          { "id" : 31929 } -->> { "id" : 42392 } on : shard0001 { "t" : 6000, "i" : 0 }
          { "id" : 42392 } -->> { "id" : 62952 } on : shard0002 { "t" : 7000, "i" : 0 }
          { "id" : 62952 } -->> { "id" : 82842 } on : shard0001 { "t" : 8000, "i" : 0 }
          { "id" : 82842 } -->> { "id" : 102100 } on : shard0002 { "t" : 9000, "i" : 0 }
          { "id" : 102100 } -->> { "id" : 120602 } on : shard0000 { "t" : 10000, "i" : 1 }
          { "id" : 120602 } -->> { "id" : 287873 } on : shard0000 { "t" : 2000, "i" : 2 }
          { "id" : 287873 } -->> { "id" : 305812 } on : shard0000 { "t" : 2000, "i" : 6 }
          { "id" : 305812 } -->> { "id" : { $maxKey : 1 } } on : shard0002 { "t" : 10000, "i" : 0 }
          test.yql chunks:
              shard0001  2
              shard0000  1
              shard0002  1
          { "_id" : { $minKey : 1 } } -->> { "_id" : ObjectId("4eb298b3adbd9673afee95e3") } on : shard0001 { "t" : 4000, "i" : 0 }
          { "_id" : ObjectId("4eb298b3adbd9673afee95e3") } -->> { "_id" : ObjectId("4eb2a64640643e5bb60072f7") } on : shard0000 { "t" : 4000, "i" : 1 }
          { "_id" : ObjectId("4eb2a64640643e5bb60072f7") } -->> { "_id" : ObjectId("4eb2a65340643e5bb600e084") } on : shard0002 { "t" : 3000, "i" : 1 }
          { "_id" : ObjectId("4eb2a65340643e5bb600e084") } -->> { "_id" : { $maxKey : 1 } } on : shard0001 { "t" : 3000, "i" : 0 }
      { "_id" : "mongos", "partitioned" : false, "primary" : "shard0000" }
mongos>
From the result we can see that not every chunk holding id > 82842 was migrated to shard0002: only [82842, 102100) and [305812, +∞) ended up there, while [102100, 120602), [120602, 287873) and [287873, 305812) stayed on shard0000. This looks puzzling at first, but moveChunk moves exactly one chunk, namely the one the find query resolves to (here [305812, $maxKey), as the log below shows); it does not move every chunk whose data matches the query. The other changes, such as [82842, 102100) landing on shard0002 and [0, 11595) moving back to shard0000, were made by the balancer, which kept running alongside the manual migration.
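If the goal really is to place every chunk in [82842, +∞) on shard0002, each remaining chunk needs its own moveChunk call. A sketch, probing each chunk still on shard0000 with its (inclusive) lower boundary taken from the printShardingStatus output above:
mongos> db.adminCommand({ moveChunk : "test.momo", find : { id : 102100 }, to : "shard0002" })
mongos> db.adminCommand({ moveChunk : "test.momo", find : { id : 120602 }, to : "shard0002" })
mongos> db.adminCommand({ moveChunk : "test.momo", find : { id : 287873 }, to : "shard0002" })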
Log entries:
## The command is issued
Sat Nov 5 16:15:35 [conn1] CMD: movechunk: { moveChunk: "test.momo", find: { id: { $gt: 82842.0 } }, to: "shard0002" }
## The chunk data is migrated
Sat Nov 5 16:15:35 [conn1] moving chunk ns: test.momo moving ( ns:test.momo at: shard0000:10.250.7.225:27018 lastmod: 2|7 min: { id: 305812 } max: { id: MaxKey }) shard0000:10.250.7.225:27018 -> shard0002:10.250.7.241:27020
Sat Nov 5 16:15:36 [Balancer] distributed lock 'balancer/rac4:27017:1320477786:1804289383' acquired, ts : 4eb4f0a818ed672581e262dd
Sat Nov 5 16:15:36 [Balancer] distributed lock 'balancer/rac4:27017:1320477786:1804289383' unlocked.
Sat Nov 5 16:15:37 [conn1] created new distributed lock for test.momo on rac1:28001,rac2:28002,rac3:28003 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Sat Nov 5 16:15:37 [conn1] ChunkManager: time to load chunks for test.momo: 0ms sequenceNumber: 10 version: 10|1
Sat Nov 5 16:15:41 [Balancer] distributed lock 'balancer/rac4:27017:1320477786:1804289383' acquired, ts : 4eb4f0ad18ed672581e262de
Sat Nov 5 16:15:41 [Balancer] chose [shard0002] to [shard0000] { _id: "test.momo-id_0.0", lastmod: Timestamp 3000|0, ns: "test.momo", min: { id: 0.0 }, max: { id: 11595 }, shard: "shard0002" }
Sat Nov 5 16:15:41 [Balancer] moving chunk ns: test.momo moving ( ns:test.momo at: shard0002:10.250.7.241:27020 lastmod: 3|0 min: { id: 0.0 } max: { id: 11595 }) shard0002:10.250.7.241:27020 -> shard0000:10.250.7.225:27018
Sat Nov 5 16:15:43 [Balancer] created new distributed lock for test.momo on rac1:28001,rac2:28002,rac3:28003 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Sat Nov 5 16:15:43 [Balancer] ChunkManager: time to load chunks for test.momo: 0ms sequenceNumber: 11 version: 11|1
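Notice that the balancer stayed active the whole time: the [Balancer] lines above show it choosing to move [0, 11595) from shard0002 back to shard0000 while our manual migration was running. When doing a batch of manual moves it therefore makes sense to pause the balancer first and re-enable it afterwards. A sketch using the classic toggle in the settings collection of the config database (run through mongos):
mongos> use config
mongos> db.settings.update({ _id : "balancer" }, { $set : { stopped : true } }, true)
... perform the manual moveChunk commands ...
mongos> db.settings.update({ _id : "balancer" }, { $set : { stopped : false } }, true)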