Distributed Document Store Databases: the MongoDB Sharded Cluster

Posted by 1874 on 2020-11-12

  In a previous post we covered MongoDB replica sets and how to configure them; for a refresher see https://www.cnblogs.com/qiuhom-1874/p/13953598.html. Today let's talk about MongoDB sharding.

  1. What is sharding? Why shard?

  We know that a database server's bottleneck usually appears in disk I/O, in high-concurrency network I/O, or in a single server's CPU, memory, and so on. To get past these bottlenecks we have to scale the server, and there are two ways to do that: scaling up and scaling out. Scaling up means giving the server bigger disks, more and better memory, and a better CPU; this relieves the bottleneck up to a point, but as the data keeps growing the bottleneck reappears, so scaling up is generally not recommended. Scaling out means adding a second server when one is not enough, then a third when two are not enough; whenever a bottleneck appears, we resolve it by adding servers.

  That takes care of server capacity, but how do we spread users' reads and writes across multiple servers? We also have to split the data into pieces so that each server stores only part of the whole dataset: a large dataset is sliced into multiple parts stored across multiple servers. This is sharding, and it effectively removes the bottleneck on user writes. But while it solves server capacity and write performance, it introduces a new problem: queries. With the dataset spread over several servers, how does a user query it? Say a user wants every user older than 30; part of the matching data may sit on server1 and part on server2, so how do we retrieve all of it?

  The scenario is much like the MogileFS architecture we discussed before: to upload an image to MogileFS, the client first writes the image's metadata to the tracker and then stores the data on the corresponding data nodes; to read, the client first asks the tracker, which returns the metadata of the requested file, and the client then fetches the pieces from the corresponding data nodes and assembles the image. MongoDB works very similarly, except that in MogileFS the client has to talk to the backend data nodes itself, whereas in MongoDB the client never does: MongoDB's own client-facing proxy talks to the data nodes on the client's behalf and returns the combined result to the client. That solves the query problem.

  In short, sharding cuts a large dataset into multiple pieces stored across multiple servers, and its purpose is to solve the performance problems caused by an overly large data volume.

  2. Dataset sharding illustrated

  Note: with sharding, a dataset of, say, 1TB can be split evenly into four parts, each node storing 1/4 of the original dataset; work that one server used to do for the whole 1TB is now done by four servers, which effectively speeds up processing. That is the point of a distributed system. In MongoDB, each node that handles part of such a shared dataset is called a shard, and a MongoDB cluster built on this sharding mechanism is called a MongoDB sharded cluster.

  3. MongoDB sharded cluster architecture

  [Figure: MongoDB sharded cluster architecture]

  Note: a MongoDB sharded cluster typically has three kinds of roles. The first is the router, which receives clients' read and write requests and runs the mongos service; for high availability, several nodes usually form a highly available router tier. The second is the config server, which stores the sharded cluster's metadata, much like the tracker in MogileFS; to keep it highly available, the config servers normally run as a replica set. The third is the shard, which actually stores the data, like a MogileFS data node; to keep the data highly available and complete, each shard is normally a replica set as well.

  4. How a MongoDB sharded cluster works

  The user sends a request to the router; the router asks the config server for the metadata matching the request, then fetches the data from the corresponding shards, merges it, and returns the result to the user. In this flow the router acts as MongoDB's client-side proxy, while the config server stores the data's metadata, chiefly which shards hold which data and which data lives on which shards, very much like the tracker in MogileFS: in essence two mappings, one keyed by the data and one keyed by the shard. The sketch below shows one way to peek at this metadata through mongos.
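
  For a concrete look, the metadata mongos consults lives in the config database; config.shards and config.chunks are the real metadata collections. A minimal sketch, run through mongos (output omitted):

mongos> use config
mongos> db.shards.find()                    // one document per shard: the shard-centric view
mongos> db.chunks.find().limit(2).pretty()  // which shard-key ranges (chunks) live on which shard: the data-centric view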

  5. How does MongoDB shard data?

  In a MongoDB sharded cluster, a collection is partitioned on a designated field called the shard key. Depending on the shard key's values and the use case, we can shard on ranges of the shard key's values or on a hash of the shard key. The resulting layout is stored on the config server, which records the dataset held by each shard. With range-based sharding, for instance, the config server records that a contiguous range of shard key values lives on one shard, as the figure below shows.

  The figure above depicts range-based sharding: the shard key space is partitioned from its minimum to its maximum value, the chunk from the minimum up to -75 is stored on the first shard, the chunk from -75 up to 25 on the second, and so on. Range-based sharding easily leaves some shards with far more data than others, i.e. an uneven distribution. So besides sharding on ranges of the shard key's values, we can also shard on a hash of the shard key, as the next figure shows.

  With hash-based sharding, the shard key is hashed and the data block is stored on whichever shard the result maps to. For example, hash the shard key and take the result modulo the number of shards: a remainder of 0 puts the block on the first shard, 1 on the second, and so on. Because hash values are evenly scattered, hash-based sharding effectively reduces data imbalance across shards; the short example below illustrates the hashing step.
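
  To see the hashing step, the mongo shell ships a real helper, convertShardKeyToHashed(), which returns the 64-bit value MongoDB computes for a shard key value; note the modulo above is only an illustration of the bucketing idea, since MongoDB actually maps these hashed values onto chunk ranges. A minimal sketch:

mongos> convertShardKeyToHashed("people1")  // returns a NumberLong; hashed chunks cover contiguous ranges of these values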

  Besides the two methods above, we can also shard by zone, also called list-based sharding, as the figure below shows.

  The figure above depicts zone-based sharding, typically used when the shard key's values form a discrete set rather than an ordered range. For example, sharding on a province field, we could give an especially high-traffic province its own shard, group several low-traffic provinces onto one shard, and put foreign or otherwise non-province values on another; it amounts to classifying the shard key. Whichever method we use, we should keep writes as spread out as possible and reads as localized as possible. A minimal zone-sharding sketch follows.
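
  Zones are configured with the real mongos helpers sh.addShardToZone() and sh.updateZoneKeyRange(); the zone names, the testdb.peoples2 collection, and its province field below are hypothetical, so treat this as a minimal sketch only:

// tag shards with zones (zone names are made up for illustration)
sh.addShardToZone("shard1_replset", "hot-provinces")
sh.addShardToZone("shard2_replset", "other-regions")
// route shard-key ranges to zones; assumes a collection sharded on { province: 1 }
sh.updateZoneKeyRange("testdb.peoples2", { province: MinKey }, { province: "M" }, "hot-provinces")
sh.updateZoneKeyRange("testdb.peoples2", { province: "M" }, { province: MaxKey }, "other-regions")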

  6. Building a MongoDB sharded cluster

  Environment

Hostname               Role                           IP address
node01                 router                         192.168.0.41
node02/node03/node04   config server replication set  192.168.0.42 / 192.168.0.43 / 192.168.0.44
node05/node06/node07   shard1 replication set         192.168.0.45 / 192.168.0.46 / 192.168.0.47
node08/node09/node10   shard2 replication set         192.168.0.48 / 192.168.0.49 / 192.168.0.50

  Base environment: synchronize time on all servers, disable the firewall, disable SELinux, set up SSH mutual trust, and configure hostname resolution.

  Hostname resolution

[root@node01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.99 time.test.org time-node
192.168.0.41 node01.test.org node01
192.168.0.42 node02.test.org node02
192.168.0.43 node03.test.org node03
192.168.0.44 node04.test.org node04
192.168.0.45 node05.test.org node05
192.168.0.46 node06.test.org node06
192.168.0.47 node07.test.org node07
192.168.0.48 node08.test.org node08
192.168.0.49 node09.test.org node09
192.168.0.50 node10.test.org node10
192.168.0.51 node11.test.org node11
192.168.0.52 node12.test.org node12
[root@node01 ~]#

  With the base environment ready, configure the MongoDB yum repository

[root@node01 ~]# cat /etc/yum.repos.d/mongodb.repo
[mongodb-org]
name = MongoDB Repository
baseurl = https://mirrors.aliyun.com/mongodb/yum/redhat/7/mongodb-org/4.4/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-4.4.asc
[root@node01 ~]# 

  Copy the MongoDB yum repo file to the other nodes

[root@node01 ~]# for i in {02..10} ; do scp /etc/yum.repos.d/mongodb.repo node$i:/etc/yum.repos.d/; done
mongodb.repo                                                                  100%  206   247.2KB/s   00:00    
mongodb.repo                                                                  100%  206   222.3KB/s   00:00    
mongodb.repo                                                                  100%  206   118.7KB/s   00:00    
mongodb.repo                                                                  100%  206   164.0KB/s   00:00    
mongodb.repo                                                                  100%  206   145.2KB/s   00:00    
mongodb.repo                                                                  100%  206   119.9KB/s   00:00    
mongodb.repo                                                                  100%  206   219.2KB/s   00:00    
mongodb.repo                                                                  100%  206   302.1KB/s   00:00    
mongodb.repo                                                                  100%  206   289.3KB/s   00:00    
[root@node01 ~]# 

  Install the mongodb-org package on every node

for i in {01..10} ; do ssh node$i ' yum -y install mongodb-org '; done

  On the config server and shard nodes, create the data and log directories and change their owner and group to mongod

[root@node01 ~]# for i in {02..10} ; do ssh node$i 'mkdir -p /mongodb/{data,log} && chown -R mongod.mongod /mongodb/ && ls -ld /mongodb'; done
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:47 /mongodb
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:45 /mongodb
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:45 /mongodb
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:45 /mongodb
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:45 /mongodb
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:45 /mongodb
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:45 /mongodb
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:45 /mongodb
drwxr-xr-x 4 mongod mongod 29 Nov 11 22:45 /mongodb
[root@node01 ~]# 

  Configure the shard1 replication set

[root@node05 ~]# cat /etc/mongod.conf 
systemLog:
  destination: file
  logAppend: true
  path: /mongodb/log/mongod.log

storage:
  dbPath: /mongodb/data/
  journal:
    enabled: true

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  bindIp: 0.0.0.0

sharding:
  clusterRole: shardsvr

replication:
  replSetName: shard1_replset
[root@node05 ~]# scp /etc/mongod.conf node06:/etc/
mongod.conf                                                                   100%  360   394.5KB/s   00:00    
[root@node05 ~]# scp /etc/mongod.conf node07:/etc/
mongod.conf                                                                   100%  360   351.7KB/s   00:00    
[root@node05 ~]#

  Configure the shard2 replication set

[root@node08 ~]# cat /etc/mongod.conf
systemLog:
  destination: file
  logAppend: true
  path: /mongodb/log/mongod.log

storage:
  dbPath: /mongodb/data/
  journal:
    enabled: true

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  bindIp: 0.0.0.0

sharding:
  clusterRole: shardsvr

replication:
  replSetName: shard2_replset
[root@node08 ~]# scp /etc/mongod.conf node09:/etc/
mongod.conf                                                                   100%  360   330.9KB/s   00:00    
[root@node08 ~]# scp /etc/mongod.conf node10:/etc/
mongod.conf                                                                   100%  360   385.9KB/s   00:00    
[root@node08 ~]# 

  Start the shard1 and shard2 replication sets

[root@node05 ~]# systemctl start mongod.service 
[root@node05 ~]# ss -tnl
State      Recv-Q Send-Q           Local Address:Port                          Peer Address:Port              
LISTEN     0      128                          *:22                                       *:*                  
LISTEN     0      100                  127.0.0.1:25                                       *:*                  
LISTEN     0      128                          *:27018                                    *:*                  
LISTEN     0      128                         :::22                                      :::*                  
LISTEN     0      100                        ::1:25                                      :::*                  
[root@node05 ~]#for i in {06..10} ; do ssh node$i 'systemctl start mongod.service && ss -tnl';done
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     0      128          *:22                       *:*                  
LISTEN     0      100    127.0.0.1:25                       *:*                  
LISTEN     0      128          *:27018                    *:*                  
LISTEN     0      128         :::22                      :::*                  
LISTEN     0      100        ::1:25                      :::*                  
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     0      128          *:22                       *:*                  
LISTEN     0      100    127.0.0.1:25                       *:*                  
LISTEN     0      128          *:27018                    *:*                  
LISTEN     0      128         :::22                      :::*                  
LISTEN     0      100        ::1:25                      :::*                  
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     0      128          *:22                       *:*                  
LISTEN     0      100    127.0.0.1:25                       *:*                  
LISTEN     0      128          *:27018                    *:*                  
LISTEN     0      128         :::22                      :::*                  
LISTEN     0      100        ::1:25                      :::*                  
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     0      128          *:22                       *:*                  
LISTEN     0      100    127.0.0.1:25                       *:*                  
LISTEN     0      128          *:27018                    *:*                  
LISTEN     0      128         :::22                      :::*                  
LISTEN     0      100        ::1:25                      :::*                  
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     0      128          *:22                       *:*                  
LISTEN     0      100    127.0.0.1:25                       *:*                  
LISTEN     0      128          *:27018                    *:*                  
LISTEN     0      128         :::22                      :::*                  
LISTEN     0      100        ::1:25                      :::*                  
[root@node05 ~]# 

  Note: if no port is specified, a shard (clusterRole: shardsvr) listens on 27018 by default, so after starting the shard nodes just make sure port 27018 is listening.

  Connect to MongoDB on node05 and initialize the shard1_replset replica set

> rs.initiate(
...   {
...     _id : "shard1_replset",
...     members: [
...       { _id : 0, host : "node05:27018" },
...       { _id : 1, host : "node06:27018" },
...       { _id : 2, host : "node07:27018" }
...     ]
...   }
... )
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1605107401, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1605107401, 1)
}
shard1_replset:SECONDARY>

  Connect to MongoDB on node08 and initialize the shard2_replset replica set

> rs.initiate(
...   {
...     _id : "shard2_replset",
...     members: [
...       { _id : 0, host : "node08:27018" },
...       { _id : 1, host : "node09:27018" },
...       { _id : 2, host : "node10:27018" }
...     ]
...   }
... )
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1605107644, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1605107644, 1)
}
shard2_replset:OTHER> 

  Configure the config server replication set

[root@node02 ~]# cat /etc/mongod.conf
systemLog:
  destination: file
  logAppend: true
  path: /mongodb/log/mongod.log

storage:
  dbPath: /mongodb/data/
  journal:
    enabled: true

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  bindIp: 0.0.0.0

sharding:
  clusterRole: configsvr

replication:
  replSetName: cfg_replset
[root@node02 ~]# scp /etc/mongod.conf node03:/etc/mongod.conf 
mongod.conf                                                                   100%  358   398.9KB/s   00:00    
[root@node02 ~]# scp /etc/mongod.conf node04:/etc/mongod.conf  
mongod.conf                                                                   100%  358   270.7KB/s   00:00    
[root@node02 ~]# 

  Start the config servers

[root@node02 ~]# systemctl start mongod.service 
[root@node02 ~]# ss -tnl
State      Recv-Q Send-Q           Local Address:Port                          Peer Address:Port              
LISTEN     0      128                          *:27019                                    *:*                  
LISTEN     0      128                          *:22                                       *:*                  
LISTEN     0      100                  127.0.0.1:25                                       *:*                  
LISTEN     0      128                         :::22                                      :::*                  
LISTEN     0      100                        ::1:25                                      :::*                  
[root@node02 ~]# ssh node03 'systemctl start mongod.service && ss -tnl'  
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     0      128          *:27019                    *:*                  
LISTEN     0      128          *:22                       *:*                  
LISTEN     0      100    127.0.0.1:25                       *:*                  
LISTEN     0      128         :::22                      :::*                  
LISTEN     0      100        ::1:25                      :::*                  
[root@node02 ~]# ssh node04 'systemctl start mongod.service && ss -tnl' 
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port              
LISTEN     0      128          *:27019                    *:*                  
LISTEN     0      128          *:22                       *:*                  
LISTEN     0      100    127.0.0.1:25                       *:*                  
LISTEN     0      128         :::22                      :::*                  
LISTEN     0      100        ::1:25                      :::*                  
[root@node02 ~]# 

  Note: without an explicit port, a config server (clusterRole: configsvr) listens on 27019 by default; after starting, make sure that port is listening.

  Connect to MongoDB on node02 and initialize the cfg_replset replica set

> rs.initiate(
...   {
...     _id: "cfg_replset",
...     configsvr: true,
...     members: [
...       { _id : 0, host : "node02:27019" },
...       { _id : 1, host : "node03:27019" },
...       { _id : 2, host : "node04:27019" }
...     ]
...   }
... )
{
        "ok" : 1,
        "$gleStats" : {
                "lastOpTime" : Timestamp(1605108177, 1),
                "electionId" : ObjectId("000000000000000000000000")
        },
        "lastCommittedOpTime" : Timestamp(0, 0),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1605108177, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1605108177, 1)
}
cfg_replset:SECONDARY> 

  Configure the router

[root@node01 ~]# cat /etc/mongos.conf
systemLog:
   destination: file
   path: /var/log/mongodb/mongos.log
   logAppend: true

processManagement:
   fork: true

net:
   bindIp: 0.0.0.0
sharding:
  configDB: "cfg_replset/node02:27019,node03:27019,node04:27019"
[root@node01 ~]# 

  Note: configDB must take the form replica-set-name/member-address:port; at least one member must be listed.

  Start the router

[root@node01 ~]# mongos -f /etc/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1510
child process started successfully, parent exiting
[root@node01 ~]# ss -tnl
State      Recv-Q Send-Q           Local Address:Port                          Peer Address:Port              
LISTEN     0      128                          *:22                                       *:*                  
LISTEN     0      100                  127.0.0.1:25                                       *:*                  
LISTEN     0      128                          *:27017                                    *:*                  
LISTEN     0      128                         :::22                                      :::*                  
LISTEN     0      100                        ::1:25                                      :::*                  
[root@node01 ~]#

  Connect to mongos and add the shard1 and shard2 replication sets

mongos> sh.addShard("shard1_replset/node05:27018,node06:27018,node07:27018")
{
        "shardAdded" : "shard1_replset",
        "ok" : 1,
        "operationTime" : Timestamp(1605109085, 3),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1605109086, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.addShard("shard2_replset/node08:27018,node09:27018,node10:27018")
{
        "shardAdded" : "shard2_replset",
        "ok" : 1,
        "operationTime" : Timestamp(1605109118, 2),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1605109118, 3),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos>

  Note: a shard replica set is likewise added in the replica-set-name/members format.

  The sharded cluster is now configured

  Check the sharding cluster status

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5fac01dd8d6fa3fe899662c8")
  }
  shards:
        {  "_id" : "shard1_replset",  "host" : "shard1_replset/node05:27018,node06:27018,node07:27018",  "state" : 1 }
        {  "_id" : "shard2_replset",  "host" : "shard2_replset/node08:27018,node09:27018,node10:27018",  "state" : 1 }
  active mongoses:
        "4.4.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  yes
        Collections with active migrations: 
                config.system.sessions started at Wed Nov 11 2020 23:43:14 GMT+0800 (CST)
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                45 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1_replset  978
                                shard2_replset  46
                        too many chunks to print, use verbose if you want to force print
mongos> 

  Note: the cluster now has two shard replica sets, shard1_replset and shard2_replset, plus a config server replica set. For scripting, the sketch below shows a more compact way to list the shards.
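
  The shard list can also be fetched with the real listShards admin command, handy where the full sh.status() report is too verbose (output omitted):

mongos> db.adminCommand({ listShards: 1 })  // returns the shards array from the config metadata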

  Enable sharding on the testdb database

mongos> sh.enableSharding("testdb")
{
        "ok" : 1,
        "operationTime" : Timestamp(1605109993, 9),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1605109993, 9),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5fac01dd8d6fa3fe899662c8")
  }
  shards:
        {  "_id" : "shard1_replset",  "host" : "shard1_replset/node05:27018,node06:27018,node07:27018",  "state" : 1 }
        {  "_id" : "shard2_replset",  "host" : "shard2_replset/node08:27018,node09:27018,node10:27018",  "state" : 1 }
  active mongoses:
        "4.4.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                214 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1_replset  810
                                shard2_replset  214
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "testdb",  "primary" : "shard2_replset",  "partitioned" : true,  "version" : {  "uuid" : UUID("454aad2e-b397-4c88-b5c4-c3b21d37e480"),  "lastMod" : 1 } }
mongos> 

  Note: when sharding is enabled on a database, the database is assigned a primary shard. The primary shard stores the database's unsharded collections, while sharded collections are distributed across the shards; the sketch below shows how the primary shard can be moved if that is ever needed.
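
  Should the primary shard need to change (say, to relocate the unsharded collections), MongoDB provides the real movePrimary admin command; a minimal sketch using this cluster's shard names, run via mongos:

// moves testdb's unsharded collections from shard2_replset to shard1_replset
mongos> db.adminCommand({ movePrimary: "testdb", to: "shard1_replset" })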

  Enable sharding on the peoples collection in the testdb database, specifying range-based sharding on the age field

mongos> sh.shardCollection("testdb.peoples",{"age":1})
{
        "collectionsharded" : "testdb.peoples",
        "collectionUUID" : UUID("ec095411-240d-4484-b45d-b541c33c3975"),
        "ok" : 1,
        "operationTime" : Timestamp(1605110694, 11),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1605110694, 11),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5fac01dd8d6fa3fe899662c8")
  }
  shards:
        {  "_id" : "shard1_replset",  "host" : "shard1_replset/node05:27018,node06:27018,node07:27018",  "state" : 1 }
        {  "_id" : "shard2_replset",  "host" : "shard2_replset/node08:27018,node09:27018,node10:27018",  "state" : 1 }
  active mongoses:
        "4.4.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                408 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1_replset  616
                                shard2_replset  408
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "testdb",  "primary" : "shard2_replset",  "partitioned" : true,  "version" : {  "uuid" : UUID("454aad2e-b397-4c88-b5c4-c3b21d37e480"),  "lastMod" : 1 } }
                testdb.peoples
                        shard key: { "age" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2_replset  1
                        { "age" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 } } on : shard2_replset Timestamp(1, 0) 
mongos> 

  Note: if the collection already exists with data, we must first create an index on the shard key and only then call sh.shardCollection() to enable sharding on it. A range-based shard key may span multiple fields, as the sketch below shows.
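
  As an example of that order of operations, sharding an already-populated collection on a compound range key might look like this; the records collection and its fields are hypothetical:

// for a non-empty collection, the shard-key index must exist before sh.shardCollection()
mongos> use testdb
mongos> db.records.createIndex({ province: 1, age: 1 })
mongos> sh.shardCollection("testdb.records", { province: 1, age: 1 })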

  Hash-based sharding

mongos> sh.shardCollection("testdb.peoples1",{"name":"hashed"})
{
        "collectionsharded" : "testdb.peoples1",
        "collectionUUID" : UUID("f6213da1-7c7d-4d5e-8fb1-fc554efb9df2"),
        "ok" : 1,
        "operationTime" : Timestamp(1605111014, 2),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1605111014, 2),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5fac01dd8d6fa3fe899662c8")
  }
  shards:
        {  "_id" : "shard1_replset",  "host" : "shard1_replset/node05:27018,node06:27018,node07:27018",  "state" : 1 }
        {  "_id" : "shard2_replset",  "host" : "shard2_replset/node08:27018,node09:27018,node10:27018",  "state" : 1 }
  active mongoses:
        "4.4.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  yes
        Collections with active migrations: 
                config.system.sessions started at Thu Nov 12 2020 00:10:16 GMT+0800 (CST)
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                480 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1_replset  543
                                shard2_replset  481
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "testdb",  "primary" : "shard2_replset",  "partitioned" : true,  "version" : {  "uuid" : UUID("454aad2e-b397-4c88-b5c4-c3b21d37e480"),  "lastMod" : 1 } }
                testdb.peoples
                        shard key: { "age" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2_replset  1
                        { "age" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 } } on : shard2_replset Timestamp(1, 0) 
                testdb.peoples1
                        shard key: { "name" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1_replset  2
                                shard2_replset  2
                        { "name" : { "$minKey" : 1 } } -->> { "name" : NumberLong("-4611686018427387902") } on : shard1_replset Timestamp(1, 0) 
                        { "name" : NumberLong("-4611686018427387902") } -->> { "name" : NumberLong(0) } on : shard1_replset Timestamp(1, 1) 
                        { "name" : NumberLong(0) } -->> { "name" : NumberLong("4611686018427387902") } on : shard2_replset Timestamp(1, 2) 
                        { "name" : NumberLong("4611686018427387902") } -->> { "name" : { "$maxKey" : 1 } } on : shard2_replset Timestamp(1, 3) 
mongos> 

  Note: a hashed shard key hashes a single field, so you cannot hash multiple fields (MongoDB 4.4 does add compound shard keys containing one hashed field). From the status above, testdb.peoples landed entirely on shard2, while testdb.peoples1 has part of its chunks on shard1 and part on shard2; so however many documents we insert into peoples, they all go to shard2, whereas documents inserted into peoples1 are written to both shard1 and shard2.

  Verification: insert documents into the peoples1 collection and see whether the data is distributed across the shards.

  Insert data via mongos

mongos> use testdb
switched to db testdb
mongos> for (i=1;i<=10000;i++) db.peoples1.insert({name:"people"+i,age:(i%120),classes:(i%20)})
WriteResult({ "nInserted" : 1 })
mongos> 

  Check the data on shard1

shard1_replset:PRIMARY> show dbs
admin   0.000GB
config  0.001GB
local   0.001GB
testdb  0.000GB
shard1_replset:PRIMARY> use testdb
switched to db testdb
shard1_replset:PRIMARY> show tables
peoples1
shard1_replset:PRIMARY> db.peoples1.find().count()
4966
shard1_replset:PRIMARY> 

  Note: shard1 holds 4,966 documents of the peoples1 collection.

  Check the data on shard2

shard2_replset:PRIMARY> show dbs
admin   0.000GB
config  0.001GB
local   0.011GB
testdb  0.011GB
shard2_replset:PRIMARY> use testdb
switched to db testdb
shard2_replset:PRIMARY> show tables
peoples
peoples1
shard2_replset:PRIMARY> db.peoples1.find().count()
5034
shard2_replset:PRIMARY> 

  Note: shard2 has both the peoples and peoples1 collections, and its peoples1 holds 5,034 documents; shard1 and shard2 together hold exactly the 10,000 documents we just inserted. The sketch below shows the same breakdown straight from mongos.
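
  Instead of logging into each shard, the same split can be read from mongos with the real helper getShardDistribution(); a minimal sketch (output omitted):

mongos> use testdb
mongos> db.peoples1.getShardDistribution()  // per-shard document counts, data size, and chunk counts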

  OK, that completes building and testing the MongoDB sharded cluster.
