Docker Swarm Core Concepts and Detailed Usage

Posted by 小陈运维 on 2024-11-19


Introduction to Docker Swarm

Docker Swarm is Docker's native cluster management tool. Its main purpose is to pool multiple Docker hosts into a single virtual Docker host, providing clustering and scheduling for Docker containers. With Docker Swarm you can easily manage multiple Docker hosts and schedule container deployments across them. Its core features include:

  • Cluster management: Docker Swarm lets you manage multiple Docker hosts as a single virtual host, so containers can run on several different servers that are administered as one.
  • Fault tolerance and high availability: Swarm keeps services running even when some of the cluster's nodes fail.
  • Load balancing: Swarm automatically distributes containers across the cluster's nodes, and can scale the number of service instances up or down as needed.
  • Declarative service model: Swarm uses the Docker Compose file format, letting you define an application's multiple services declaratively.
  • Service discovery: every service in a Swarm cluster is discoverable by its service name, which simplifies communication between services.
  • Security: communication within a Swarm cluster is encrypted, providing a secure channel between nodes.
  • Ease of use: as part of Docker itself, Swarm works much like plain Docker, so users familiar with Docker can pick it up quickly.

Overall, Docker Swarm is a lightweight, easy-to-use container orchestration tool, well suited to teams that want Docker's capabilities together with simple cluster management and service orchestration. It is not as powerful or as complex as Kubernetes, but for small-to-medium projects, or for users wary of Kubernetes' complexity, it is a solid choice.
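
The declarative service model is easiest to see with a stack file. Below is a minimal sketch; the file name `web-stack.yml`, stack name `demo`, and published port are illustrative choices, not from this article, and the commands assume a running swarm manager:

```shell
# Write a minimal stack file in Compose format (names here are illustrative)
cat > web-stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 2
    ports:
      - "8000:80"
EOF

# On a manager node, ask Swarm to converge the cluster to this declared state
docker stack deploy -c web-stack.yml demo

# Inspect the stack's services, then remove the whole stack
docker stack services demo
docker stack rm demo
```

The point of the declarative style is that you describe the desired end state and Swarm reconciles the cluster toward it, rather than you issuing imperative steps.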

Node

A Swarm cluster consists of Manager nodes (the management role: administering members and delegating tasks) and Worker nodes (the worker role: running Swarm services). A node is one instance in the Swarm cluster, i.e. one Docker host. You can run one or more nodes on a single physical machine or cloud server, but in production the typical deployment spreads Docker nodes across multiple physical machines or cloud hosts. A node's name defaults to the machine's hostname.

  • Manager: responsible for all cluster-wide work, including cluster configuration, service management, and container orchestration; the managers elect a leader to direct orchestration tasks.
  • Worker: worker nodes receive tasks (Tasks) dispatched by the manager nodes and run them for the corresponding services (Services).
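
Roles are not fixed at join time; a worker can later be promoted to a manager and demoted back. A quick sketch, using the hostnames from this article's cluster:

```shell
# On an existing manager: promote a worker to manager
docker node promote Node1

# Demote it back to a plain worker
docker node demote Node1

# The MANAGER STATUS column shows the current roles
docker node ls
```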

Service

A service (Service) is an abstraction: the definition of the tasks to run on the manager or worker nodes. It is the central structure of the cluster system and the primary way users interact with the cluster.

Task

A task (Task) comprises a Docker container and the command that runs inside it. A task is the cluster's smallest scheduling unit, with a one-to-one relationship between tasks and containers. The manager assigns tasks to worker nodes according to the replica count set in the service's scale. Once a task has been assigned to a node, it cannot move to another node; it can only run, or fail, on the node it was assigned to.

Workflow

Swarm Manager:

  1. API: accepts the command and creates a service object (create object) ↓
  2. orchestrator: creates and orchestrates the tasks for the service object (service orchestration) ↓
  3. allocator: allocates IP addresses to the tasks (allocate IPs) ↓
  4. dispatcher: dispatches the tasks to nodes (dispatch tasks) ↓
  5. scheduler: assigns a worker node to run each task (run tasks) ↓

Worker Node:

  1. worker: connects to the dispatcher and checks for assigned tasks (check tasks) ↑
  2. executor: executes the tasks assigned to the worker node (execute tasks)

Docker Swarm Cluster Deployment

Machine environment

IP: 192.168.1.51  Hostname: Manager  Role: Manager

IP: 192.168.1.52  Hostname: Node1  Role: Node

IP: 192.168.1.53  Hostname: Node2  Role: Node

Install the Base Environment

# Set the hostnames (run each command on the corresponding machine)

[root@localhost ~]# hostnamectl set-hostname Manager
[root@localhost ~]# hostnamectl set-hostname Node1
[root@localhost ~]# hostnamectl set-hostname Node2
 
 
# Configure the firewall
# Stop the firewall on all three machines. If you keep the firewall enabled, you must open 2377/tcp (cluster management), 7946/tcp and 7946/udp (node-to-node communication), and 4789/udp (overlay network) on every node.

[root@localhost ~]# systemctl disable firewalld.service
[root@localhost ~]# systemctl stop firewalld.service
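
If you would rather keep firewalld running than disable it, the required ports can be opened instead; a sketch with firewall-cmd, to be run on every node:

```shell
# Swarm cluster management traffic
firewall-cmd --permanent --add-port=2377/tcp
# Node-to-node communication uses both TCP and UDP
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
# Overlay (VXLAN) network traffic
firewall-cmd --permanent --add-port=4789/udp
# Apply the permanent rules
firewall-cmd --reload
```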
 
# Install Docker
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

# Enable and start Docker
systemctl enable docker
systemctl start docker

Configure a Registry Mirror

# Substitute your own Aliyun registry mirror URL
[root@chenby ~]# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["z.oiox.cn:18082"],
  "registry-mirrors": [
    "https://xxxxx.mirror.aliyuncs.com"
  ],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
    },
  "data-root": "/var/lib/docker"
}
EOF

# Restart Docker and check its status
[root@chenby ~]# systemctl restart docker && systemctl status docker -l

Create the Swarm and Add Nodes

# Initialize the Swarm cluster

[root@Manager ~]# docker swarm init --advertise-addr 192.168.1.51
Swarm initialized: current node (nuy82gjzc2c0wip9agbava3z9) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0mbfykukl6fwl1mziipzqbakqmoo4iz1ti135uuyoj7zfgxgy2-4qbs0bm04iz0l52nm3bljvuoy 192.168.1.51:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@Manager ~]#

# Run the join command on the remaining Node machines
[root@Node1 ~]# docker swarm join --token SWMTKN-1-0mbfykukl6fwl1mziipzqbakqmoo4iz1ti135uuyoj7zfgxgy2-4qbs0bm04iz0l52nm3bljvuoy 192.168.1.51:2377
This node joined a swarm as a worker.
[root@Node1 ~]# 

[root@Node2 ~]# docker swarm join --token SWMTKN-1-0mbfykukl6fwl1mziipzqbakqmoo4iz1ti135uuyoj7zfgxgy2-4qbs0bm04iz0l52nm3bljvuoy 192.168.1.51:2377
This node joined a swarm as a worker.
[root@Node2 ~]#

View Cluster Information

[root@Manager ~]# docker info
Client: Docker Engine - Community
 Version:    27.3.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.17.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.7
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 27.3.1
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: active
  NodeID: nuy82gjzc2c0wip9agbava3z9
  Is Manager: true
  ClusterID: hiki507c9yp8p4lrb8icp0rcs
  Managers: 1
  Nodes: 3
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.1.51
  Manager Addresses:
   192.168.1.51:2377
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 57f17b0a6295a39009d861b89e3b3b87b005ca27
 runc version: v1.1.14-0-g2c9f560
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.14.0-503.el9.x86_64
 Operating System: CentOS Stream 9
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 1.921GiB
 Name: Manager
 ID: fb7ffc06-ccc6-4faf-bf8a-4e05f13c14d6
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[root@Manager ~]# 
[root@Manager ~]# 
[root@Manager ~]# 
[root@Manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 *   Manager    Ready     Active         Leader           27.3.1
6vdnp73unqh3qe096vv3iitwm     Node1      Ready     Active                          27.3.1
9txw7h8w3wfkjj85rulu7jnen     Node2      Ready     Active                          27.3.1
[root@Manager ~]# 
[root@Manager ~]#

Taking Nodes Offline and Online

Changing a node's availability state

A swarm node's availability can be active, pause (existing tasks keep running but no new ones are assigned), or drain.

In the active state, the node accepts task assignments from the manager nodes.

In the drain state, the node's tasks are stopped and rescheduled elsewhere, and it no longer accepts task assignments from the manager nodes (effectively taking the node offline).
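
The reverse operation, returning a drained node to the scheduling pool, is a sketch like this:

```shell
# Put the node back into the active state so it accepts tasks again
docker node update --availability active Node1

# Verify in the AVAILABILITY column
docker node ls
```

Note that tasks rescheduled off a drained node do not automatically move back when it becomes active again; the node only receives tasks when new scheduling decisions are made.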

# Set a node to Drain

[root@Manager ~]# docker node update --availability drain Node1
Node1
[root@Manager ~]# 
[root@Manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 *   Manager    Ready     Active         Leader           27.3.1
6vdnp73unqh3qe096vv3iitwm     Node1      Ready     Drain                           27.3.1
9txw7h8w3wfkjj85rulu7jnen     Node2      Ready     Active                          27.3.1
[root@Manager ~]# 
[root@Manager ~]# 


# Remove a node
[root@Manager ~]# docker node rm --force Node1
Node1
[root@Manager ~]# 
[root@Manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 *   Manager    Ready     Active         Leader           27.3.1
9txw7h8w3wfkjj85rulu7jnen     Node2      Ready     Active                          27.3.1
[root@Manager ~]# 



# Rejoin the node
[root@Node1 ~]# docker swarm leave -f
Node left the swarm.
[root@Node1 ~]# 
[root@Node1 ~]# 
[root@Node1 ~]# docker swarm join --token SWMTKN-1-0mbfykukl6fwl1mziipzqbakqmoo4iz1ti135uuyoj7zfgxgy2-4qbs0bm04iz0l52nm3bljvuoy 192.168.1.51:2377
This node joined a swarm as a worker.
[root@Node1 ~]# 


# Check the current state
[root@Manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 *   Manager    Ready     Active         Leader           27.3.1
uec5t9039ef02emg963fean4u     Node1      Ready     Active                          27.3.1
9txw7h8w3wfkjj85rulu7jnen     Node2      Ready     Active                          27.3.1
[root@Manager ~]#

Deploying Services in the Swarm

# Create an overlay network
[root@Manager ~]# docker network create -d overlay nginx_net
resh5jevjdzfawrbc0tbxpns0
[root@Manager ~]# 
[root@Manager ~]# docker network ls | grep nginx_net
resh5jevjdzf   nginx_net         overlay   swarm
[root@Manager ~]# 
 
# Deploy a service
[root@Manager ~]# docker service create --replicas 1 --network nginx_net --name my_nginx -p 80:80 nginx
ry7y3p039614jmvqytshxvnb3
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service ry7y3p039614jmvqytshxvnb3 converged 
[root@Manager ~]# 

# Use docker service ls to list running services
[root@Manager ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
ry7y3p039614   my_nginx   replicated   1/1        nginx:latest   *:80->80/tcp
[root@Manager ~]# 


# Query a service's details in the Swarm
# --pretty formats the output for readability; omit it for more detailed output:
[root@Manager ~]# docker service inspect --pretty my_nginx
ID:        ry7y3p039614jmvqytshxvnb3
Name:        my_nginx
Service Mode:    Replicated
 Replicas:    1
Placement:
UpdateConfig:
 Parallelism:    1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:    1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:        nginx:latest@sha256:bc5eac5eafc581aeda3008b4b1f07ebba230de2f27d47767129a6a905c84f470
 Init:        false
Resources:
Networks: nginx_net 
Endpoint Mode:    vip
Ports:
 PublishedPort = 80
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress 

[root@Manager ~]# 
 
# Check the task status
[root@Manager ~]# docker service ps my_nginx
ID             NAME         IMAGE          NODE      DESIRED STATE   CURRENT STATE           ERROR     PORTS
x6rn5w1hv2ip   my_nginx.1   nginx:latest   Node2     Running         Running 2 minutes ago           
[root@Manager ~]# 

# Test access
[root@Manager ~]# curl -I 192.168.1.53
HTTP/1.1 200 OK
Server: nginx/1.27.2
Date: Tue, 19 Nov 2024 11:00:07 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 02 Oct 2024 15:13:19 GMT
Connection: keep-alive
ETag: "66fd630f-267"
Accept-Ranges: bytes

[root@Manager ~]#

Adjusting the Replica Count

# Scale up the replicas
[root@Manager ~]# docker service scale my_nginx=4
my_nginx scaled to 4
overall progress: 4 out of 4 tasks 
1/4: running   [==================================================>] 
2/4: running   [==================================================>] 
3/4: running   [==================================================>] 
4/4: running   [==================================================>] 
verify: Service my_nginx converged 
[root@Manager ~]#
 
# Verify everything is running
[root@Manager ~]#  docker service ps my_nginx
ID             NAME         IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
x6rn5w1hv2ip   my_nginx.1   nginx:latest   Node2     Running         Running 12 minutes ago           
mi0wb3e0eixi   my_nginx.2   nginx:latest   Node1     Running         Running 8 minutes ago            
grm4mtucb2io   my_nginx.3   nginx:latest   Manager   Running         Running 8 minutes ago            
u8gdmihpkqty   my_nginx.4   nginx:latest   Node1     Running         Running 8 minutes ago            
[root@Manager ~]#

Simulating a Node Failure

# Simulate a node going down
[root@Node2 ~]# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
[root@Node2 ~]# 

# Check whether the nodes are healthy
[root@Manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
nuy82gjzc2c0wip9agbava3z9 *   Manager    Ready     Active         Leader           27.3.1
uec5t9039ef02emg963fean4u     Node1      Ready     Active                          27.3.1
9txw7h8w3wfkjj85rulu7jnen     Node2      Down      Active                          27.3.1
[root@Manager ~]# 

# Check whether the containers are healthy
# After a node fails, its containers are started on other nodes
[root@Manager ~]#  docker service ps my_nginx
ID             NAME             IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
6yf6qs3rv6gx   my_nginx.1       nginx:latest   Manager   Running         Running 18 seconds ago           
x6rn5w1hv2ip    \_ my_nginx.1   nginx:latest   Node2     Shutdown        Running 14 minutes ago           
mi0wb3e0eixi   my_nginx.2       nginx:latest   Node1     Running         Running 9 minutes ago            
grm4mtucb2io   my_nginx.3       nginx:latest   Manager   Running         Running 9 minutes ago            
u8gdmihpkqty   my_nginx.4       nginx:latest   Node1     Running         Running 9 minutes ago            
[root@Manager ~]#

Scaling Down Replicas

# Dynamically scale the service down (scale)
[root@Manager ~]# docker service scale my_nginx=1
my_nginx scaled to 1
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service my_nginx converged 
[root@Manager ~]# 
[root@Manager ~]# 
[root@Manager ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
ry7y3p039614   my_nginx   replicated   1/1        nginx:latest   *:80->80/tcp
[root@Manager ~]# 
[root@Manager ~]# 
[root@Manager ~]# docker service ps my_nginx
ID             NAME             IMAGE          NODE      DESIRED STATE   CURRENT STATE                 ERROR     PORTS
6yf6qs3rv6gx   my_nginx.1       nginx:latest   Manager   Running         Running 4 minutes ago                 
x6rn5w1hv2ip    \_ my_nginx.1   nginx:latest   Node2     Shutdown        Shutdown about a minute ago           
[root@Manager ~]#

Updating Service Parameters and Images

# Parameters can be changed with docker service update
[root@Manager ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                   CREATED         STATUS         PORTS     NAMES
64e2f72522a2   nginx:latest   "/docker-entrypoint.…"   6 minutes ago   Up 6 minutes   80/tcp    my_nginx.1.6yf6qs3rv6gxbnrc032mhrwf1
[root@Manager ~]# docker service update --replicas 3 my_nginx
my_nginx
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service my_nginx converged 
[root@Manager ~]# 
[root@Manager ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
ry7y3p039614   my_nginx   replicated   3/3        nginx:latest   *:80->80/tcp
[root@Manager ~]# 
[root@Manager ~]#
[root@Manager ~]# docker service ps my_nginx
ID             NAME             IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
6yf6qs3rv6gx   my_nginx.1       nginx:latest   Manager   Running         Running 7 minutes ago            
x6rn5w1hv2ip    \_ my_nginx.1   nginx:latest   Node2     Shutdown        Shutdown 4 minutes ago           
pkc7bzqkpppz   my_nginx.2       nginx:latest   Node2     Running         Running 22 seconds ago           
jfok9cwixbi6   my_nginx.3       nginx:latest   Node1     Running         Running 23 seconds ago           
[root@Manager ~]# 
 
# Upgrade the image with docker service update --image
[root@Manager ~]# docker service update --image nginx:new my_nginx
[root@Manager ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
zs7fw4ereo5w        my_nginx            replicated          3/3                 nginx:new           *:80->80/tcp
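
If an image update goes wrong, the service can be reverted to its previous definition, and future updates can be made gradual; a sketch:

```shell
# Roll the service back to the spec it had before the last update
docker service update --rollback my_nginx

# Optionally make future updates rolling and gradual:
# update one task at a time, waiting 10s between tasks
docker service update --update-parallelism 1 --update-delay 10s my_nginx
```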

Deleting a Service

[root@Manager ~]# docker service rm my_nginx
my_nginx
[root@Manager ~]# 
[root@Manager ~]# docker service ps my_nginx
no such service: my_nginx
[root@Manager ~]#

Mounting Volumes

# Using volumes in Swarm (mounted directories, via --mount)
# Create a volume
[root@Manager ~]# docker volume create --name testvolume
testvolume
[root@Manager ~]# 

# List the created volume
[root@Manager ~]# docker volume ls
DRIVER    VOLUME NAME
local     testvolume
[root@Manager ~]# 

# Inspect the volume
[root@Manager ~]# docker volume inspect testvolume
[
    {
        "CreatedAt": "2024-11-19T19:23:42+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/testvolume/_data",
        "Name": "testvolume",
        "Options": null,
        "Scope": "local"
    }
]
[root@Manager ~]#

Create a Service That Mounts the Volume

# Create a new service and mount testvolume
[root@Manager ~]# docker service create --replicas 3 --mount type=volume,src=testvolume,dst=/usr/share/nginx/html --network nginx_net --name test_nginx -p 80:80 nginx
4ol5e2jxvs446q4mr9brs3cfk
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service 4ol5e2jxvs446q4mr9brs3cfk converged 
[root@Manager ~]# 
[root@Manager ~]# 


# View the created service
[root@Manager ~]# docker service ls
ID             NAME         MODE         REPLICAS   IMAGE          PORTS
4ol5e2jxvs44   test_nginx   replicated   3/3        nginx:latest   *:80->80/tcp
[root@Manager ~]# 
[root@Manager ~]# 
[root@Manager ~]# docker service ps test_nginx
ID             NAME           IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
jvaokj73sv0q   test_nginx.1   nginx:latest   Node2     Running         Running 35 seconds ago           
28kwulxo957w   test_nginx.2   nginx:latest   Manager   Running         Running 35 seconds ago           
odx5ejqph369   test_nginx.3   nginx:latest   Node1     Running         Running 35 seconds ago           
[root@Manager ~]#

Verify the Mount

# Check whether the mount succeeded
# Write content into the web page on each node
[root@Manager ~]# echo "192.168.1.51" > /var/lib/docker/volumes/testvolume/_data/index.html
[root@Manager ~]# 
[root@Node1 ~]# echo "192.168.1.52" > /var/lib/docker/volumes/testvolume/_data/index.html
[root@Node1 ~]# 
[root@Node2 ~]# echo "192.168.1.53" > /var/lib/docker/volumes/testvolume/_data/index.html 
[root@Node2 ~]# 

# Test that it works
# Access any node's IP; ingress load balancing round-robins requests across the nodes
[root@Manager ~]# curl 192.168.1.51
192.168.1.51
[root@Manager ~]# curl 192.168.1.51
192.168.1.53
[root@Manager ~]# curl 192.168.1.51
192.168.1.52
[root@Manager ~]#

Deploy the Official Visualizer Dashboard

# Install the official visualizer dashboard
[root@Manager ~]# docker run -it -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer
Unable to find image 'dockersamples/visualizer:latest' locally
latest: Pulling from dockersamples/visualizer
ddad3d7c1e96: Pull complete 
3a8370f05d5d: Pull complete 
71a8563b7fea: Pull complete 
119c7e14957d: Pull complete 
28bdf67d9c0d: Pull complete 
12571b9c0c9e: Pull complete 
e1bd03793962: Pull complete 
3ab99c5ebb8e: Pull complete 
94993ebc295c: Pull complete 
021a328e5f7b: Pull complete 
Digest: sha256:530c863672e7830d7560483df66beb4cbbcd375a9f3ec174ff5376616686a619
Status: Downloaded newer image for dockersamples/visualizer:latest
a6a71d4a6d59d8a1e321c70add627bb3c407ae2d4c1e5e9f5a1202bbaa4a24a9
[root@Manager ~]#
[root@Manager ~]# 
[root@Manager ~]# curl -I 192.168.1.51:8080
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 1920
ETag: W/"780-E5yvqIM13yhGsvY/rSKjKKqkVno"
Date: Tue, 19 Nov 2024 11:43:05 GMT
Connection: keep-alive
Keep-Alive: timeout=5
[root@Manager ~]#

Docker Swarm Container Networking

Overlay networks in swarm mode provide the following:

  • Multiple services can be attached to the same network.
  • Each swarm service can be given a virtual IP address (VIP) and a DNS name.
  • Containers on the same network can reach each other by service name.
  • Services can be configured to use DNS round robin instead of a VIP.
  • To use swarm overlay networks, the following ports must be open between swarm nodes before enabling swarm mode:

    TCP/UDP port 7946 – container network discovery

    UDP port 4789 – container overlay network traffic

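
The DNS round-robin option mentioned above is chosen per service via the endpoint mode; a sketch, reusing the nginx_net overlay network created earlier (the service name `web_dnsrr` is illustrative, and no -p flag is used because dnsrr services cannot publish ports through the ingress mesh):

```shell
# Create a service whose name resolves directly to task IPs (no VIP)
docker service create --replicas 3 --network nginx_net \
  --endpoint-mode dnsrr --name web_dnsrr nginx

# From any container on nginx_net, "web_dnsrr" now resolves
# round-robin to the individual task IPs
```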
# Create an encrypted overlay network
[root@Manager ~]# docker network create --driver overlay --opt encrypted --subnet 192.168.2.0/24 cby_net
j26skr271gjkzpbx91wu1okt9
[root@Manager ~]# 
   
Parameter notes:
--opt encrypted: swarm node management traffic is encrypted by default; for data traffic between containers on different nodes, this optional flag adds an extra layer of encryption on their VXLAN traffic.
--subnet: specifies the subnet the overlay network uses. If no subnet is given, the swarm manager automatically picks one and assigns it to the network.


[root@Manager ~]# 
[root@Manager ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
239c159fade4   bridge            bridge    local
j26skr271gjk   cby_net           overlay   swarm
ee7340b82a36   docker_gwbridge   bridge    local
82ce5e09d333   host              host      local
dkhebie7aja7   ingress           overlay   swarm
resh5jevjdzf   nginx_net         overlay   swarm
60d6545d6b8e   none              null      local
[root@Manager ~]# 

   
# Create a service that uses this network
[root@Manager ~]# docker service create --replicas 5 --network cby_net --name my-cby -p 8088:80 nginx
58j0x31f072f12njv8oz2ibwf
overall progress: 5 out of 5 tasks 
1/5: running   [==================================================>] 
2/5: running   [==================================================>] 
3/5: running   [==================================================>] 
4/5: running   [==================================================>] 
5/5: running   [==================================================>] 
verify: Service 58j0x31f072f12njv8oz2ibwf converged 
[root@Manager ~]# 



[root@Manager ~]# docker service ls | grep my-cby
58j0x31f072f   my-cby       replicated   5/5        nginx:latest   *:8088->80/tcp
[root@Manager ~]# 

   
On the manager node, use the following command to see which nodes have tasks in the running state:
[root@Manager ~]# docker service ps my-cby
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE                ERROR     PORTS
hrppcl25yba0   my-cby.1   nginx:latest   Node2     Running         Running about a minute ago           
xw55qx98dgby   my-cby.2   nginx:latest   Manager   Running         Running about a minute ago           
izx4jb8aen5w   my-cby.3   nginx:latest   Node1     Running         Running about a minute ago           
tdkm03dxjzv2   my-cby.4   nginx:latest   Manager   Running         Running about a minute ago           
h6lcj91v01cm   my-cby.5   nginx:latest   Node1     Running         Running about a minute ago           
[root@Manager ~]#

View Network Details

The details of cby_net can be queried on any node:
[root@Manager ~]# docker network inspect cby_net
[
    {
        "Name": "cby_net",
        "Id": "j26skr271gjkzpbx91wu1okt9",
        "Created": "2024-11-19T20:10:07.207940854+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.2.0/24",
                    "Gateway": "192.168.2.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2f101603351b388e2820dd576b2ab9490863b65ae04e8cb4f6a3bf2d0df2a590": {
                "Name": "my-cby.4.tdkm03dxjzv20acx6shajxhjg",
                "EndpointID": "b2a5885c2efe87370eb34b4da2103f979a8fa95fe9bff71f037eee569f1ffb0b",
                "MacAddress": "02:42:c0:a8:02:06",
                "IPv4Address": "192.168.2.6/24",
                "IPv6Address": ""
            },
            "8cbd44885c579fa9bc267bcb3eea11b8edcd696b0c75180da5c9237330afcba6": {
                "Name": "my-cby.2.xw55qx98dgbyrdi6jxt9kguvc",
                "EndpointID": "e2f04749c1b55c25b75ec677fd95cc0bd1941a58c69a9b3eed8754d2cfb6de32",
                "MacAddress": "02:42:c0:a8:02:04",
                "IPv4Address": "192.168.2.4/24",
                "IPv6Address": ""
            },
            "lb-cby_net": {
                "Name": "cby_net-endpoint",
                "EndpointID": "f93daf78ca41922a4be4c4b3dde01bb7a919d9008304a4a31950f09281ae30f9",
                "MacAddress": "02:42:c0:a8:02:0a",
                "IPv4Address": "192.168.2.10/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4098",
            "encrypted": ""
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "33a2908a261b",
                "IP": "192.168.1.53"
            },
            {
                "Name": "d7847005824e",
                "IP": "192.168.1.51"
            },
            {
                "Name": "7640904bfcc4",
                "IP": "192.168.1.52"
            }
        ]
    }
]
[root@Manager ~]# 
   


[root@Node1 ~]# docker network inspect cby_net
............................................
        "Containers": {
            "01f42aaeb9667b8d07683f5e2d60f643cc61aa9ceda0d0adb8bc642d1093bfc9": {
                "Name": "my-cby.3.izx4jb8aen5wbwkvy5b7tz1lz",
                "EndpointID": "46892992ac400bc1cfc40f62610ff3321ae079929170912d861556f8d1f4645f",
                "MacAddress": "02:42:c0:a8:02:05",
                "IPv4Address": "192.168.2.5/24",
                "IPv6Address": ""
            },
            "a214a33e86c95b3ea84aae6eee705b633ac5a31854425317d2a0d9693cee00ca": {
                "Name": "my-cby.5.h6lcj91v01cmhf03644nqarxq",
                "EndpointID": "a1ec3f28877fb82ba86c2b6312c592489bb59d354caa274a6b5d98aae3c4ee17",
                "MacAddress": "02:42:c0:a8:02:07",
                "IPv4Address": "192.168.2.7/24",
                "IPv6Address": ""
            },
            "lb-cby_net": {
                "Name": "cby_net-endpoint",
                "EndpointID": "05f1e89be3036367a729c1a50430bcd6181ed4bc7ec4e37bd025ba5591b6b3bf",
                "MacAddress": "02:42:c0:a8:02:09",
                "IPv4Address": "192.168.2.9/24",
                "IPv6Address": ""
            }
        },
............................................
  
[root@Node2 ~]# docker network inspect cby_net
............................................
        "Containers": {
            "6ac3a65fa5a2501a5ad6d4183895e4e6b13beaf6b8642c360e19e9bc0849f74c": {
                "Name": "my-cby.1.hrppcl25yba05o26q1my5abmc",
                "EndpointID": "a0e8246bc74b8e9b9f964b1efb51ad59d9b7dff219b7f10fc027970145503f34",
                "MacAddress": "02:42:c0:a8:02:03",
                "IPv4Address": "192.168.2.3/24",
                "IPv6Address": ""
            },
            "lb-cby_net": {
                "Name": "cby_net-endpoint",
                "EndpointID": "aa739df579790e8147877ce79220cf5387740f11a581344756502f9212314c24",
                "MacAddress": "02:42:c0:a8:02:08",
                "IPv4Address": "192.168.2.8/24",
                "IPv6Address": ""
            }
        },
.............................................
   
# The service's virtual IP addresses can be obtained by inspecting the service:
[root@Manager ~]# docker service inspect --format='{{json .Endpoint.VirtualIPs}}' my-cby
[{"NetworkID":"dkhebie7aja768y8agz4xdpwt","Addr":"10.0.0.27/24"},{"NetworkID":"j26skr271gjkzpbx91wu1okt9","Addr":"192.168.2.2/24"}]
[root@Manager ~]# 

Create a Test Container

[root@Manager ~]# docker service create --name my-by_net  --network cby_net busybox ping www.baidu.com

u7eana0p9xp9auw9p02d8z1wx
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service u7eana0p9xp9auw9p02d8z1wx converged 
[root@Manager ~]# 
[root@Manager ~]# 
[root@Manager ~]# 
[root@Manager ~]# docker service ps my-by_net
ID             NAME          IMAGE            NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
7l1fpecym4kc   my-by_net.1   busybox:latest   Node2     Running         Running 31 seconds ago           
[root@Manager ~]# 

Network Tests

# Test that the other containers' IPs are reachable from inside a container
[root@Node2 ~]# docker exec -ti 1b1a6f6c5a7b /bin/sh
/ # 
/ # ping 192.168.2.8
PING 192.168.2.8 (192.168.2.8): 56 data bytes
64 bytes from 192.168.2.8: seq=0 ttl=64 time=0.095 ms
64 bytes from 192.168.2.8: seq=1 ttl=64 time=0.073 ms
64 bytes from 192.168.2.8: seq=2 ttl=64 time=0.101 ms
^C
--- 192.168.2.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.089/0.101 ms
/ # ping 192.168.2.7
PING 192.168.2.7 (192.168.2.7): 56 data bytes
64 bytes from 192.168.2.7: seq=0 ttl=64 time=0.434 ms
64 bytes from 192.168.2.7: seq=1 ttl=64 time=0.430 ms
64 bytes from 192.168.2.7: seq=2 ttl=64 time=0.401 ms
64 bytes from 192.168.2.7: seq=3 ttl=64 time=0.386 ms
^C
--- 192.168.2.7 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.386/0.412/0.434 ms
/ # ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=64 time=0.081 ms
64 bytes from 192.168.2.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 192.168.2.2: seq=2 ttl=64 time=0.093 ms
64 bytes from 192.168.2.2: seq=3 ttl=64 time=0.073 ms
^C
--- 192.168.2.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.080/0.093 ms
/ # 


# Service discovery
# Look up the service's virtual IP address
/ # nslookup my-cby
Server:        127.0.0.11
Address:    127.0.0.11:53

Non-authoritative answer:

Non-authoritative answer:
Name:    my-cby
Address: 192.168.2.2

/ #
  
  
# Look up the IPs of all the tasks
/ # nslookup tasks.my-cby
Server:        127.0.0.11
Address:    127.0.0.11:53

Non-authoritative answer:

Non-authoritative answer:
Name:    tasks.my-cby
Address: 192.168.2.73
Name:    tasks.my-cby
Address: 192.168.2.74
Name:    tasks.my-cby
Address: 192.168.2.7
Name:    tasks.my-cby
Address: 192.168.2.5
Name:    tasks.my-cby
Address: 192.168.2.3

/ #

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, 51CTO, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, personal blog

Search "小陈运维" anywhere on the web.

Articles are published primarily on the WeChat public account.
