[Automation] Automated Ceph Cluster Deployment with kolla

Posted by 我是一個平民 on 2022-02-28

Where kolla-ceph comes from:

Part of the project's code is taken from kolla and kolla-ansible.

What kolla-ceph offers:

1. Image builds are straightforward; the container-based deployment makes creating and removing services easy

2. kolla-ceph operations are idempotent; running them repeatedly has no side effects

3. Deployment is driven as a workflow by kolla-ceph (built on Ansible)

4. Creating OSDs is simple: just write the appropriate label onto each disk

5. Upgrades are painless: build a new Ceph image, then run upgrade

6. The failure domain ("osd" or "host") and the matching replica count are set automatically from the number of OSD nodes
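
For context on item 6: in plain Ceph, pinning the failure domain and replica count by hand would look roughly like the sketch below (illustrative only; kolla-ceph performs the equivalent automatically, and the rule and pool names here are examples):

    # Create a replicated CRUSH rule whose failure domain is "host" (use "osd" on a single host)
    docker exec ceph_mon ceph osd crush rule create-replicated replicated_host default host
    # Point a pool at the rule and set the matching replica count
    docker exec ceph_mon ceph osd pool set rbd crush_rule replicated_host
    docker exec ceph_mon ceph osd pool set rbd size 3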

Project structure

Auto_Ceph
├── 00-hosts
├── README.md
├── action_plugins
├── action.yml
├── bin
├── build
├── config
├── group_vars
├── library
├── os.yml
├── requirements.txt 
├── roles
└── site.yml

Project link: kolla-ceph automated Ceph cluster deployment
OS: CentOS
Environment: 3 virtual machines (single-node and multi-node layouts both work); download the Auto_Ceph project into /root/

Ceph cluster node and network planning

vi /root/Auto_Ceph/00-hosts

# storage_interface=eth0  Ceph cluster management interface; must be configured
# cluster_interface=eth1  Ceph cluster replication interface; falls back to storage_interface if left empty

[all:vars]
storage_interface=eth0
cluster_interface=eth1

[mon]
172.20.163.244  # also serves as the deployment node
172.20.163.67 
172.20.163.238

[mgr]
172.20.163.244
172.20.163.67 
172.20.163.238 

[osd]
172.20.163.244
172.20.163.67 
172.20.163.238 

[rgw]
172.20.163.244
172.20.163.67 
172.20.163.238 

[mds]
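
Once Ansible and passwordless SSH are set up (both covered below), connectivity to every host in this inventory can be sanity-checked with:

    ansible -i /root/Auto_Ceph/00-hosts all -m ping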

Download and build the Ceph images <on the deployment node>

Deployment node: any node in the mon group will do (here, 172.20.163.244)

  1. Online deployment: build the Ceph images, install Ansible and kolla-ceph <on the deployment node>
1. type wget || yum install wget -y

2. wget https://bootstrap.pypa.io/pip/2.7/get-pip.py --no-check-certificate

3. python get-pip.py

4. Install the dependencies needed to build the Ceph images
   pip install -r /root/Auto_Ceph/requirements.txt --ignore-installed

   If the error below appears, run pip install GitPython, then pip install kolla==9.4.0:
   ERROR: pip's legacy dependency resolver does not consider dependency conflicts when selecting   packages. This behaviour is the source of the following dependency conflicts.
   gitdb2 4.0.2 requires gitdb>=4.0.1, but you'll have gitdb 0.6.4 which is incompatible.
   gitpython 2.1.15 requires gitdb2<3,>=2, but you'll have gitdb2 4.0.2 which is incompatible.
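
   That is, the recovery sequence is:
     pip install GitPython
     pip install kolla==9.4.0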
    
5. Download Docker
   sh /root/Auto_Ceph/bin/install -D
    
6. Start Docker
   sh /root/Auto_Ceph/bin/install -I
   systemctl status docker
    
7. Download the registry image
   sh /root/Auto_Ceph/bin/install -R
    
8. Run the registry
   docker run -d -v /opt/registry:/var/lib/registry -p 5000:5000 --restart=always --name registry registry:latest
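
   To confirm the registry is serving, query the standard Docker Registry v2 API (after the build step below it should list the kolla-ceph repositories):
   curl http://127.0.0.1:5000/v2/_catalog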
	
9. Edit the build configuration
   vi /root/Auto_Ceph/build/ceph-build.conf
   registry = 172.17.2.179:5000 # must be changed to your actual registry address; the other defaults are fine
           
10. Build the Ceph images, then list them
   cd /root/Auto_Ceph/build/ && sh build.sh --tag nautilus
   docker image ls
     REPOSITORY                                                TAG                 IMAGE ID            CREATED             SIZE
     172.20.163.77:5000/kolla-ceph/centos-binary-ceph-mon      nautilus            a5e8a5ff08fc        13 days ago         792MB
     172.20.163.77:5000/kolla-ceph/centos-binary-ceph-osd      nautilus            118b704bcf88        13 days ago         793MB
     172.20.163.77:5000/kolla-ceph/centos-binary-cephfs-fuse   nautilus            6b00fc4b6e2e        13 days ago         792MB
     172.20.163.77:5000/kolla-ceph/centos-binary-ceph-mds      nautilus            b206c578e594        13 days ago         792MB
     172.20.163.77:5000/kolla-ceph/centos-binary-ceph-rgw      nautilus            e9f5e4bca8ab        13 days ago         792MB
     172.20.163.77:5000/kolla-ceph/centos-binary-ceph-mgr      nautilus            b561bf427142        13 days ago         792MB
     172.20.163.77:5000/kolla-ceph/centos-binary-ceph-base     nautilus            eae0898ce208        13 days ago         792MB
     172.20.163.77:5000/kolla-ceph/centos-binary-base          nautilus            d48db6e179f9        13 days ago         410MB

    
11. type ansible || yum install ansible -y

12. Set up passwordless SSH from the deployment node to every node (a sketch follows)
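
    A minimal sketch of that SSH setup, assuming root logins and the three inventory addresses above:
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa      # skip if a key already exists
    for host in 172.20.163.244 172.20.163.67 172.20.163.238; do
        ssh-copy-id root@${host}                  # prompts once per node for the root password
    done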

13. Install the kolla-ceph tool
    cd /root/Auto_Ceph/bin && sh install -K
    
  2. Online deployment: install Docker <run the following steps on every node except the deployment node>
1. type wget || yum install wget -y

2. wget https://bootstrap.pypa.io/pip/2.7/get-pip.py --no-check-certificate

3. python get-pip.py

4. Install the docker Python module
    pip install docker

5. Install and start Docker (one loop covering all remaining nodes is sketched below)
   scp /root/Auto_Ceph/bin/install ${target_host}:/root/
   sh /root/install -D && sh /root/install -I
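
   Since these steps repeat on every remaining node, they can be driven from the deployment node in one loop (a sketch assuming passwordless SSH; the wget/pip bootstrap in steps 1-4 can be pushed out the same way):
   for target_host in 172.20.163.67 172.20.163.238; do
       scp /root/Auto_Ceph/bin/install ${target_host}:/root/
       ssh root@${target_host} "sh /root/install -D && sh /root/install -I"
   done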

Deploy the Ceph cluster

1. Edit the parameters <on the deployment node>
vi /root/Auto_Ceph/config/globals.yml

   ceph_tag: "nautilus"
   docker_registry: "registry address:port"
   ceph_osd_store_type: "bluestore"
   ceph_pool_pg_num: 32 # your PG count per pool
   ceph_pool_pgp_num: 32 # your PGP count per pool
   enable_ceph_rgw: "true or false"
   enable_ceph_mds: "true or false"
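
   A common rule of thumb for ceph_pool_pg_num is (OSD count x 100) / replica count, rounded down to a power of two; a quick sketch of the arithmetic:
   osds=4 replicas=3
   target=$(( osds * 100 / replicas ))                  # 133
   pg=1; while [ $(( pg * 2 )) -le ${target} ]; do pg=$(( pg * 2 )); done
   echo "suggested pg_num (and pgp_num): ${pg}"         # prints 128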
2. Running kolla-ceph <on the deployment node>

2.1 Initialize the Ceph host nodes

    kolla-ceph -i /root/Auto_Ceph/00-hosts os
   
2.2 Pre-deployment checks

    kolla-ceph -i /root/Auto_Ceph/00-hosts prechecks
   
2.3 Deploy the Ceph cluster

    1. bluestore OSDs: label the target disk on every OSD node (a loop version is sketched after the status output below)
       parted  /dev/vdc  -s  -- mklabel  gpt  mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS  1 -1
    2. Deploy ceph-mon, ceph-osd, ceph-mgr, ceph-rgw and ceph-mds
       kolla-ceph -i /root/Auto_Ceph/00-hosts deploy
    3. docker exec ceph_mon ceph -s
         cluster:
           id:     4a9e463a-4853-4237-a5c5-9ae9d25bacda
           health: HEALTH_OK
        
         services:
           mon: 3 daemons, quorum 172.20.163.67,172.20.163.77,172.20.163.238 (age 2h)
           mgr: 172.20.163.238(active, since 2h), standbys: 172.20.163.77, 172.20.163.67
           mds: cephfs:1 {0=devops2=up:active} 2 up:standby
           osd: 4 osds: 4 up (since 2h), 4 in (since 13d)
           rgw: 1 daemon active (radosgw.gateway)
        
         data:
           pools:   7 pools, 104 pgs
           objects: 260 objects, 7.6 KiB
           usage:   4.1 GiB used, 76 GiB / 80 GiB avail
           pgs:     104 active+clean   
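
    The labeling in step 1 must happen on every OSD node; a loop version of the same command (a sketch assuming passwordless SSH and that each node's data disk really is /dev/vdc):
    for host in 172.20.163.244 172.20.163.67 172.20.163.238; do
        ssh root@${host} "parted /dev/vdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1"
    done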

2.4 Destroy: remove the Ceph cluster's containers and volumes

    kolla-ceph -i /root/Auto_Ceph/00-hosts  destroy --yes-i-really-really-mean-it
  
2.5 Upgrade

    1. cd /root/Auto_Ceph/build/ && sh build.sh --tag new_ceph_version
    2. Set the new tag in globals.yml: ceph_tag: "new_ceph_version"
    3. kolla-ceph -i /root/Auto_Ceph/00-hosts upgrade
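
    Put together, assuming ceph_tag sits on its own line in globals.yml (the tag name is a placeholder):
    cd /root/Auto_Ceph/build/ && sh build.sh --tag new_ceph_version
    sed -i 's/^ceph_tag:.*/ceph_tag: "new_ceph_version"/' /root/Auto_Ceph/config/globals.yml
    kolla-ceph -i /root/Auto_Ceph/00-hosts upgrade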
   
2.6 Deploy or replace OSDs separately

    kolla-ceph -i /root/Auto_Ceph/00-hosts -t ceph-osd

2.7 Enable the Ceph dashboard
    enable_ceph_dashboard: true
    kolla-ceph -i /root/Auto_Ceph/00-hosts start-dashborad

2.8 Enable the object gateway (RGW) management frontend
    enable_ceph_rgw: true   
    kolla-ceph -i /root/Auto_Ceph/00-hosts start-rgw-front


3. Disk labeling explained
3.1 bluestore: WAL and DB share the data disk
  1. parted /dev/vdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1
3.2 bluestore: separate DB and WAL disks

To improve Ceph performance when only a limited number of SSDs are available, the DB and WAL are usually placed on dedicated SSD partitions

  # SSD disks: vdb, vdd   HDD disk: vdc
    1. Create the metadata partition
       parted /dev/vdc -s -- mklabel  gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_BLUE1 1 100
    2. Create the block partition
       parted /dev/vdc -s -- mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_BLUE1_B 101 100%

    3. Create the block.wal partition
       parted /dev/vdb -s -- mklabel  gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_BLUE1_W 1 1000
    4. Create the block.db partition
       parted /dev/vdd -s -- mklabel  gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_BLUE1_D 1 10000
    

As a sizing guideline, the block.db partition should be about 4% of the size of the block partition.
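
For example, applying the 4% guideline to a 1000 GB block partition:

    block_gb=1000
    db_gb=$(( block_gb * 4 / 100 ))    # 40 GB block.db for a 1000 GB block partition
    echo "block.db: ${db_gb} GB"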

3.3 Labeling for filestore
  1. parted /dev/vdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
3.4 filestore: journal on a dedicated partition

With filestore, the journal is usually placed on a dedicated SSD disk to improve Ceph performance

# SSD disk: vdb    HDD disks: vdc, vdd
1. vdc as a data disk
   parted /dev/vdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FILE1 1 -1
2. vdd as a data disk
   parted /dev/vdd -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FILE2 1 -1
3. vdb as the journal disk for both vdc and vdd
   parted /dev/vdb -s -- mklabel gpt
   parted /dev/vdb -s -- mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FILE1_J 4M 2G
   parted /dev/vdb -s -- mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FILE2_J 2G 4G
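
   After labeling, verify the partition names before deploying, since the OSD bootstrap matches disks by these GPT partition names:
   parted /dev/vdb -s print    # should show KOLLA_CEPH_OSD_BOOTSTRAP_FILE1_J and _FILE2_J
   parted /dev/vdc -s print    # should show KOLLA_CEPH_OSD_BOOTSTRAP_FILE1
   parted /dev/vdd -s print    # should show KOLLA_CEPH_OSD_BOOTSTRAP_FILE2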

Day-to-day operations

1. Enter the Ceph environment on any monitor node; from there the usual ceph commands work for maintenance
   docker exec -it ceph_mon bash
   ceph -s
     
2. Or run commands directly from outside the container
   docker exec ceph_mon ceph -s

3. Replacing a failed OSD
   docker exec ceph_mon ceph osd crush rm osd.1
   docker exec ceph_mon ceph auth del osd.1
   docker exec ceph_mon ceph osd rm osd.1
   On the failed OSD node, remove the container, then swap in the new disk: docker rm -f ceph_osd_1
   Label the new disk: parted  /dev/vdc  -s  -- mklabel  gpt  mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS  1 -1
   Deploy the new OSD: kolla-ceph -i /root/Auto_Ceph/00-hosts -t ceph-osd
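   Afterwards, confirm the replacement OSD joined cleanly:
   docker exec ceph_mon ceph osd tree    # the new osd should be listed as "up"
   docker exec ceph_mon ceph -s          # wait for all pgs to return to active+clean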

ceph dashboard

[Screenshot: Ceph cluster status]

[Screenshot: Ceph RGW]
