Methods for accessing a ceph cluster with cephadm, and an admin node configuration example

Published by Yin Zhengjie on 2024-08-22


Copyright notice: original work, reproduction prohibited! Violators will be held legally liable.

Contents
  • I. Accessing the ceph cluster with cephadm
    • 1 Method 1: interactive configuration with cephadm shell [creates a temporary container, which is removed automatically when the shell exits]
    • 2 Method 2: non-interactive configuration with cephadm [also creates a temporary container]
    • 3 Method 3: install the ceph-common package, which ships all ceph commands, including ceph, rbd, mount.ceph (for mounting CephFS filesystems), etc. [recommended]
  • II. Configuring a ceph admin node
    • 1. Copy the apt source and key files
    • 2. Update the package sources on the client node and install the ceph client packages
    • 3. Copy the authentication files from ceph141 to ceph142
    • 4. Test on ceph142
    • 5 Bonus: label management

I. Accessing the ceph cluster with cephadm

1 Method 1: interactive configuration with cephadm shell [creates a temporary container, which is removed automatically when the shell exits]

[root@ceph141 ~]# cephadm shell 
Inferring fsid c044ff3c-5f05-11ef-9d8b-51db832765d6
Inferring config /var/lib/ceph/c044ff3c-5f05-11ef-9d8b-51db832765d6/mon.ceph141/config
Using ceph image with id '2bc0b0f4375d' and tag 'v18' created on 2024-07-24 06:19:35 +0800 CST
quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906
root@ceph141:/# 
root@ceph141:/# ceph -s
  cluster:
    id:     c044ff3c-5f05-11ef-9d8b-51db832765d6
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph141 (age 14m)
    mgr: ceph141.gqogmi(active, since 10m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
root@ceph141:/# 
root@ceph141:/# exit 
exit
[root@ceph141 ~]# 

2 Method 2: non-interactive configuration with cephadm [also creates a temporary container]

[root@ceph141 ~]# cephadm shell -- ceph -s
Inferring fsid c044ff3c-5f05-11ef-9d8b-51db832765d6
Inferring config /var/lib/ceph/c044ff3c-5f05-11ef-9d8b-51db832765d6/mon.ceph141/config
Using ceph image with id '2bc0b0f4375d' and tag 'v18' created on 2024-07-24 06:19:35 +0800 CST
quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906
  cluster:
    id:     c044ff3c-5f05-11ef-9d8b-51db832765d6
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph141 (age 12m)
    mgr: ceph141.gqogmi(active, since 8m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
[root@ceph141 ~]# 
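If you find yourself running many non-interactive commands this way, the `cephadm shell -- …` invocation can be wrapped in a small helper. This is only a convenience sketch: it assumes `cephadm` is on the PATH of a bootstrapped host, and the function name `ceph_in_container` is my own, not part of cephadm.

```shell
# Hypothetical convenience wrapper: run any ceph subcommand inside a
# throwaway cephadm container (the container is removed after each call).
ceph_in_container() {
    cephadm shell -- ceph "$@"
}

# Example invocations on a bootstrapped host:
#   ceph_in_container -s
#   ceph_in_container health detail
```

Each call still pays the cost of starting a container, which is why installing ceph-common on the host (method 3 below) is the recommended long-term approach.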

3 Method 3: install the ceph-common package, which ships all ceph commands, including ceph, rbd, mount.ceph (for mounting CephFS filesystems), etc. [recommended]

[root@ceph141 ~]# cephadm add-repo --release reef
[root@ceph141 ~]# cephadm install ceph-common
...
Installing repo GPG key from https://download.ceph.com/keys/release.gpg...
Installing repo file at /etc/apt/sources.list.d/ceph.list...  # creates the repo file on the host and installs packages; this is slow, please be patient!
Updating package list...
Completed adding repo.
Installing packages ['ceph-common']
[root@ceph141 ~]#
[root@ceph141 ~]# ceph -v  # Duang~ the host can access the cluster now!
ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph -s  # run directly on the host, no container needed
  cluster:
    id:     c044ff3c-5f05-11ef-9d8b-51db832765d6
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph141 (age 22m)
    mgr: ceph141.gqogmi(active, since 18m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
[root@ceph141 ~]# 
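With ceph-common on the host, the cluster status is also easy to consume from scripts via `ceph -s --format json`. The sketch below extracts the `health.status` field; the JSON here is a hard-coded sample mirroring the HEALTH_WARN output above, since an actual call obviously requires a live cluster.

```shell
# Hard-coded sample of `ceph -s --format json` output (illustrative only);
# on a live cluster you would run:  status_json=$(ceph -s --format json)
status_json='{"health":{"status":"HEALTH_WARN"},"monmap":{"num_mons":1}}'

# Pull out the first "status" field without requiring jq
health=$(printf '%s' "$status_json" | grep -o '"status":"[A-Z_]*"' | head -n1 | cut -d'"' -f4)

# Report anything that is not HEALTH_OK
if [ "$health" != "HEALTH_OK" ]; then
    echo "cluster health is $health"
fi
```

For anything beyond a quick check, `jq` (or the JSON module of your scripting language) is a more robust parser than grep.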

II. Configuring a ceph admin node

1. Copy the apt source and key files

[root@ceph141 ~]# scp  /etc/apt/sources.list.d/ceph.list ceph142:/etc/apt/sources.list.d/
[root@ceph141 ~]# scp /etc/apt/trusted.gpg.d/ceph.release.gpg  ceph142:/etc/apt/trusted.gpg.d/
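When there are several client nodes, the two scp commands above generalize to a loop. The sketch below is a dry run that only prints the commands; ceph143 is included purely as an illustration of a second target host.

```shell
# DRY_RUN=1 only prints the scp commands; unset it to actually copy the files.
DRY_RUN=1
for host in ceph142 ceph143; do
    for f in /etc/apt/sources.list.d/ceph.list /etc/apt/trusted.gpg.d/ceph.release.gpg; do
        # Copy each file into the same directory on the remote host
        cmd="scp $f ${host}:$(dirname "$f")/"
        if [ -n "$DRY_RUN" ]; then
            echo "$cmd"
        else
            $cmd
        fi
    done
done
```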

2. Update the package sources on the client node and install the ceph client packages

[root@ceph142 ~]# ll /etc/apt/trusted.gpg.d/ceph.release.gpg 
-rw-r--r-- 1 root root 1143 Aug 21 16:35 /etc/apt/trusted.gpg.d/ceph.release.gpg
[root@ceph142 ~]# 
[root@ceph142 ~]# 
[root@ceph142 ~]# ll /etc/apt/sources.list.d/ceph.list 
-rw-r--r-- 1 root root 54 Aug 21 16:33 /etc/apt/sources.list.d/ceph.list
[root@ceph142 ~]# 
[root@ceph142 ~]# apt update
[root@ceph142 ~]# 
[root@ceph142 ~]# apt -y install ceph-common
[root@ceph142 ~]# 
[root@ceph142 ~]# ceph -v
ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)
[root@ceph142 ~]# 
[root@ceph142 ~]# ll /etc/ceph/
total 12
drwxr-xr-x   2 root root 4096 Aug 21 16:37 ./
drwxr-xr-x 101 root root 4096 Aug 21 16:37 ../
-rw-r--r--   1 root root   92 Jul 12 23:42 rbdmap
[root@ceph142 ~]# 
[root@ceph142 ~]# 
[root@ceph142 ~]# ceph -s  # clearly, this node cannot manage the ceph cluster yet: it has no ceph.conf or admin keyring
Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
[root@ceph142 ~]# 

3. Copy the authentication files from ceph141 to ceph142

[root@ceph141 ~]# scp /etc/ceph/ceph.{conf,client.admin.keyring} ceph142:/etc/ceph/
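The admin keyring grants full cluster privileges, so after copying it is worth confirming it is readable by root only (mode 600, as shown in the listing in the next step). A minimal sketch of that check, run against a temporary directory instead of the real /etc/ceph:

```shell
# Demonstrated on a temporary directory; on the real client node the
# directory would be /etc/ceph and the files would come from scp.
CEPH_DIR=$(mktemp -d)
touch "$CEPH_DIR/ceph.conf" "$CEPH_DIR/ceph.client.admin.keyring"

# ceph.conf may be world-readable, but the admin keyring must not be
chmod 644 "$CEPH_DIR/ceph.conf"
chmod 600 "$CEPH_DIR/ceph.client.admin.keyring"

perm=$(stat -c '%a' "$CEPH_DIR/ceph.client.admin.keyring")
echo "keyring mode: $perm"
rm -rf "$CEPH_DIR"
```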

4. Test on ceph142

[root@ceph142 ~]# ll /etc/ceph/
total 20
drwxr-xr-x   2 root root 4096 Aug 21 16:40 ./
drwxr-xr-x 101 root root 4096 Aug 21 16:37 ../
-rw-------   1 root root  151 Aug 21 16:40 ceph.client.admin.keyring
-rw-r--r--   1 root root  259 Aug 21 16:40 ceph.conf
-rw-r--r--   1 root root   92 Jul 12 23:42 rbdmap
[root@ceph142 ~]# 
[root@ceph142 ~]# 
[root@ceph142 ~]# ceph -s
  cluster:
    id:     3cb12fba-5f6e-11ef-b412-9d303a22b70f
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 94m)
    mgr: ceph141.cwgrgj(active, since 5h), standbys: ceph142.ymuzfe
    osd: 7 osds: 7 up (since 78m), 7 in (since 78m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   188 MiB used, 3.3 TiB / 3.3 TiB avail
    pgs:     1 active+clean
 
[root@ceph142 ~]# 

5 Bonus: label management

	1. Add labels
[root@ceph141 ~]# ceph orch host label add ceph142 _admin
Added label _admin to host ceph142
[root@ceph141 ~]# 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph orch host label add ceph143 _admin
Added label _admin to host ceph143
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph orch host label add ceph143 oldboyedu
Added label oldboyedu to host ceph143
[root@ceph141 ~]# 
[root@ceph141 ~]#  

	2. Remove labels
[root@ceph141 ~]# ceph orch host label rm ceph143 oldboyedu 
Removed label oldboyedu from host ceph143
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph orch host label rm ceph143 admin
Host ceph143 does not have label 'admin'. Please use 'ceph orch host ls' to list all the labels.
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph orch host label rm ceph143 _admin
Removed label _admin from host ceph143
[root@ceph141 ~]# 
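Labels can also be queried in machine-readable form with `ceph orch host ls --format json`. The sketch below filters out the hosts carrying the `_admin` label; the JSON is a hand-written sample shaped like the hosts in this lab, since the real output requires a live cluster.

```shell
# Hand-written sample of `ceph orch host ls --format json` (illustrative);
# on a live cluster:  hosts_json=$(ceph orch host ls --format json)
hosts_json='[{"hostname":"ceph141","labels":["_admin"]},{"hostname":"ceph142","labels":["_admin"]},{"hostname":"ceph143","labels":[]}]'

# Print the hostnames that carry the _admin label (grep-based, no jq needed):
# split the array on '}', keep objects mentioning _admin, extract hostnames
admin_hosts=$(printf '%s' "$hosts_json" | tr '}' '\n' | grep '_admin' | grep -o '"hostname":"[^"]*"' | cut -d'"' -f4)
echo "$admin_hosts"
```

On a real deployment, cephadm places the cluster config and admin keyring on every host labeled `_admin`, which is exactly why the label was added to ceph142 and ceph143 above.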

		
	Tips:
		1. The labels can also be viewed in the dashboard, but the page lags behind by roughly 30 seconds:
https://ceph141:8443/#/hosts


		2. In practice we usually attach appropriate labels to the admin nodes, which makes later handover of the environment much easier.

	Reference:
https://docs.ceph.com/en/latest/cephadm/install/#adding-hosts
