Quick Deployment of Ceph Distributed Storage on CentOS 7 with ceph-deploy: Operation Notes

Posted by 散盡浮華 on 2018-06-05

 

Ceph distributed storage basics were covered in detail earlier; below are brief notes on quickly deploying a Ceph environment on CentOS 7 with ceph-deploy:
1) Basic environment

192.168.10.220    ceph-admin(ceph-deploy)   mds1, mon1 (the monitor node can also live on a separate machine)
192.168.10.239    ceph-node1                  osd1   
192.168.10.212    ceph-node2                  osd2  
192.168.10.213    ceph-node3                  osd3   
-------------------------------------------------
Set the hostname on each node
# hostnamectl set-hostname ceph-admin
# hostnamectl set-hostname ceph-node1
# hostnamectl set-hostname ceph-node2
# hostnamectl set-hostname ceph-node3
-------------------------------------------------
Add the hostname mappings on each node
# cat /etc/hosts
192.168.10.220    ceph-admin
192.168.10.239    ceph-node1  
192.168.10.212    ceph-node2 
192.168.10.213    ceph-node3
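The hosts entries above can also be appended with a small idempotent loop, so re-running the setup never duplicates lines. This is a sketch with a hypothetical `add_hosts` helper, demonstrated against a scratch file; point `HOSTS_FILE` at `/etc/hosts` (as root) for real use:

```shell
# Sketch: idempotently append the cluster's host mappings.
# Demonstrated on a scratch file rather than the real /etc/hosts.
HOSTS_FILE=./hosts.demo
: > "$HOSTS_FILE"                     # start from an empty scratch file
add_hosts() {
  while read -r ip name; do
    # append an entry only if the hostname is not present yet
    grep -qw "$name" "$HOSTS_FILE" || printf '%s    %s\n' "$ip" "$name" >> "$HOSTS_FILE"
  done <<'EOF'
192.168.10.220 ceph-admin
192.168.10.239 ceph-node1
192.168.10.212 ceph-node2
192.168.10.213 ceph-node3
EOF
}
add_hosts
add_hosts                             # second run adds nothing (idempotent)
wc -l < "$HOSTS_FILE"                 # 4 entries, not 8
```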
-------------------------------------------------
Verify connectivity from each node
# ping -c 3 ceph-admin
# ping -c 3 ceph-node1
# ping -c 3 ceph-node2
# ping -c 3 ceph-node3
-------------------------------------------------
Disable the firewall and SELinux on each node
# systemctl stop firewalld
# systemctl disable firewalld
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# setenforce 0
-------------------------------------------------
Install and configure NTP on each node (the official docs recommend installing and configuring NTP on all cluster nodes to keep their system clocks in sync; with no local NTP server deployed here, the nodes sync against public NTP servers online)
# yum install ntp ntpdate ntp-doc -y
# systemctl restart ntpd
# systemctl status ntpd
-------------------------------------------------
Prepare the yum repositories on each node
Remove the default repos first (the upstream mirrors are slow)
# yum clean all
# mkdir /mnt/bak
# mv /etc/yum.repos.d/* /mnt/bak/

Download the Aliyun base and epel repos
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Add the ceph repo
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
------------------------------------------------------------
Create the cephuser account on each node and grant it sudo privileges
# useradd -d /home/cephuser -m cephuser
# echo "cephuser"|passwd --stdin cephuser
# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
# chmod 0440 /etc/sudoers.d/cephuser
# sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers

Test cephuser's sudo privileges
# su - cephuser
$ sudo su -
# 
------------------------------------------------------------
Set up mutual ssh trust between the nodes
First generate a key pair on the ceph-admin node, then copy ceph-admin's .ssh directory to the other nodes
[root@ceph-admin ~]# su - cephuser
[cephuser@ceph-admin ~]$ ssh-keygen -t rsa    #press Enter at every prompt
[cephuser@ceph-admin ~]$ cd .ssh/
[cephuser@ceph-admin .ssh]$ ls
id_rsa  id_rsa.pub
[cephuser@ceph-admin .ssh]$ cp id_rsa.pub authorized_keys

[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node1:/home/cephuser/
[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node2:/home/cephuser/
[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node3:/home/cephuser/

Then, from each node, verify the passwordless ssh trust for the cephuser account
$ ssh -p22 cephuser@ceph-admin
$ ssh -p22 cephuser@ceph-node1
$ ssh -p22 cephuser@ceph-node2
$ ssh -p22 cephuser@ceph-node3

2) Prepare the disks (on the three nodes ceph-node1, ceph-node2, ceph-node3)

Don't use a disk that is too small for testing, or adding it later will fail; 20GB or larger is recommended.
All three nodes are virtual machines created on WebvirtMgr; see https://www.cnblogs.com/kevingrace/p/8387999.html for creating and attaching VM disks.
As below, a 20GB raw disk is attached to each of the three nodes.

Check the disk
$ sudo fdisk -l /dev/vdb
Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Format the disk
$ sudo parted -s /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
$ sudo mkfs.xfs /dev/vdb -f

Check the filesystem type (xfs)
$ sudo blkid -o value -s TYPE /dev/vdb

3) Deployment (quick deployment from the ceph-admin node with ceph-deploy)

[root@ceph-admin ~]# su - cephuser
     
Install ceph-deploy
[cephuser@ceph-admin ~]$ sudo yum update -y && sudo yum install ceph-deploy -y
     
Create a cluster working directory
[cephuser@ceph-admin ~]$ mkdir cluster
[cephuser@ceph-admin ~]$ cd cluster/
     
Create the cluster (followed by the monitor node's hostname; here the monitor node and the admin node are the same machine, ceph-admin)
[cephuser@ceph-admin cluster]$ ceph-deploy new ceph-admin
.........
[ceph-admin][DEBUG ] IP addresses found: [u'192.168.10.220']
[ceph_deploy.new][DEBUG ] Resolving host ceph-admin
[ceph_deploy.new][DEBUG ] Monitor ceph-admin at 192.168.10.220
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-admin']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.10.220']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
     
Edit ceph.conf (note: mon_host must be in the same subnet as public network!)
[cephuser@ceph-admin cluster]$ vim ceph.conf     #add the two lines below
......
public network = 192.168.10.220/24
osd pool default size = 3
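As a quick sanity check (not part of the deployment itself), plain bash arithmetic can confirm that the monitor's IP really falls inside the public network CIDR. The `in_cidr` helper below is a hypothetical sketch, not a Ceph tool:

```shell
# Hypothetical helper: does an IPv4 address fall inside a CIDR?
# Used here to confirm the mon IP (192.168.10.220) is covered by public network.
in_cidr() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} bits=${cidr#*/}
  ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$ip") & mask )) -eq $(( $(ip2int "$net") & mask )) ]
}
in_cidr 192.168.10.220 192.168.10.220/24 && echo "mon is inside public network"
```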
     
Install ceph (this takes a while, be patient....)
[cephuser@ceph-admin cluster]$ ceph-deploy install ceph-admin ceph-node1 ceph-node2 ceph-node3
     
Initialize the monitor node and gather all the keys
[cephuser@ceph-admin cluster]$ ceph-deploy mon create-initial
[cephuser@ceph-admin cluster]$ ceph-deploy gatherkeys ceph-admin
     
Add OSDs to the cluster
List all available disks on the OSD nodes
[cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
     
Wipe the partitions on all osd nodes with the zap subcommand
[cephuser@ceph-admin cluster]$ ceph-deploy disk zap ceph-node1:/dev/vdb ceph-node2:/dev/vdb ceph-node3:/dev/vdb
     
Prepare the OSDs (prepare command)
[cephuser@ceph-admin cluster]$ ceph-deploy osd prepare ceph-node1:/dev/vdb ceph-node2:/dev/vdb ceph-node3:/dev/vdb
     
Activate the OSDs (note: since ceph partitioned the disk, the data partition on /dev/vdb is /dev/vdb1)
[cephuser@ceph-admin cluster]$ ceph-deploy osd activate ceph-node1:/dev/vdb1 ceph-node2:/dev/vdb1 ceph-node3:/dev/vdb1
---------------------------------------------------------------------------------------------
The following error may appear:
[ceph-node1][WARNIN] ceph_disk.main.Error: Error: /dev/vdb1 is not a directory or block device
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/vdb1
     
This error did not affect the deployment, however; on the three osd nodes the disks show as successfully mounted:
[cephuser@ceph-node1 ~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0              11:0    1  4.2G  0 rom
vda             252:0    0   70G  0 disk
├─vda1          252:1    0    1G  0 part /boot
└─vda2          252:2    0   69G  0 part
  ├─centos-root 253:0    0 43.8G  0 lvm  /
  ├─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0 21.4G  0 lvm  /home
vdb             252:16   0   20G  0 disk
├─vdb1          252:17   0   15G  0 part /var/lib/ceph/osd/ceph-0       #mounted successfully
└─vdb2          252:18   0    5G  0 part
    
[cephuser@ceph-node2 ~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0              11:0    1  4.2G  0 rom
vda             252:0    0   70G  0 disk
├─vda1          252:1    0    1G  0 part /boot
└─vda2          252:2    0   69G  0 part
  ├─centos-root 253:0    0 43.8G  0 lvm  /
  ├─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0 21.4G  0 lvm  /home
vdb             252:16   0   20G  0 disk
├─vdb1          252:17   0   15G  0 part /var/lib/ceph/osd/ceph-1        #mounted successfully
└─vdb2          252:18   0    5G  0 part
    
[cephuser@ceph-node3 ~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0              11:0    1  4.2G  0 rom
vda             252:0    0   70G  0 disk
├─vda1          252:1    0    1G  0 part /boot
└─vda2          252:2    0   69G  0 part
  ├─centos-root 253:0    0 43.8G  0 lvm  /
  ├─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0 21.4G  0 lvm  /home
vdb             252:16   0   20G  0 disk
├─vdb1          252:17   0   15G  0 part /var/lib/ceph/osd/ceph-2       #mounted successfully
└─vdb2          252:18   0    5G  0 part
    
Inspect the OSDs
[cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
........
[ceph-node1][DEBUG ]  /dev/vdb2 ceph journal, for /dev/vdb1      #seeing these two partitions means it succeeded
[ceph-node1][DEBUG ]  /dev/vdb1 ceph data, active, cluster ceph, osd.0, journal /dev/vdb2
........
[ceph-node2][DEBUG ]  /dev/vdb2 ceph journal, for /dev/vdb1        
[ceph-node2][DEBUG ]  /dev/vdb1 ceph data, active, cluster ceph, osd.1, journal /dev/vdb2
.......
[ceph-node3][DEBUG ]  /dev/vdb2 ceph journal, for /dev/vdb1
[ceph-node3][DEBUG ]  /dev/vdb1 ceph data, active, cluster ceph, osd.2, journal /dev/vdb2
    
    
Use ceph-deploy to push the config file and admin keyring to the admin node and the Ceph nodes, so that running Ceph CLI commands no longer requires specifying the monitor
address and ceph.client.admin.keyring
[cephuser@ceph-admin cluster]$ ceph-deploy admin ceph-admin ceph-node1 ceph-node2 ceph-node3
    
Fix the keyring permissions
[cephuser@ceph-admin cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
    
Check ceph health
[cephuser@ceph-admin cluster]$ sudo ceph health
HEALTH_OK
[cephuser@ceph-admin cluster]$ sudo ceph -s
    cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-admin=192.168.10.220:6789/0}
            election epoch 3, quorum 0 ceph-admin
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
            100 MB used, 45946 MB / 46046 MB avail
                  64 active+clean
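For scripting, the health field can be pulled out of captured `ceph -s` output with a little awk; the sample text below is the cluster status shown above:

```shell
# Parse the health field out of captured `ceph -s` output.
status='    cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-admin=192.168.10.220:6789/0}'
health=$(printf '%s\n' "$status" | awk '$1 == "health" { print $2; exit }')
echo "$health"    # HEALTH_OK
```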
   
Check the ceph osd running state
[cephuser@ceph-admin ~]$ ceph osd stat
     osdmap e19: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
  
View the osd tree
[cephuser@ceph-admin ~]$ ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04376 root default                                        
-2 0.01459     host ceph-node1                                 
 0 0.01459         osd.0            up  1.00000          1.00000
-3 0.01459     host ceph-node2                                 
 1 0.01459         osd.1            up  1.00000          1.00000
-4 0.01459     host ceph-node3                                 
 2 0.01459         osd.2            up  1.00000          1.00000
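The same output can be checked mechanically, e.g. counting how many OSDs report "up"; the sample text is the tree above:

```shell
# Count OSDs reporting "up" in captured `ceph osd tree` output.
tree='-1 0.04376 root default
-2 0.01459     host ceph-node1
 0 0.01459         osd.0            up  1.00000          1.00000
-3 0.01459     host ceph-node2
 1 0.01459         osd.1            up  1.00000          1.00000
-4 0.01459     host ceph-node3
 2 0.01459         osd.2            up  1.00000          1.00000'
up=$(printf '%s\n' "$tree" | awk '$3 ~ /^osd\./ && $4 == "up"' | wc -l)
echo "OSDs up: $up"    # OSDs up: 3
```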
  
Check the monitor node's service
[cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mon@ceph-admin
[cephuser@ceph-admin cluster]$ ps -ef|grep ceph|grep 'cluster'
ceph     28190     1  0 11:44 ?        00:00:01 /usr/bin/ceph-mon -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph
   
Check the osd service on each of ceph-node1, ceph-node2 and ceph-node3; it is already running.
[cephuser@ceph-node1 ~]$ sudo systemctl status ceph-osd@0.service         #use start/restart to start or restart it
[cephuser@ceph-node1 ~]$ sudo ps -ef|grep ceph|grep "cluster"
ceph     28749     1  0 11:44 ?        00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
cephuser 29197 29051  0 11:54 pts/2    00:00:00 grep --color=auto cluster
 
[cephuser@ceph-node2 ~]$ sudo systemctl status ceph-osd@1.service
[cephuser@ceph-node2 ~]$ sudo ps -ef|grep ceph|grep "cluster"
ceph     28749     1  0 11:44 ?        00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
cephuser 29197 29051  0 11:54 pts/2    00:00:00 grep --color=auto cluster
 
[cephuser@ceph-node3 ~]$ sudo systemctl status ceph-osd@2.service
[cephuser@ceph-node3 ~]$ sudo ps -ef|grep ceph|grep "cluster"
ceph     28749     1  0 11:44 ?        00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
cephuser 29197 29051  0 11:54 pts/2    00:00:00 grep --color=auto cluster

4) Create a filesystem

First check the metadata server status; no mds exists by default.
[cephuser@ceph-admin ~]$ ceph mds stat
e1:
 
Create the metadata server (ceph-admin doubles as the mds node).
Note: if you do not create an mds node, clients will not be able to mount the ceph filesystem!!
[cephuser@ceph-admin ~]$ pwd
/home/cephuser
[cephuser@ceph-admin ~]$ cd cluster/
[cephuser@ceph-admin cluster]$ ceph-deploy mds create ceph-admin
 
Check the mds status again; it is now running
[cephuser@ceph-admin cluster]$ ceph mds stat
e2:, 1 up:standby

[cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mds@ceph-admin
[cephuser@ceph-admin cluster]$ ps -ef|grep cluster|grep ceph-mds
ceph     29093     1  0 12:46 ?        00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph
 
Create pools. A pool is a logical partition used by ceph to store data; it acts as a namespace.
[cephuser@ceph-admin cluster]$ ceph osd lspools           #list the existing pools first
0 rbd,
 
A freshly created ceph cluster has only the rbd pool, so create new ones:
[cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_data 10       #the trailing number is the PG count
pool 'cephfs_data' created
 
[cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_metadata 10     #pool for the filesystem metadata
pool 'cephfs_metadata' created
 
[cephuser@ceph-admin cluster]$ ceph fs new myceph cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
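A PG count of 10 is fine for a toy cluster like this one; the usual rule of thumb from the Ceph docs is roughly (OSDs × 100) / replicas, rounded up to a power of two. A sketch of that arithmetic:

```shell
# Rule-of-thumb PG count: (osds * 100) / replicas, rounded up to a power of 2.
osds=3
replicas=3
target=$(( osds * 100 / replicas ))   # 100 for this 3-OSD, size-3 cluster
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "suggested pg_num: $pg"          # suggested pg_num: 128
```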
 
Check the pools again
[cephuser@ceph-admin cluster]$ ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,
 
Check the mds status
[cephuser@ceph-admin cluster]$ ceph mds stat
e5: 1/1/1 up {0=ceph-admin=up:active}
 
Check the ceph cluster status
[cephuser@ceph-admin cluster]$ sudo ceph -s
    cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-admin=192.168.10.220:6789/0}
            election epoch 3, quorum 0 ceph-admin
      fsmap e5: 1/1/1 up {0=ceph-admin=up:active}           #this line is new
     osdmap e19: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v48: 84 pgs, 3 pools, 2068 bytes data, 20 objects
            101 MB used, 45945 MB / 46046 MB avail
                  84 active+clean
 
Check the ceph cluster port (lsof prints the service name smc-https, which /etc/services maps to the monitor port 6789)
[cephuser@ceph-admin cluster]$ sudo lsof -i:6789
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
ceph-mon 28190 ceph   10u  IPv4  70217      0t0  TCP ceph-admin:smc-https (LISTEN)
ceph-mon 28190 ceph   19u  IPv4  70537      0t0  TCP ceph-admin:smc-https->ceph-node1:41308 (ESTABLISHED)
ceph-mon 28190 ceph   20u  IPv4  70560      0t0  TCP ceph-admin:smc-https->ceph-node2:48516 (ESTABLISHED)
ceph-mon 28190 ceph   21u  IPv4  70583      0t0  TCP ceph-admin:smc-https->ceph-node3:44948 (ESTABLISHED)
ceph-mon 28190 ceph   22u  IPv4  72643      0t0  TCP ceph-admin:smc-https->ceph-admin:51474 (ESTABLISHED)
ceph-mds 29093 ceph    8u  IPv4  72642      0t0  TCP ceph-admin:51474->ceph-admin:smc-https (ESTABLISHED)

5) Mount the ceph storage on a client (using the fuse method)

Install ceph-fuse (the client here runs CentOS 6)
[root@centos6-02 ~]# rpm -Uvh https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@centos6-02 ~]# yum install -y ceph-fuse

Create the mount point
[root@centos6-02 ~]# mkdir /cephfs

Copy the config file
Copy ceph.conf from the admin node to the client (192.168.10.220 is the admin node)
[root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/etc/ceph/ceph.conf /etc/ceph/
or
[root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/home/cephuser/cluster/ceph.conf /etc/ceph/    #the files at both paths are identical

Copy the keyring
Copy ceph.client.admin.keyring from the admin node to the client
[root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
or
[root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/home/cephuser/cluster/ceph.client.admin.keyring /etc/ceph/

Check the ceph auth entries
[root@centos6-02 ~]# ceph auth list
installed auth entries:

mds.ceph-admin
    key: AQAZZxdbH6uAOBAABttpSmPt6BXNtTJwZDpSJg==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
osd.0
    key: AQCuWBdbV3TlBBAA4xsAE4QsFQ6vAp+7pIFEHA==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQC6WBdbakBaMxAAsUllVWdttlLzEI5VNd/41w==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQDJWBdbz6zNNhAATwzL2FqPKNY1IvQDmzyOSg==
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQCNWBdbf1QxAhAAkryP+OFy6wGnKR8lfYDkUA==
    caps: [mds] allow *
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQCNWBdbnjLILhAAT1hKtLEzkCrhDuTLjdCJig==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
    key: AQCOWBdbmxEANBAAiTMJeyEuSverXAyOrwodMQ==
    caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
    key: AQCNWBdbiO1bERAARLZaYdY58KLMi4oyKmug4Q==
    caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
    key: AQCNWBdboBLXIBAAVTsD2TPJhVSRY2E9G7eLzQ==
    caps: [mon] allow profile bootstrap-rgw


Mount the ceph cluster storage under /cephfs on the client
[root@centos6-02 ~]# ceph-fuse -m 192.168.10.220:6789 /cephfs
2018-06-06 14:28:54.149796 7f8d5c256760 -1 init, newargv = 0x4273580 newargc=11
ceph-fuse[16107]: starting ceph client
ceph-fuse[16107]: starting fuse

[root@centos6-02 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_centos602-lv_root
                       50G  3.5G   44G   8% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/vda1             477M   41M  412M   9% /boot
/dev/mapper/vg_centos602-lv_home
                       45G   52M   43G   1% /home
/dev/vdb1              20G  5.1G   15G  26% /data/osd1
ceph-fuse              45G  100M   45G   1% /cephfs

As shown above, the ceph storage is mounted successfully: three osd nodes at 15GB each (check the ceph data partition size on a node with "lsblk"), 45GB in total!

Unmount the ceph storage
[root@centos6-02 ~]# umount /cephfs

Note:
Once half or more of the OSD nodes are down, the Ceph storage mounted on remote clients stops working. In this example there are 3 OSD nodes: with one OSD node down (e.g. crashed),
the mounted Ceph storage still works normally; with two OSD nodes down, it no longer does (reads and writes under the mounted directory simply hang).
Once the OSD nodes recover, the storage becomes usable again. After a crashed OSD node reboots, its osd daemon comes back up automatically.
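The behavior described above follows from the pool's replication settings: with `osd pool default size = 3` and the default `min_size` of 2 for a size-3 pool, I/O pauses as soon as fewer than two replicas of a PG remain available, i.e. after a second OSD failure. A sketch of the arithmetic (verify `min_size` on your own cluster with `ceph osd pool get <pool> min_size`):

```shell
# With replica count `size` and write floor `min_size`, I/O keeps flowing
# while at least min_size replicas are up, so the number of simultaneous
# OSD failures tolerated is size - min_size.
size=3
min_size=2   # assumed default for a size-3 pool; check with `ceph osd pool get`
echo "OSD failures tolerated before I/O pauses: $(( size - min_size ))"
```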

============================ Other notes ===========================

-------------------------------------------------------------------
Tearing down the ceph storage
Purge the packages
[cephuser@ceph-admin ~]$ ceph-deploy purge ceph-admin ceph-node1 ceph-node2 ceph-node3
 
Purge the configuration
[cephuser@ceph-admin ~]$ ceph-deploy purgedata ceph-admin ceph-node1 ceph-node2 ceph-node3
[cephuser@ceph-admin ~]$ ceph-deploy forgetkeys
 
Remove any leftover config files on every node
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/osd/*
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/mon/*
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/mds/*
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/bootstrap-mds/*
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/bootstrap-osd/*
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/bootstrap-mon/*
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/tmp/*
[cephuser@ceph-admin ~]$ sudo rm -rf /etc/ceph/*
[cephuser@ceph-admin ~]$ sudo rm -rf /var/run/ceph/*

-----------------------------------------------------------------------
Browse the ceph commands, including ceph osd, ceph mds, ceph mon and ceph pg
[cephuser@ceph-admin ~]$ ceph --help              

-----------------------------------------------------------------------
If you hit the following error:
[cephuser@ceph-admin ~]$ ceph osd tree
2018-06-06 14:56:27.843841 7f8a0b6dd700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2018-06-06 14:56:27.843853 7f8a0b6dd700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2018-06-06 14:56:27.843854 7f8a0b6dd700  0 librados: client.admin initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound
[cephuser@ceph-admin ~]$ ceph osd stat
2018-06-06 14:55:58.165882 7f377a1c9700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2018-06-06 14:55:58.165894 7f377a1c9700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2018-06-06 14:55:58.165896 7f377a1c9700  0 librados: client.admin initialization error (2) No such file or directory

Fix:
[cephuser@ceph-admin ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

[cephuser@ceph-admin ~]$ ceph osd stat
     osdmap e35: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
[cephuser@ceph-admin ~]$ ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.04376 root default                                          
-2 0.01459     host ceph-node1                                   
 0 0.01459         osd.0            up  1.00000          1.00000 
-3 0.01459     host ceph-node2                                   
 1 0.01459         osd.1            up  1.00000          1.00000 
-4 0.01459     host ceph-node3                                   
 2 0.01459         osd.2            up  1.00000          1.00000 
