Using Ceph as the backend storage for OpenStack Pike
Node layout
10.1.1.1 controller
10.1.1.2 compute
10.1.1.3 middleware
10.1.1.4 network
10.1.1.5 compute2
10.1.1.6 compute3
10.1.1.7 cinder
## Distributed storage
The backend storage is Ceph, with mon_host = 10.1.1.2,10.1.1.5,10.1.1.6.
## Create the database, service, and endpoints for cinder
mysql -u root -p
create database cinder;
grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';
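To confirm the grants took effect, one can log in as the new account (a quick check, assuming the password 123456 set above):
mysql -u cinder -p123456 -e 'show databases;'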
cat admin-openrc
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=123456
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=
source admin-openrc
Create the cinder user
openstack user create --domain default --password-prompt cinder
Add the admin role to the cinder user on the service project
openstack role add --project service --user cinder admin
Create the services
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the API endpoints
openstack endpoint create --region RegionOne volumev2 public %\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal %\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin %\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 public %\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 internal %\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 admin %\(tenant_id\)s
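The endpoints can then be verified with the following (volumev3 can be listed the same way):
openstack endpoint list --service volumev2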
Create the Ceph pools
Run the following commands on the Ceph cluster:
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
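If the cluster runs Ceph Luminous or newer (the usual pairing with Pike), each pool should also be tagged with the rbd application before use; this step is an addition, not part of the original walkthrough:
ceph osd pool application enable volumes rbd
ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd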
Ceph user authorization
Because the backend storage is Ceph, the Ceph clients must be authorized so that each user can access the corresponding pools. The services that use Ceph are glance, cinder, and nova-compute.
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth list
client.cinder
	key: AQDQEWdaNU9YGBAAcEhKd6KQKHN9HeFIIS4+fw==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images
client.glance
	key: AQD4EWdaTdZjJhAAuj8CvNY59evhiGtEa9wLzw==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
Create the /etc/ceph directory on the controller, cinder, and compute nodes.
Create the keyring files on the controller, cinder, and compute nodes:
ceph auth get-or-create client.glance |ssh controller sudo tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder |ssh cinder sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder |ssh compute3 sudo tee /etc/ceph/ceph.client.cinder.keyring
Give the glance user ownership of ceph.client.glance.keyring and the cinder user ownership of ceph.client.cinder.keyring:
chown glance.glance /etc/ceph/ceph.client.glance.keyring
chown cinder.cinder /etc/ceph/ceph.client.cinder.keyring
Copy the Ceph configuration file /etc/ceph/ceph.conf to the /etc/ceph directory on the glance, cinder, and compute nodes.
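For example, a sketch using scp with the hostnames from the node list above (assuming root ssh access):
for host in controller cinder compute compute2 compute3; do scp /etc/ceph/ceph.conf $host:/etc/ceph/; done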
Install and configure the components
cinder node
yum install -y openstack-cinder python-ceph ceph-common python-rbd
The /etc/ceph directory must contain the following files:
[root@cinder ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 cinder cinder 64 Jan 26 15:52 ceph.client.cinder.keyring
-rw-r--r-- 1 root root 263 Jan 26 15:53 ceph.conf
cp /etc/cinder/cinder.conf{,.bak}
>/etc/cinder/cinder.conf
cat /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@middleware
log_dir = /var/log/cinder
enabled_backends = ceph
[database]
connection = mysql+pymysql://cinder:123456@middleware/cinder
[keystone_authtoken]
auth_uri =
auth_url =
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = f85def47-c1ac-46fe-a1d5-c0139c46d91a
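Before restarting the services, populate the cinder database; this step comes from the standard Pike install guide and only needs to run once:
su -s /bin/sh -c "cinder-manage db sync" cinder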
Restart the cinder services
systemctl restart openstack-cinder-api.service
systemctl restart openstack-cinder-scheduler.service
systemctl restart openstack-cinder-volume.service
glance node
Install the Ceph client
yum install -y python-ceph ceph-common python-rbd
The /etc/ceph directory must contain the following files:
[root@controller ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 glance glance 64 Jan 23 19:31 ceph.client.glance.keyring
-rw-r--r-- 1 root root 416 Jan 24 10:32 ceph.conf
Ceph-related configuration
/etc/glance/glance-api.conf
[DEFAULT]
# enable image locations and take advantage of copy-on-write cloning for images
show_image_direct_url = true
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
Restart the glance service
systemctl restart openstack-glance-api.service
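To verify the integration, upload an image in raw format (RBD copy-on-write cloning only works with raw images) and check that it lands in the images pool; the image name and file below are examples only:
openstack image create "cirros-raw" --file cirros.raw --disk-format raw --container-format bare --public
rbd ls images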
compute node
Install the Ceph client
yum install -y python-ceph ceph-common python-rbd
Generate a UUID with uuidgen; it must match the rbd_secret_uuid in /etc/cinder/cinder.conf:
f85def47-c1ac-46fe-a1d5-c0139c46d91a
Create the secret file
cat secret.xml
<secret ephemeral='no' private='no'>
<uuid>f85def47-c1ac-46fe-a1d5-c0139c46d91a</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
Define the secret
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret f85def47-c1ac-46fe-a1d5-c0139c46d91a --base64 $(cat ceph.client.cinder.keyring |awk '/key/{print $3}')
virsh secret-list
UUID Usage
--------------------------------------------------------------------------------
f85def47-c1ac-46fe-a1d5-c0139c46d91a ceph client.cinder secret
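The same secret, with the same UUID, must be defined on every compute node (compute, compute2, and compute3 here), otherwise volumes can only be attached on one hypervisor. A minimal sketch, assuming password-less ssh to the other nodes:
for host in compute2 compute3; do
  scp secret.xml /etc/ceph/ceph.client.cinder.keyring $host:~/
  ssh $host "virsh secret-define --file secret.xml && virsh secret-set-value --secret f85def47-c1ac-46fe-a1d5-c0139c46d91a --base64 \$(awk '/key/{print \$3}' ceph.client.cinder.keyring)"
done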
/etc/nova/nova.conf configuration
[libvirt]
virt_type = qemu
cpu_mode = none
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = f85def47-c1ac-46fe-a1d5-c0139c46d91a
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
The /etc/ceph directory must contain the following files:
[root@compute ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 cinder cinder 64 Jan 26 15:52 ceph.client.cinder.keyring
-rw-r--r-- 1 root root 263 Jan 26 15:53 ceph.conf
Restart the nova-compute service
systemctl restart openstack-nova-compute.service
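As an end-to-end check, create a test volume and confirm that a matching RBD image appears in the volumes pool (the volume name is illustrative):
openstack volume create --size 1 test-vol
openstack volume list
rbd ls volumes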