Part 1: Keystone component deployment
All operations in this part are performed on the controller node only.
1. Install and configure Keystone
# 1. Install the Keystone packages
# mod_wsgi: plugin that adds WSGI support to the web server
# httpd: the Apache package
# openstack-keystone: the Keystone package
[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi

# Check the keystone user
[root@controller ~]# cat /etc/passwd | grep keystone
keystone:x:163:163:OpenStack Keystone Daemons:/var/lib/keystone:/sbin/nologin

# Check the keystone group
[root@controller ~]# cat /etc/group | grep keystone
keystone:x:163:

# 2. Create the keystone database and grant privileges
[root@controller ~]# mysql -uroot -p000000

# Create the database
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.000 sec)

# Grant the keystone user local login privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)

# Grant the keystone user login privileges from any remote host
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)

# Exit the database
MariaDB [(none)]> quit
Bye

# 3. Edit the Keystone configuration file
[root@controller ~]# vi /etc/keystone/keystone.conf
# In the [database] section, add the database connection string
connection=mysql+pymysql://keystone:000000@controller/keystone
# In the [token] section, uncomment and set the token provider
provider = fernet

# 4. Initialize the keystone database
# Sync the database
# su keystone: switch to the keystone user
# '-s /bin/sh': specify the shell used to run the command
# '-c': the command to run
[root@controller ~]# su keystone -s /bin/sh -c "keystone-manage db_sync"

# Check the database
[root@controller ~]# mysql -uroot -p000000
# Switch to the keystone database
MariaDB [(none)]> use keystone;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
# List the tables in the keystone database
MariaDB [keystone]> show tables;
+------------------------------------+
| Tables_in_keystone                 |
+------------------------------------+
| access_rule                        |
| access_token                       |
| application_credential             |
| application_credential_access_rule |
| application_credential_role        |
# Seeing many tables like the above means the database sync succeeded.
2. Keystone component initialization
# 1. Initialize the Fernet key repositories
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# This creates /etc/keystone/fernet-keys and generates two fernet keys in it, used for encryption and decryption
[root@controller fernet-keys]# pwd
/etc/keystone/fernet-keys
[root@controller fernet-keys]# du -sh *
4.0K    0
4.0K    1

[root@controller keystone]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# This creates /etc/keystone/credential-keys and generates two fernet keys used to encrypt/decrypt user credentials
[root@controller credential-keys]# pwd
/etc/keystone/credential-keys
[root@controller credential-keys]# du -sh *
4.0K    0
4.0K    1

# 2. Initialize the admin identity
# OpenStack ships with a default admin user that does not yet have a password or the other information required to log in. Use `keystone-manage bootstrap` to initialize the login credentials:
#   --bootstrap-password: set the admin password
#   --bootstrap-admin-url / --bootstrap-internal-url / --bootstrap-public-url: set the admin, internal, and public service endpoints
#   --bootstrap-region-id: set the region ID
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \
> --bootstrap-admin-url http://controller:5000/v3 \
> --bootstrap-internal-url http://controller:5000/v3 \
> --bootstrap-public-url http://controller:5000/v3 \
> --bootstrap-region-id RegionOne
# After this command runs, the keystone database holds the credentials needed to log in.

# 3. Configure the web service
# (1) Add WSGI support to Apache
# Symlink wsgi-keystone.conf into /etc/httpd/conf.d/ as an Apache configuration file
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller ~]# ls /etc/httpd/conf.d/
autoindex.conf  README  userdir.conf  welcome.conf  wsgi-keystone.conf

# (2) Edit the Apache server configuration
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
# Set ServerName to the IP address or domain name the web service runs on (file line 96)
96 ServerName controller

# (3) Start Apache
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
3. Simulated login verification
# Create a file to store the identity credentials
[root@controller ~]# vi admin-login
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

# Import the environment variables
[root@controller ~]# source admin-login

# View the current environment
[root@controller ~]# export -p
declare -x OS_AUTH_URL="http://controller:5000/v3"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_IMAGE_API_VERSION="2"
declare -x OS_PASSWORD="000000"
declare -x OS_PROJECT_DOMAIN_NAME="Default"
declare -x OS_PROJECT_NAME="admin"
declare -x OS_USERNAME="admin"
declare -x OS_USER_DOMAIN_NAME="Default"
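With the credentials loaded, a quick sanity check (a minimal verification sketch, assuming Apache and the keystone WSGI application from the previous steps are running) is to request a token; if authentication is wired up correctly, the command prints a table whose id field contains a Fernet token string:

# Request a token to verify the whole authentication chain
[root@controller ~]# openstack token issue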
4. Verify the Keystone service
# Create a project named 'project' in the default domain
[root@controller ~]# openstack project create --domain default project
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e3a549077f354998aa1a75677cfde62e |
| is_domain   | False                            |
| name        | project                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

# List existing projects
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 4188570a34464b938ed3fa7e08681df8 | admin   |
| e3a549077f354998aa1a75677cfde62e | project |
+----------------------------------+---------+

# Create a role named user
[root@controller ~]# openstack role create user
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| domain_id   | None                             |
| id          | 700ec993d3cf456fa591c03e72f37856 |
| name        | user                             |
| options     | {}                               |
+-------------+----------------------------------+

# List current roles
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+

# List existing domains
[root@controller ~]# openstack domain list
+---------+---------+---------+--------------------+
| ID      | Name    | Enabled | Description        |
+---------+---------+---------+--------------------+
| default | Default | True    | The default domain |
+---------+---------+---------+--------------------+

# List existing users
[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| f4f16d960e0643d7b5a35db152c87dae | admin |
+----------------------------------+-------+
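To confirm that the new project and role are actually usable, you can authenticate as an ordinary user (an illustrative sketch: the demo user and its role assignment below are hypothetical additions, not part of the original deployment):

# Create a throwaway user and give it the user role on the project project
[root@controller ~]# openstack user create --domain default --password 000000 demo
[root@controller ~]# openstack role add --project project --user demo user
# Request a token as demo; the command-line options override the admin variables exported earlier
[root@controller ~]# openstack --os-username demo --os-password 000000 \
  --os-project-name project --os-user-domain-name Default \
  --os-project-domain-name Default token issue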
Part 2: Glance deployment
Install the OpenStack image service. All operations are performed on the controller node only.
1. Install Glance
# 1. Install the Glance packages
# The stock repository is missing packages, so download the Aliyun repo file to this path
[root@controller yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@controller ~]# yum install -y openstack-glance

# Installation automatically creates the glance user and group
[root@controller ~]# cat /etc/passwd | grep glance
glance:x:161:161:OpenStack Glance Daemons:/var/lib/glance:/sbin/nologin
[root@controller ~]# cat /etc/group | grep glance
glance:x:161:

# 2. Create the glance database and grant privileges
# Connect to the database
[root@controller ~]# mysql -uroot -p000000
# Create the glance database
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.001 sec)
# Grant the glance user local and remote login privileges on the new database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
# Exit the database
MariaDB [(none)]> quit
Bye
2. Configure Glance
Glance's configuration file is /etc/glance/glance-api.conf. Editing it connects Glance to the database and to Keystone.
# 1. Back up the configuration file
[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak

# 2. Strip comments and blank lines from the configuration file
# grep: search a file for matching strings. -E: use extended regular expressions; -v: invert the match (keep non-matching lines)
# ^: match the start of a line; $: match the end of a line; |: match either alternative
[root@controller ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf

# 3. Edit the configuration file
# default_store = file: the default storage backend is the local filesystem
# filesystem_store_datadir = /var/lib/glance/images/ : the directory where image files are actually stored
[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance

[glance_store]
stores = file
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = glance
password = 000000
project_name = project
user_domain_name = Default
project_domain_name = Default

[paste_deploy]
flavor = keystone

# 4. Initialize the database
# Sync the database: populate it with the table definitions shipped with the package
[root@controller ~]# su glance -s /bin/sh -c "glance-manage db_sync"

# Check the database
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> use glance
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| alembic_version                  |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
.....
3. Glance component initialization
After Glance is installed and configured, create its user and password, assign it a role, and create the service and service endpoints.
(1) Create the glance user and assign a role
# Import the environment variables to log in
[root@controller ~]# source admin-login

# Create a user named glance with password 000000 in the default domain
[root@controller ~]# openstack user create --domain default --password 000000 glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 81238b556a444c8f80cb3d7dc72a24d3 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# List existing projects
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 4188570a34464b938ed3fa7e08681df8 | admin   |
| e3a549077f354998aa1a75677cfde62e | project |
+----------------------------------+---------+

# List existing users
[root@controller ~]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| f4f16d960e0643d7b5a35db152c87dae | admin  |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance |
+----------------------------------+--------+

# Grant the glance user the admin role on the project project
[root@controller ~]# openstack role add --project project --user glance admin

# Show details of the glance user
[root@controller ~]# openstack user show glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 81238b556a444c8f80cb3d7dc72a24d3 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
(2) Create the glance service and service endpoints
# 1. Create the service
# Create a service named glance of type image
[root@controller ~]# openstack service create --name glance image
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 324a07034ea4453692570e3edf73cf2c |
| name    | glance                           |
| type    | image                            |
+---------+----------------------------------+

# 2. Create the image service endpoints
# There are three kinds of service endpoints: public, internal (for internal components), and admin.
# Create the public endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ab3208eb36fd4a8db9c90b9113da9bbb |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# Create the internal endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 54994f15e8184e099334760060b9e2a9 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# Create the admin endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 97ae61936255471f9f55858cc0443e61 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# List the service endpoints
[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| 0d31919afb564c8aa52ec5eddf474a55 | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3 |
| 243f1e7ace4f444cba2978b900aeb165 | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3 |
| 54994f15e8184e099334760060b9e2a9 | RegionOne | glance       | image        | True    | internal  | http://controller:9292    |
| 702df46845be40fb9e75fb988314ee90 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3 |
| 97ae61936255471f9f55858cc0443e61 | RegionOne | glance       | image        | True    | admin     | http://controller:9292    |
| ab3208eb36fd4a8db9c90b9113da9bbb | RegionOne | glance       | image        | True    | public    | http://controller:9292    |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
(3) Start the Glance service
# Enable the Glance service at boot
[root@controller ~]# systemctl enable openstack-glance-api
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
# Start the Glance service
[root@controller ~]# systemctl start openstack-glance-api
4. Verify the Glance service
# Method 1: check port usage (is 9292 in use?)
[root@controller ~]# netstat -tnlup | grep 9292
tcp        0      0 0.0.0.0:9292    0.0.0.0:*    LISTEN    5740/python2

# Method 2: check the service status (active (running) means the service is running)
[root@controller ~]# systemctl status openstack-glance-api
● openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-10-19 17:09:13 CST;
5. Create an image with Glance
# Install the lrzsz tool
[root@controller ~]# yum install -y lrzsz

# Upload cirros-0.5.1-x86_64-disk.img to /root
[root@controller ~]# rz
z waiting to receive.**B0100000023be50
[root@controller ~]# ls
admin-login  cirros-0.5.1-x86_64-disk.img

# Create an image with Glance
[root@controller ~]# openstack image create --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros

# List images
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| a859fddb-3ec1-4cd8-84ec-482112af929b | cirros | active |
+--------------------------------------+--------+--------+

# Delete the image
[root@controller ~]# openstack image delete a859fddb-3ec1-4cd8-84ec-482112af929b

# Recreate the image
[root@controller ~]# openstack image create --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 1d3062cd89af34e419f7100277f38b2b                                                                                                                                                           |
| container_format | bare                                                                                                                                                                                       |
| created_at       | 2022-10-19T09:20:03Z                                                                                                                                                                       |
| disk_format      | qcow2                                                                                                                                                                                      |
| file             | /v2/images/7096885c-0a58-4086-8014-b92affceb0e8/file                                                                                                                                       |
| id               | 7096885c-0a58-4086-8014-b92affceb0e8                                                                                                                                                       |
| min_disk         | 0                                                                                                                                                                                          |
| min_ram          | 0                                                                                                                                                                                          |
| name             | cirros                                                                                                                                                                                     |
| owner            | 4188570a34464b938ed3fa7e08681df8                                                                                                                                                           |
| properties       | os_hash_algo='sha512', os_hash_value='553d220ed58cfee7dafe003c446a9f197ab5edf8ffc09396c74187cf83873c877e7ae041cb80f3b91489acf687183adcd689b53b38e3ddd22e627e7f98a09c46', os_hidden='False' |
| protected        | False                                                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                                                          |
| size             | 16338944                                                                                                                                                                                   |
| status           | active                                                                                                                                                                                     |
| tags             |                                                                                                                                                                                            |
| updated_at       | 2022-10-19T09:20:03Z                                                                                                                                                                       |
| virtual_size     | None                                                                                                                                                                                       |
| visibility       | public                                                                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

# Inspect the image file on disk
# /var/lib/glance/images/ is the image storage location defined in glance-api.conf.
[root@controller ~]# ll /var/lib/glance/images/
total 15956
-rw-r----- 1 glance glance 16338944 Oct 19 17:20 7096885c-0a58-4086-8014-b92affceb0e8
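Optionally, the upload's integrity can be verified: in the Image API v2 the checksum field is the MD5 digest of the image file, so the local file's digest should match what Glance reported above (a quick check, assuming md5sum is available on the controller):

# The printed digest should equal the checksum field, 1d3062cd89af34e419f7100277f38b2b
[root@controller ~]# md5sum cirros-0.5.1-x86_64-disk.img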
Part 3: Placement service deployment
Starting with the Stein release, system resource tracking was split out of Nova into an independent component.
1. Install the Placement packages
# Install the Placement packages
# Installation automatically creates the placement user and group
[root@controller ~]# yum install -y openstack-placement-api

# Confirm the user and group were created
[root@controller ~]# cat /etc/passwd | grep placement
placement:x:993:990:OpenStack Placement:/:/bin/bash
[root@controller ~]# cat /etc/group | grep placement
placement:x:990:

# Log in to the database
[root@controller ~]# mysql -uroot -p000000
# Create the placement database
MariaDB [(none)]> create database placement;
Query OK, 1 row affected (0.000 sec)
# Grant privileges on the database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> quit
Bye
2. Configure the Placement service
# Back up the configuration file
[root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
[root@controller ~]# ls /etc/placement/
placement.conf  placement.conf.bak  policy.json

# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
[root@controller ~]# cat /etc/placement/placement.conf
[DEFAULT]
[api]
[cors]
[keystone_authtoken]
[oslo_policy]
[placement]
[placement_database]
[profiler]

# Edit the configuration file
[root@controller ~]# vi /etc/placement/placement.conf
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000

[placement_database]
connection = mysql+pymysql://placement:000000@controller/placement

# Edit the Apache configuration file
# Inside the "VirtualHost" block, add the Directory stanza shown below
[root@controller ~]# vi /etc/httpd/conf.d/00-placement-api.conf
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  ... (omitted)
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
  </Directory>
</VirtualHost>

# Check the Apache version
[root@controller ~]# httpd -v
Server version: Apache/2.4.6 (CentOS)
Server built:   Jan 25 2022 14:08:43

# Sync the database, populating it with the table definitions
[root@controller ~]# su placement -s /bin/sh -c "placement-manage db sync"

# Verify the database sync
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> use placement;
MariaDB [placement]> show tables;
+------------------------------+
| Tables_in_placement          |
+------------------------------+
| alembic_version              |
| allocations                  |
| consumers                    |
| inventories                  |
| placement_aggregates         |
| projects                     |
| resource_classes             |
| resource_provider_aggregates |
| resource_provider_traits     |
| resource_providers           |
| traits                       |
| users                        |
+------------------------------+
12 rows in set (0.000 sec)
3. Placement component initialization
# Import the environment variables to log in
[root@controller ~]# source admin-login

# Create the placement user
[root@controller ~]# openstack user create --domain default --password 000000 placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | e0d6a46f9b1744d8a7ab0332ab45d59c |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Grant the placement user the admin role
[root@controller ~]# openstack role add --project project --user placement admin

# Create the placement service
[root@controller ~]# openstack service create --name placement placement
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | da038496edf04ce29d7d3d6b8e647755 |
| name    | placement                        |
| type    | placement                        |
+---------+----------------------------------+

# List the services created so far
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
+----------------------------------+-----------+-----------+

# Create the service endpoints
# Placement has three service endpoints: public, internal, and admin.
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | da0c279c9a394d0f80e7a33acb9e0d8d |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 79ca63ffd52d4d96b418cdf962c1e3ca |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fbee454f73d64bb18a52d8696c7aa596 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

# Verify
[root@controller ~]# openstack endpoint list
4. Placement component verification
(1) Two ways to check that Placement is running
# Method 1: check port usage (is 8778 in use?)
[root@controller ~]# netstat -tnlup | grep 8778
tcp6       0      0 :::8778      :::*      LISTEN      1018/httpd

# Method 2: query the service endpoint
[root@controller ~]# curl http://controller:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}
(2) Installation completeness checks
# 1. Was the placement user created on the controller node?
[root@controller ~]# cat /etc/passwd | grep placement
placement:x:993:990:OpenStack Placement:/:/bin/bash

# 2. Was the placement group created on the controller node?
[root@controller ~]# cat /etc/group | grep placement
placement:x:990:

# 3. Was the placement database created?
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| performance_schema |
| placement          |
+--------------------+

# 4. Check the placement user's database privileges
MariaDB [(none)]> show grants for placement@'%';
MariaDB [(none)]> show grants for placement@'localhost';

# 5. List the placement tables
MariaDB [(none)]> use placement;
MariaDB [placement]> show tables;
+------------------------------+
| Tables_in_placement          |
+------------------------------+
| alembic_version              |
| allocations                  |
| consumers                    |

# 6. Was the placement user created in OpenStack?
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
+----------------------------------+-----------+

# 7. Does the placement user have the admin role?
# Look up the user and role IDs, then match them in the role assignment list
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
+----------------------------------+-----------+
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 81238b556a444c8f80cb3d7dc72a24d3 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | e0d6a46f9b1744d8a7ab0332ab45d59c |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | f4f16d960e0643d7b5a35db152c87dae |       | 4188570a34464b938ed3fa7e08681df8 |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | f4f16d960e0643d7b5a35db152c87dae |       |                                  |        | all    | False     |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+

# 8. Was the placement service created?
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
+----------------------------------+-----------+-----------+

# 9. Check the placement service endpoints
[root@controller ~]# openstack endpoint list

# 10. Is the placement service port responding?
[root@controller ~]# curl http://controller:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}
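The unauthenticated curl above only proves that the endpoint answers. To exercise Keystone and Placement together, a token-authenticated request can be issued (a sketch; it assumes the admin environment variables are still loaded):

# Obtain a token, then list resource providers through the Placement API
[root@controller ~]# TOKEN=$(openstack token issue -f value -c id)
[root@controller ~]# curl -s -H "X-Auth-Token: $TOKEN" http://controller:8778/resource_providers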
Part 4: Compute service (Nova) deployment
Nova is responsible for creating, deleting, starting, and stopping cloud server instances.
1. Install and configure the Nova service on the controller node
(1) Install the Nova packages
# Install the Nova-related packages
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy

# Check the nova user
[root@controller ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin

# Check the nova group
[root@controller ~]# cat /etc/group | grep nova
nobody:x:99:nova
nova:x:162:nova
(2) Create the Nova databases and grant privileges
Nova is backed by three databases: nova_api, nova_cell0, and nova.
# Log in to the database
[root@controller ~]# mysql -uroot -p000000

# Create the three databases
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova_cell0;
MariaDB [(none)]> create database nova;
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| nova               |
| nova_api           |
| nova_cell0         |

# Grant local and remote privileges on the databases
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';
(3) Edit the Nova configuration file
Nova's configuration file is /etc/nova/nova.conf. Editing it connects Nova to the databases, Keystone, and the other components:
[api_database]: connection to the nova_api database
[database]: connection to the nova database
[api] and [keystone_authtoken]: interaction with Keystone
[placement]: interaction with the Placement component
[glance]: interaction with the Glance component
[oslo_concurrency]: lock configuration; this module provides thread and process locks for OpenStack code, and lock_path sets the path it uses
[DEFAULT]: message queue, firewall, and related settings
[vnc]: VNC connection mode
A $ in the configuration file dereferences a variable.
# Back up the configuration file
[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
# Edit the configuration file
[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.108.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000
region_name = RegionOne

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
(4) Initialize the databases
Populate the databases with the table definitions shipped with the packages.
# Initialize the nova_api database
[root@controller ~]# su nova -s /bin/sh -c "nova-manage api_db sync"

# Create the 'cell1' cell, which uses the nova database
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"

# Map nova to the cell0 database so cell0's schema matches nova's
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"

# Initialize the nova database; because of the mapping, cell0 gets the same schema (ignore any warnings for now)
[root@controller ~]# su nova -s /bin/sh -c "nova-manage db sync"
(5) Verify that the cells are correctly registered
Registration is normal when the two cells, cell0 and cell1, are both present:
cell0: system management
cell1: cloud server management; compute nodes discovered later are mapped into cell1 (larger deployments can add more cells that serve the same function)
[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
| Name  | UUID                                 | Transport URL                          | Database Connection                             | Disabled |
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                                 | mysql+pymysql://nova:****@controller/nova_cell0 | False    |
| cell1 | 83ad6d17-f245-4310-8729-fccaa033edf2 | rabbit://rabbitmq:****@controller:5672 | mysql+pymysql://nova:****@controller/nova       | False    |
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
2. Nova component initialization and verification on the controller node
(1) Nova component initialization
# Import the environment variables to log in
[root@controller ~]# source admin-login

# Create a user named nova in the default domain
[root@controller ~]# openstack user create --domain default --password 000000 nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 2f5041ed122d4a50890c34ea02881b47 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Grant the nova user the admin role
[root@controller ~]# openstack role add --project project --user nova admin

# Create the nova service of type compute
[root@controller ~]# openstack service create --name nova compute
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | e7cccf0a4d2549139801ac51bb8546db |
| name    | nova                             |
| type    | compute                          |
+---------+----------------------------------+

# Create the service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne nova public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c60a9641abbb47b391751c9a0b0d6828 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne nova internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 49b042b01ad44784888e65366d61dede |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6dd22acff2ab4c2195cefee39f371cc0 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint list | grep nova
| 49b042b01ad44784888e65366d61dede | RegionOne | nova | compute | True | internal | http://controller:8774/v2.1 |
| 6dd22acff2ab4c2195cefee39f371cc0 | RegionOne | nova | compute | True | admin    | http://controller:8774/v2.1 |
| c60a9641abbb47b391751c9a0b0d6828 | RegionOne | nova | compute | True | public   | http://controller:8774/v2.1 |

# Enable the controller-node Nova services at boot
[root@controller ~]# systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.

# Start the Nova services
[root@controller ~]# systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
(2) Verify the Nova services on the controller node
The Nova services listen on ports 8774 and 8775; checking whether these ports are open tells you whether Nova is running.
If both the nova-conductor and nova-scheduler modules on the controller node are in the up state, the services are healthy.
# Method 1: check port usage
[root@controller ~]# netstat -nutpl | grep 877
tcp        0      0 0.0.0.0:8774    0.0.0.0:*    LISTEN    2487/python2
tcp        0      0 0.0.0.0:8775    0.0.0.0:*    LISTEN    2487/python2
tcp6       0      0 :::8778         :::*         LISTEN    1030/httpd

# Method 2: list the compute services
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T10:53:26.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T10:53:28.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
3. Install and configure the Nova service on the compute node
Nova needs the nova-compute module installed on the compute node; all cloud servers are spawned by this module on the compute node.
(1) Install the Nova packages
# Copy over the Aliyun repo file
[root@compute yum.repos.d]# scp root@192.168.108.10:/etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/
root@192.168.108.10's password:
CentOS-Base.repo                       100% 2523     1.1MB/s   00:00
[root@compute yum.repos.d]# ls
CentOS-Base.repo  OpenStack.repo  repo.bak

# Install the Nova compute module
[root@compute yum.repos.d]# yum install -y openstack-nova-compute

# Check the user
[root@compute ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin

# Check the group
[root@compute ~]# cat /etc/group | grep nova
nobody:x:99:nova
qemu:x:107:nova
libvirt:x:987:nova
nova:x:162:nova
(2) Edit the Nova configuration file
Nova's configuration file is /etc/nova/nova.conf. Editing it connects Nova to the database, Keystone, and the other components.
Main differences from the controller node:
my_ip = 192.168.108.20 (the compute node's own IP)
the [libvirt] section adds: virt_type = qemu
# Back up the configuration file
[root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
# Strip comments and blank lines from the configuration file
[root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
# Edit the configuration file
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.108.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000

[libvirt]
virt_type = qemu

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000
region_name = RegionOne

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.108.10:6080/vnc_auto.html
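virt_type = qemu selects full software emulation, which is appropriate when the compute node is itself a virtual machine without nested hardware virtualization. A quick way to decide between qemu and kvm (the standard check from the upstream install guides) is to count the CPU's virtualization flags; a result of 0 means stay with qemu:

[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo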
(3) Start the Nova service on the compute node
# Enable at boot
[root@compute ~]# systemctl enable libvirtd openstack-nova-compute
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

# Start
[root@compute ~]# systemctl start libvirtd openstack-nova-compute

# On the controller node, check the service status
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T11:19:57.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T11:19:49.000000 |
|  6 | nova-compute   | compute    | nova     | enabled | up    | 2022-10-28T11:19:56.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
4. Discover the compute node and verify the service
Every compute node joining the system must be discovered once from the controller node; only discovered compute nodes can be mapped into a cell.
(1) Discover the compute node
Note: this is done on the controller node.
# Log in with the environment variables
[root@controller ~]# source admin-login

# As the nova user, discover unregistered compute nodes
# Once discovered, a compute node is automatically associated with the cell1 cell, through which it is managed from then on
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose"
Found 2 cell mappings.
Getting computes from cell 'cell1': 83ad6d17-f245-4310-8729-fccaa033edf2
Checking host mapping for compute host 'compute': 13af5106-c1c1-4b3f-93f5-cd25e030f39d
Creating host mapping for compute host 'compute': 13af5106-c1c1-4b3f-93f5-cd25e030f39d
Found 1 unmapped computes in cell: 83ad6d17-f245-4310-8729-fccaa033edf2
Skipping cell0 since it does not contain hosts.

# Configure automatic discovery
# 1. Run the discovery command every 60 seconds
[root@controller ~]# vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 60

# 2. Restart nova-scheduler, which runs this periodic task, so the setting takes effect
[root@controller ~]# systemctl restart openstack-nova-scheduler
(2) Verify the Nova service
All of the following is done on the controller node.
# Method 1: list the compute services
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T12:02:46.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T12:02:38.000000 |
|  6 | nova-compute   | compute    | nova     | enabled | up    | 2022-10-28T12:02:40.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

# Method 2: list the OpenStack services and endpoints
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| glance    | image     | RegionOne                               |
|           |           |   internal: http://controller:9292     |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   admin: http://controller:5000/v3     |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3  |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3    |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778        |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1|
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

# Method 3: run the Nova status check tool
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+
5. Installation completeness checks
# 1. Check the nova user and group on the controller node
[root@controller ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
[root@controller ~]# cat /etc/group | grep nova
nobody:x:99:nova
nova:x:162:nova

# 2. Check the nova user and group on the compute node
[root@compute ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
[root@compute ~]# cat /etc/group | grep nova
nobody:x:99:nova
qemu:x:107:nova
libvirt:x:987:nova
nova:x:162:nova

# 3. Check the databases on the controller node
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| nova               |
| nova_api           |
| nova_cell0         |

# 4. Check the nova user's database privileges
MariaDB [(none)]> show grants for nova@'%';
+-----------------------------------------------------------------------------------------------------+
| Grants for nova@%                                                                                   |
+-----------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'nova'@'%' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693' |
| GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'%'                                                      |
| GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova'@'%'                                                  |
| GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'%'                                                |
+-----------------------------------------------------------------------------------------------------+
4 rows in set (0.000 sec)
MariaDB [(none)]> show grants for nova@'localhost';
+-------------------------------------------------------------------------------------------------------------+
| Grants for nova@localhost                                                                                   |
+-------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'nova'@'localhost' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693' |
| GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'localhost'                                                      |
| GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'localhost'                                                |
| GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova'@'localhost'                                                  |

# 5. Check that the nova, nova_api, and nova_cell0 database tables were synced
MariaDB [(none)]> use nova
Database changed
MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |

# 6. Check that the nova user exists
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
| 2f5041ed122d4a50890c34ea02881b47 | nova      |

# 7. Check that the nova user has the admin role
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 2f5041ed122d4a50890c34ea02881b47 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |

# 8. Check that the nova service entity was created
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
| e7cccf0a4d2549139801ac51bb8546db | nova      | compute   |

# 9. Check the nova service endpoints
[root@controller ~]# openstack endpoint list | grep nova
| 49b042b01ad44784888e65366d61dede | RegionOne | nova | compute | True | internal | http://controller:8774/v2.1 |
| 6dd22acff2ab4c2195cefee39f371cc0 | RegionOne | nova | compute | True | admin    | http://controller:8774/v2.1 |
| c60a9641abbb47b391751c9a0b0d6828 | RegionOne | nova | compute | True | public   | http://controller:8774/v2.1 |

# 10. Check that the Nova service is running normally
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+
Part 5: Network service (Neutron) deployment
Neutron is responsible for creating and managing virtual network devices, including bridges, networks, and ports.
1. Prepare the network environment
(1) Put the external NIC into promiscuous mode
In promiscuous mode a NIC captures all traffic passing through its interface. To forward virtual network traffic, Neutron requires the external NIC to be in promiscuous mode.
# Configure the controller node
[root@controller ~]# ifconfig ens34 promisc
[root@controller ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.108.10  netmask 255.255.255.0  broadcast 192.168.10.255
... (omitted)
ens34: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500    <---- PROMISC now appears here

# Configure the compute node
[root@compute ~]# ifconfig ens34 promisc
[root@compute ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.108.20  netmask 255.255.255.0  broadcast 192.168.10.255
... (omitted)
ens34: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
If PROMISC appears in the NIC details, promiscuous mode was set successfully, and all traffic passing through the NIC can now be received by it.
Next, make the setting re-apply automatically (here it is appended to /etc/profile, so it runs whenever a login shell starts):
# Run on the controller node
[root@controller ~]# echo 'ifconfig ens34 promisc' >> /etc/profile
[root@controller ~]# tail -1 /etc/profile
ifconfig ens34 promisc

# Run on the compute node
[root@compute ~]# echo 'ifconfig ens34 promisc' >> /etc/profile
[root@compute ~]# tail -1 /etc/profile
ifconfig ens34 promisc
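A profile entry only re-applies the flag when someone logs in. A more robust alternative (an illustrative sketch, not part of the original setup; the unit file name is hypothetical and assumes systemd) is a oneshot service that sets the flag at boot:

# /etc/systemd/system/promisc-ens34.service
[Unit]
Description=Set ens34 to promiscuous mode
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link set dev ens34 promisc on

[Install]
WantedBy=multi-user.target

# Enable it on both nodes
[root@controller ~]# systemctl daemon-reload
[root@controller ~]# systemctl enable promisc-ens34
[root@controller ~]# systemctl start promisc-ens34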
(2) Load the bridge netfilter module
Netfilter is a software framework in the Linux kernel for managing network packets; it performs network address translation as well as packet modification and packet filtering.
# 1. Edit the kernel parameter configuration file
# On the controller node
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
[root@controller ~]# tail -n 2 /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# On the compute node
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
[root@compute ~]# tail -n 2 /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# 2. Load the br_netfilter module on each node
[root@controller ~]# modprobe br_netfilter
[root@compute ~]# modprobe br_netfilter

# 3. Verify the parameters on each node
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
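Note that modprobe loads the module only for the current boot. To make it persist across reboots (a small addition, assuming the stock systemd-modules-load mechanism on CentOS 7; the file name is arbitrary), register the module on both nodes:

# Load br_netfilter automatically at every boot
[root@controller ~]# echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
[root@compute ~]# echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf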
2. Install and configure the Neutron service on the controller node
(1) Install the Neutron packages
openstack-neutron: the neutron-server package
openstack-neutron-ml2: the ML2 plugin package
openstack-neutron-linuxbridge: the package for the Linux bridge agent and network provider integration
# Install the packages
# The Aliyun repository is missing dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
[root@controller ~]# yum install -y wget
[root@controller ~]# wget http://mirror.centos.org/centos/7/updates/x86_64/Packages/dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
[root@controller ~]# ls
admin-login  cirros-0.5.1-x86_64-disk.img  dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
[root@controller ~]# rpm -ivh dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:dnsmasq-utils-2.76-17.el7_9.3    ################################# [100%]
[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge

# Check the user
[root@controller ~]# cat /etc/passwd | grep neutron
neutron:x:990:987:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin

# Check the group
[root@controller ~]# cat /etc/group | grep neutron
neutron:x:987:
(2) Create the neutron database and grant privileges
# 1. Log in and create the database
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |

# 2. Grant the neutron user local and remote privileges on the database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)
(3) Edit the Neutron-related configuration files
1. Configure the Neutron component
Modify the [DEFAULT] and [keystone_authtoken] sections for interaction with Keystone
Modify the [database] section for the database connection
Modify the [DEFAULT] section for message-queue interaction, the core plugin, and related settings
Modify [oslo_concurrency] to configure the lock path
# Back up the configuration file
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
# Edit the configuration file
[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://rabbitmq:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:000000@controller/neutron

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = nova
password = 000000
region_name = RegionOne
server_proxyclient_address = 192.168.108.10
2. Edit the Layer 2 (ML2) plugin configuration file
ML2 is Neutron's core plugin.
# Back up the configuration file
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
# Edit the configuration file
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,local,vlan,gre,vxlan,geneve
tenant_network_types = local,flat
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true

# Create the symlink that activates the ML2 plugin
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# ll /etc/neutron/
lrwxrwxrwx 1 root root 37 Nov  4 20:01 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
3. Edit the Linux bridge agent configuration file
The ML2 configuration file must set the mechanism driver (mechanism_drivers) to linuxbridge.
# 1. Back up the configuration file
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
# 2. Strip comments and blank lines
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
# 3. Edit the configuration file
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]

[linux_bridge]
physical_interface_mappings = provider:ens34

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
4. Edit the DHCP agent configuration file
The DHCP agent provides automatic IP address assignment for cloud servers.
# 1. Back up the configuration file and strip blank lines and comments
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
[root@controller ~]# cat /etc/neutron/dhcp_agent.ini
[DEFAULT]
# 2. Edit the configuration file
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
5. Edit the metadata agent configuration file
Cloud servers run on the compute node and need to interact with the controller node's nova-api module at runtime; this interaction goes through neutron-metadata-agent.
# 1. Back up the configuration file and strip blank lines and comments
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
[root@controller ~]# cat /etc/neutron/metadata_agent.ini
[DEFAULT]
[cache]
# 2. Edit the configuration file
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
[cache]
6. Edit the Nova configuration file
Nova sits at the core of the cloud platform; its configuration file must specify how to interact with Neutron.
# Note which file this appends to
[root@controller ~]# echo '
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = project
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
' >> /etc/nova/nova.conf
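Since the block is appended blindly with echo, it is worth eyeballing the end of the file to confirm it landed where intended (the line count below is approximate; adjust it to cover the appended section):

[root@controller ~]# tail -n 13 /etc/nova/nova.conf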
(4) Initialize the database
Sync the neutron database, populating it with the table definitions shipped with the packages.
# Sync the database
[root@controller neutron]# su neutron -s /bin/sh -c "neutron-db-manage \
--config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"

# Verify the database
[root@controller neutron]# mysql -uroot -p000000
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
| arista_provisioned_nets                 |
| arista_provisioned_tenants              |
3. Neutron component initialization
All tasks are performed on the controller node.
(1) Create the neutron user and assign a role
# Log in
[root@controller ~]# source admin-login

# Create the neutron user in the default domain
[root@controller ~]# openstack user create --domain default --password 000000 neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 67bd1f9c48174e3e96bb41e0f76687ca |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Grant the neutron user the admin role
[root@controller ~]# openstack role add --project project --user neutron admin

# Verify
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 2f5041ed122d4a50890c34ea02881b47 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | 67bd1f9c48174e3e96bb41e0f76687ca |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
(2) Create the neutron service and its endpoints
# Create the neutron service, of type network
[root@controller ~]# openstack service create --name neutron network
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 459c365a11c74e5894b718b5406022a8 |
| name    | neutron                          |
| type    | network                          |
+---------+----------------------------------+

# Create the three service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne neutron public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1d59d497c89c4fa9b8789d685fab9fe5 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne neutron internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 44de22606819441aa845b370a9304bf5 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne neutron admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 75e7eaf8bc664a2c901b7ad58141bedc |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
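To confirm that all three endpoints were registered under the network service type, the endpoint list can be filtered; a sketch:

[root@controller ~]# openstack endpoint list --service network
# Expect three rows (public, internal, admin), all pointing at http://controller:9696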
(3) Start the Neutron services on the controller node
Because the Nova configuration file was modified, the Nova service must be restarted before starting the Neutron services.
# Restart the Nova service
[root@controller ~]# systemctl restart openstack-nova-api

# Enable the Neutron services at boot
[root@controller ~]# systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.

# Start them now
[root@controller neutron]# systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
4. Verify the Neutron services on the controller node
# Method 1: check port usage
[root@controller neutron]# netstat -tnlup | grep 9696
tcp   0   0 0.0.0.0:9696   0.0.0.0:*   LISTEN   4652/server.log

# Method 2: query the service endpoint
[root@controller neutron]# curl http://controller:9696
{"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://controller:9696/v2.0/", "rel": "self"}]}]}

# Method 3: check the service status
# Loaded: "enabled" means the service is set to start at boot
# Active: "active (running)" means the service is currently running
[root@controller neutron]# systemctl status neutron-server
● neutron-server.service - OpenStack Neutron Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-11-11 16:31:20 CST; 5min ago
 Main PID: 4652 (/usr/bin/python)
5. Install and configure the Neutron service on the compute node
All steps are performed on the compute node.
(1) Install the Neutron packages
# Install the packages on the compute node, including the bridge and network provider software
[root@compute ~]# yum install -y openstack-neutron-linuxbridge

# Check the neutron user and group
[root@compute ~]# cat /etc/passwd | grep neutron
neutron:x:989:986:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@compute ~]# cat /etc/group | grep neutron
neutron:x:986:
(2) Modify the Neutron configuration files
Three things need configuring: the Neutron component, the Linux bridge agent, and the Nova component.
1. The neutron configuration file
# Back up the configuration file
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

# Strip blank lines and comments
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
[root@compute ~]# cat /etc/neutron/neutron.conf
[DEFAULT]
[cors]
[database]
[keystone_authtoken]

# Edit the Neutron configuration file
[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://rabbitmq:000000@controller:5672
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = neutron
password = 000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
2. The Linux bridge agent configuration file
# Back up the bridge agent's configuration file, then strip blank lines and comments
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

# Edit the bridge agent's configuration file
[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:ens34
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
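physical_interface_mappings ties the provider network to a real NIC, so it is worth confirming the interface name on this host before starting the agent (ens34 is the name used throughout this guide; adjust if your NIC differs):

[root@compute ~]# ip link show ens34   # must exist, otherwise the bridge agent will fail to start
[root@compute ~]# ip -br link          # brief list of all interfaces, to find the right name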
3. The nova configuration file
# Two lines must be added to the [DEFAULT] section of the Nova configuration file,
# and a [neutron] section must be added as well
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.108.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vif_plugging_is_fatal = false
vif_plugging_timeout = 0
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = project
username = neutron
password = 000000
(3) Start the Neutron service on the compute node
[root@compute ~]# systemctl restart openstack-nova-compute
[root@compute ~]# systemctl enable neutron-linuxbridge-agent
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@compute ~]# systemctl start neutron-linuxbridge-agent
6. Verify the Neutron component
There are two ways to check the running state of the Neutron component; both are executed on the controller node.
# Method 1: list the network agent services
# Four agents should be returned, all alive and in the UP state
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0e2c0f8f-8fa7-4b64-8df2-6f1aedaa7c2b | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c6688165-593d-4c5e-b25c-5ff2b6c75866 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| dc335348-5639-40d1-b121-3abfc9aefc8e | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ddc49378-aea8-4f2e-b1b4-568fa4c85038 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

# Method 2: run the Neutron upgrade status check tool
[root@controller ~]# neutron-status upgrade check
+---------------------------------------------------------------------+
| Upgrade Check Results                                               |
+---------------------------------------------------------------------+
| Check: Gateway external network                                     |
| Result: Success                                                     |
| Details: L3 agents can use multiple networks as external gateways.  |
+---------------------------------------------------------------------+
| Check: External network bridge                                      |
| Result: Success                                                     |
| Details: L3 agents are using integration bridge to connect external |
|          gateways                                                   |
+---------------------------------------------------------------------+
| Check: Worker counts configured                                     |
| Result: Warning                                                     |
| Details: The default number of workers has changed. Please see      |
|          release notes for the new values, but it is strongly       |
|          encouraged for deployers to manually set the values for    |
|          api_workers and rpc_workers.                               |
+---------------------------------------------------------------------+
7. Post-installation checks
# 1. The controller node's external NIC is in promiscuous mode (PROMISC)
[root@controller ~]# ip a
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

# 2. The compute node's external NIC is in promiscuous mode (PROMISC)
[root@compute ~]# ip a
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

# 3. The neutron user and group were created on the controller node
[root@controller ~]# cat /etc/passwd | grep neutron
neutron:x:990:987:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@controller ~]# cat /etc/group | grep neutron
neutron:x:987:

# 4. The neutron user and group were created on the compute node
[root@compute ~]# cat /etc/passwd | grep neutron
neutron:x:989:986:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@compute ~]# cat /etc/group | grep neutron
neutron:x:986:

# 5. The neutron database was created on the controller node
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |

# 6. The neutron user has privileges on the database
MariaDB [(none)]> show grants for neutron;
+--------------------------------------------------------------------------------------------------------+
| Grants for neutron@%                                                                                    |
+--------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'neutron'@'%' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693'  |
| GRANT ALL PRIVILEGES ON `neutron`.* TO 'neutron'@'%'                                                    |
+--------------------------------------------------------------------------------------------------------+

# 7. The tables in the neutron database were synchronized
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
| arista_provisioned_nets                 |
| arista_provisioned_tenants              |

# 8. The neutron user appears in the OpenStack user list
[root@controller ~]# openstack user list | grep neutron
| 67bd1f9c48174e3e96bb41e0f76687ca | neutron |

# 9. The neutron user holds the admin role
[root@controller ~]# openstack role list | grep admin
| 5eee0910aeb844a1b82f48100da7adc9 | admin |
[root@controller ~]# openstack role assignment list | grep 67bd1f9c48174e3e96bb41e0f76687ca
| 5eee0910aeb844a1b82f48100da7adc9 | 67bd1f9c48174e3e96bb41e0f76687ca | | e3a549077f354998aa1a75677cfde62e | | | False |

# 10. The neutron service entity was created
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 459c365a11c74e5894b718b5406022a8 | neutron   | network   |

# 11. The three neutron endpoints were created
[root@controller ~]# openstack endpoint list | grep neutron
| 1d59d497c89c4fa9b8789d685fab9fe5 | RegionOne | neutron | network | True | public   | http://controller:9696 |
| 44de22606819441aa845b370a9304bf5 | RegionOne | neutron | network | True | internal | http://controller:9696 |
| 75e7eaf8bc664a2c901b7ad58141bedc | RegionOne | neutron | network | True | admin    | http://controller:9696 |

# 12. List the network agents to check that the services are running
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0e2c0f8f-8fa7-4b64-8df2-6f1aedaa7c2b | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c6688165-593d-4c5e-b25c-5ff2b6c75866 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| dc335348-5639-40d1-b121-3abfc9aefc8e | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ddc49378-aea8-4f2e-b1b4-568fa4c85038 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
6: Dashboard service deployment
The Dashboard provides a graphical interface for managing the OpenStack platform; it is essentially a web front-end console.
1. Install the Dashboard package
Install the Dashboard package on the compute node.
[root@compute ~]# yum install -y openstack-dashboard
2. Modify the Horizon configuration file
# Edit the Horizon configuration file
[root@compute ~]# vi /etc/openstack-dashboard/local_settings

# Location of the controller node
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# Enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

# Users created through the Dashboard default to the Default domain
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

# Users created through the Dashboard default to the user role
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Configure a layer-2-only network (router features disabled)
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': False,
    'enable_ha_router': False,
    'enable_ipv6': False,
    'enable_quotas': False,
    'enable_rbac_policy': False,
    'enable_router': False,
    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}

# Set the time zone
TIME_ZONE = "Asia/Shanghai"

# Allow access from any host
ALLOWED_HOSTS = ['*']

# Use the memcached service for caching and sessions
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
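Because local_settings is long, a quick way to re-check the edited values without scrolling through the whole file (a sketch):

[root@compute ~]# grep -E '^(OPENSTACK_HOST|OPENSTACK_KEYSTONE_URL|TIME_ZONE|ALLOWED_HOSTS|SESSION_ENGINE)' /etc/openstack-dashboard/local_settings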
3. Regenerate the Dashboard configuration file for Apache
The Dashboard is a web application and must run under a web server such as Apache, so Apache has to be configured to know how to run the service.
# Enter the Dashboard site directory
[root@compute ~]# cd /usr/share/openstack-dashboard/
[root@compute openstack-dashboard]# ls
manage.py manage.pyc manage.pyo openstack_dashboard static

# Generate the Dashboard web service file
[root@compute openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
[root@compute openstack-dashboard]# cat /etc/httpd/conf.d/openstack-dashboard.conf
<VirtualHost *:80>
    ServerAdmin webmaster@openstack.org
    ServerName openstack_dashboard
    DocumentRoot /usr/share/openstack-dashboard/
    LogLevel warn
    ErrorLog /var/log/httpd/openstack_dashboard-error.log
    CustomLog /var/log/httpd/openstack_dashboard-access.log combined
    WSGIScriptReloading On
    WSGIDaemonProcess openstack_dashboard_website processes=3
    WSGIProcessGroup openstack_dashboard_website
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
    <Location "/">
        Require all granted
    </Location>
    Alias /static /usr/share/openstack-dashboard/static
    <Location "/static">
        SetHandler None
    </Location>
</VirtualHost>
This generates a Dashboard configuration file inside Apache's configuration directory.
4. Create a symbolic link for the policy files
The /etc/openstack-dashboard directory stores the default policies that the Dashboard uses when interacting with the other components.
# View the default interaction policies
[root@compute ~]# cd /etc/openstack-dashboard/
[root@compute openstack-dashboard]# ls
cinder_policy.json keystone_policy.json neutron_policy.json nova_policy.json
glance_policy.json local_settings nova_policy.d

# Link the policies into the Dashboard project so they take effect
[root@compute openstack-dashboard]# ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
[root@compute openstack-dashboard]# ll /usr/share/openstack-dashboard/openstack_dashboard/
total 240
drwxr-xr-x 3 root root 4096 Nov 18 15:00 api
lrwxrwxrwx 1 root root 24 Nov 18 15:33 conf -> /etc/openstack-dashboard
5. Start the service and verify
# Enable the Apache service at boot, then restart it
[root@compute ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@compute ~]# systemctl restart httpd
Then browse to the compute node's IP address.
Log in with username admin and password 000000.
7: Block storage service (Cinder) deployment
The Cinder service is deployed and configured on both the controller node and the compute node.
1. Install and configure Cinder on the controller node
(1) Install the Cinder packages
The openstack-cinder package includes the cinder-api and cinder-scheduler modules.
# Install the Cinder packages
[root@controller ~]# yum install -y openstack-cinder

# Check the cinder user and group
[root@controller ~]# cat /etc/passwd | grep cinder
cinder:x:165:165:OpenStack Cinder Daemons:/var/lib/cinder:/sbin/nologin
[root@controller ~]# cat /etc/group | grep cinder
nobody:x:99:nova,cinder
cinder:x:165:cinder
(2) Create the cinder database and grant privileges
# Log in to the database
[root@controller ~]# mysql -uroot -p000000

# Create the cinder database
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.004 sec)

# Grant the cinder user local and remote access
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.007 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)
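To prove the grants work before Cinder tries to use them, log in as the cinder user and list its privileges (a sketch):

[root@controller ~]# mysql -ucinder -p000000 -e "SHOW GRANTS FOR CURRENT_USER;"
# Expect a GRANT ALL PRIVILEGES ON `cinder`.* line in the output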
(3) Modify the Cinder configuration file
# Back up the configuration file
[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

# Strip blank lines and comments
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

# Edit the configuration file
[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://rabbitmq:000000@controller:5672
[database]
connection = mysql+pymysql://cinder:000000@controller/cinder
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = cinder
password = 000000
project_name = project
user_domain_name = Default
project_domain_name = Default
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(4) Modify the Nova configuration file
Cinder interacts with Nova, so the Nova configuration file needs to be modified.
[root@controller ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
(5) Initialize the cinder database
# Run the initialization, synchronizing the database
[root@controller ~]# su cinder -s /bin/sh -c "cinder-manage db sync"
Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".

# Verify by listing the tables in the cinder database
MariaDB [cinder]> show tables;
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| attachment_specs           |
| backup_metadata            |
| backups                    |
(6) Create the cinder user and assign it a role
# Load the admin credentials
[root@controller ~]# source admin-login

# Create the cinder user on the platform
[root@controller ~]# openstack user create --domain default --password 000000 cinder
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | b9a2bdfcbf3b445ab0db44c9e35af678 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the admin role to the cinder user
[root@controller ~]# openstack role add --project project --user cinder admin
(7) Create the Cinder service and endpoints
# Create the service
[root@controller ~]# openstack service create --name cinderv3 volumev3
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 90dc0dcf9879493d98144b481ea0df2b |
| name    | cinderv3                         |
| type    | volumev3                         |
+---------+----------------------------------+

# Create the three service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 6bb167be751241d1922a81b6b4c18898         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | e8ad2286c57443a5970e9d17ca33076a         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | dd6d3b221e244cd5a5bb6a2b33159c1d         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
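After the endpoints are in place, the service catalog should advertise the volumev3 service; a quick cross-check (a sketch):

[root@controller ~]# openstack catalog list
# The cinderv3 entry should list admin, internal, and public URLs on port 8776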
(8) Start the Cinder services
# Restart the Nova service (its configuration file was changed)
[root@controller ~]# systemctl restart openstack-nova-api

# Enable the Cinder services at boot
[root@controller ~]# systemctl enable openstack-cinder-api openstack-cinder-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.

# Start them now
[root@controller ~]# systemctl start openstack-cinder-api openstack-cinder-scheduler
(9) Verify the Cinder services on the controller node
# Method 1: check whether port 8776 is in use
[root@controller ~]# netstat -nutpl | grep 8776
tcp   0   0 0.0.0.0:8776   0.0.0.0:*   LISTEN   15517/python2

# Method 2: list the volume services and confirm they are in the up state
[root@controller ~]# openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2022-11-18T11:08:47.000000 |
+------------------+------------+------+---------+-------+----------------------------+
2. Set up the storage node
(1) Add a disk to the compute node
(2) Create a volume group
Cinder uses LVM to implement block device (volume) management.
# 1. Check how the system disks are mounted
[root@compute ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   40G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   39G  0 part
  ├─centos-root 253:0    0   35G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm  [SWAP]
sdb               8:16   0   40G  0 disk    <-- sdb has no partitions and is not mounted yet
sr0              11:0    1 1024M  0 rom

# 2. Create the LVM physical volume and volume group
# 2.1 Initialize the disk as a physical volume
[root@compute ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.

# 2.2 Combine physical volumes into a volume group
# Syntax: vgcreate <vg-name> <physical-volume>...
[root@compute ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

# 2.3 Adjust the LVM configuration
# In the devices section of the configuration file, add a filter that accepts only /dev/sdb
# "a" means accept, "r" means reject
[root@compute ~]# vi /etc/lvm/lvm.conf
devices {
        filter = ["a/sdb/", "r/.*/"]

# 3. Start the LVM metadata service
[root@compute ~]# systemctl enable lvm2-lvmetad
[root@compute ~]# systemctl start lvm2-lvmetad
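Before moving on, confirm that LVM actually sees the new physical volume and volume group (a sketch):

[root@compute ~]# pvs /dev/sdb          # the PV should be assigned to VG cinder-volumes
[root@compute ~]# vgs cinder-volumes    # roughly 40G free, no logical volumes yet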
3. Install and configure the storage node
All operations are performed on the compute node.
(1) Install the Cinder-related packages
openstack-cinder is the Cinder package itself.
targetcli is a command-line tool for managing Linux storage resources.
python-keystone is the plugin used to connect to Keystone.
[root@compute ~]# yum install -y openstack-cinder targetcli python-keystone
(2) Modify the configuration file
# Back up the configuration file
[root@compute ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

# Strip blank lines and comments
[root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

# Edit the configuration file
# The value of volume_group must match the volume group created in the
# "Create a volume group" step above: cinder-volumes
[root@compute ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://rabbitmq:000000@controller:5672
enabled_backends = lvm
glance_api_servers = http://controller:9292
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[database]
connection = mysql+pymysql://cinder:000000@controller/cinder
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = cinder
password = 000000
project_name = project
user_domain_name = Default
project_domain_name = Default
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(3) Start the Cinder service on the storage node
[root@compute ~]# systemctl enable openstack-cinder-volume target
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@compute ~]# systemctl start openstack-cinder-volume target
4. Verify the Cinder service
# Method 1: list the volume services
# This shows the state of each Cinder module
[root@controller ~]# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2022-11-18T12:15:46.000000 |
| cinder-volume    | compute@lvm | nova | enabled | up    | 2022-11-18T12:15:43.000000 |
+------------------+-------------+------+---------+-------+----------------------------+

# Method 2: check the volume status in the Dashboard
# 1. A volumes entry appears in the left navigation bar
# 2. The project overview shows three pie charts: volumes, volume snapshots, and volume storage
5. Create a volume with Cinder
(1) Create a volume from the command line
The commands must be run on the controller node.
[root@controller ~]# openstack volume create --size 8 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-11-25T06:26:14.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 690449e4-f950-4949-a0d4-7184226a2447 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 8                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | __DEFAULT__                          |
| updated_at          | None                                 |
| user_id             | f4f16d960e0643d7b5a35db152c87dae     |
+---------------------+--------------------------------------+
[root@controller ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID                                   | Name    | Status    | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| 690449e4-f950-4949-a0d4-7184226a2447 | volume1 | available | 8    |             |
+--------------------------------------+---------+-----------+------+-------------+
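On the storage node, the LVM backend backs each volume with a logical volume named volume-<volume-id>, so the new volume can be cross-checked there (a sketch, assuming the driver's default naming scheme):

[root@compute ~]# lvs cinder-volumes
# Expect an 8.00g LV named volume-690449e4-f950-4949-a0d4-7184226a2447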
(2) Create a volume from the Dashboard