Installing OpenStack Pike by Following the Official Docs: Nova Installation

Posted by wadeson on 2017-10-25

The nova component is installed on both the controller node and the compute node. Start with the controller node.

1. Prerequisite: create the nova databases and a database account/password in MariaDB:
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';


GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
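(Optional) As a quick sanity check that is not part of the official guide, log in with the new account and confirm the three databases are visible; nova/nova is the account and password granted above:
# mysql -u nova -pnova -e "SHOW DATABASES;"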

2. Source the admin credentials for keystone:

# source admin-openrc

3. Create the Compute service credentials:

# openstack user create --domain default --password-prompt nova
Enter password: nova

# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute
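(Optional) To confirm that the user and the compute service entry were registered, list the services and check that a row named nova with type compute appears (a sanity check, not an official step):
# openstack service list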

4. Create the Compute service API endpoints:

# openstack endpoint create --region RegionOne compute public    http://192.168.101.10:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://192.168.101.10:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://192.168.101.10:8774/v2.1

5. Create a Placement service user and set its password:

# openstack user create --domain default --password-prompt placement
Enter password: placement

6. Add the placement user to the service project with the admin role:

# openstack role add --project service --user placement admin

7. Create the Placement API entry in the service catalog:

Register placement as a service:
# openstack service create --name placement --description "Placement API" placement

8. Create the Placement API service endpoints:

# openstack endpoint create --region RegionOne placement public http://192.168.101.10:8778
# openstack endpoint create --region RegionOne placement internal http://192.168.101.10:8778
# openstack endpoint create --region RegionOne placement admin http://192.168.101.10:8778
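(Optional) With the compute and placement endpoints created, you can confirm they were all registered as expected:
# openstack endpoint list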

9. Install the packages required by the nova services on the controller:

# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

After the installation completes, the configuration file must be edited.

Edit the configuration file /etc/nova/nova.conf.
In the [DEFAULT] section:
[DEFAULT]
enabled_apis = osapi_compute,metadata

In the [api_database] and [database] sections, configure database access:

[api_database]
connection = mysql+pymysql://nova:nova@192.168.101.10/nova_api

[database]
connection = mysql+pymysql://nova:nova@192.168.101.10/nova

[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.101.10
    This connects to RabbitMQ with the account openstack and password openstack.

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.101.10:5000
auth_url = http://192.168.101.10:35357
memcached_servers = 192.168.101.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova          (the nova account and password registered with keystone above)

[DEFAULT]
my_ip = 192.168.101.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

 By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://192.168.101.10:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp


[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.101.10:35357/v3
username = placement
password = placement

Grant access to the Placement API by appending the following to /etc/httpd/conf.d/00-nova-placement-api.conf:

<Directory /usr/bin>
     <IfVersion >= 2.4>
        Require all granted
     </IfVersion>
     <IfVersion < 2.4>
        Order allow,deny
        Allow from all
     </IfVersion>
</Directory>
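(Optional) Before restarting, the Apache configuration syntax can be checked with the standard httpd tooling:
# httpd -t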

Then restart the httpd service:

# systemctl restart httpd
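(Optional) To confirm the Placement API is answering, query its root URL; it should return a small JSON document listing the supported API versions (this assumes the endpoint created above):
# curl http://192.168.101.10:8778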

Populate the nova-api database:

# su -s /bin/sh -c "nova-manage api_db sync" nova

   Ignore any deprecation messages in this output.

Register the cell0 database:
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database:

# su -s /bin/sh -c "nova-manage db sync" nova

Ignore the deprecation messages in the output above.
Verify that nova cell0 and cell1 are registered correctly:
# nova-manage cell_v2 list_cells

Finally, enable and start the compute services:

# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

At this point nova is installed on the controller node. Next, install nova on the compute node:

1. Compute node node2 (name resolution):
[root@node2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.101.10    node1
192.168.101.11    node2

2. Time synchronization (with the controller node)

# yum install chrony

Edit /etc/chrony.conf:

allow 192.168.101.0/16    (enable this line)
Comment out the default NTP servers:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.101.10 iburst           (add the controller node as the time source)

Enable and start the service:
systemctl enable chronyd.service
systemctl start chronyd.service

Verify:
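The original leaves this step blank; a typical check with chrony (as installed above) is to list the time sources on the compute node and confirm the controller 192.168.101.10 appears:
# chronyc sources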

3. Install the required packages on the compute node:

# yum install centos-release-openstack-pike
# yum upgrade
    If the upgrade process includes a new kernel, reboot your host to activate it.
reboot

# yum install python-openstackclient

# yum install openstack-selinux
    RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for OpenStack services:

With the prerequisites in place, install the nova compute package:

# yum install openstack-nova-compute

Edit the configuration file /etc/nova/nova.conf:

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@192.168.101.10
my_ip = 192.168.101.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.101.10:5000
auth_url = http://192.168.101.10:35357
memcached_servers = 192.168.101.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.101.10:6080/vnc_auto.html


[glance]
api_servers = http://192.168.101.10:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp


[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.101.10:35357/v3
username = placement
password = placement

In the settings above, my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS must be replaced with the compute node's management IP. Here the compute node's IP is 192.168.101.11, so it becomes:

my_ip = 192.168.101.11
On the compute node, determine whether it supports hardware acceleration for virtual machines:
# egrep -c '(vmx|svm)' /proc/cpuinfo

a. If the result is one or greater, the compute node supports hardware acceleration and no configuration change is needed.

(If the compute node is itself a virtual machine, CPU virtualization must be enabled for it in the hypervisor.)
b. If the result is 0, the compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:
[libvirt]
virt_type = qemu

Running the command above:

[root@node2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
2

So hardware acceleration is supported; the default virt_type (kvm) is kept and the [libvirt] section does not need to be changed.

Enable and start the compute service and its dependencies:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
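(Optional) Confirm that both services came up before moving on:
# systemctl status libvirtd.service openstack-nova-compute.service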

 If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart nova-compute service on the compute node.
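For example, assuming firewalld is the active firewall on the controller node, the port could be opened like this (adjust to whatever firewall you actually use):
# firewall-cmd --permanent --add-port=5672/tcp
# firewall-cmd --reload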

With nova installed on both the controller and compute nodes, the compute node must be added to the cell database. The following operations are performed on controller node node1:

# source admin-openrc    (authenticate as admin)

# openstack compute service list --service nova-compute

Discover the compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Whenever you add a new compute node, you must run the nova-manage cell_v2 discover_hosts command above on the controller node. Alternatively, set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
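This option is read by nova-scheduler, so after setting it restart the scheduler on the controller for it to take effect (a reasonable assumption, not spelled out in the original):
# systemctl restart openstack-nova-scheduler.service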

Final verification, performed on controller node node1:

# source admin-openrc    (authenticate as admin)

List the compute service components:

# openstack compute service list

List the API endpoints in the Identity service to verify connectivity (ignore any warnings in the output):
# openstack catalog list

List images from the Image service to verify connectivity with it:

# openstack image list

 Check the cells and placement API are working successfully:

# nova-status upgrade check

 

At this point nova has been installed successfully on both the controller and compute nodes.
