Complete deployment of a CentOS 7.2 + OpenStack + KVM cloud platform environment (1) -- Basic environment setup

Posted by 散盡浮華 on 2016-07-26

 

Our company has two very high-spec servers in an IDC data center, on which we plan to deploy an OpenStack cloud virtualization environment to host later development/testing work and some internal services.
Below is a detailed walkthrough of the OpenStack deployment process and its use, based purely on my own hands-on experience; if anything is off, corrections are welcome.

********************************************************************************************************************************

1 OpenStack Introduction

1.1 Overview (from Baidu Baike)
OpenStack is a free-software and open-source project jointly initiated by NASA (the US National Aeronautics and Space Administration) and Rackspace, released under the Apache license.

1.2 Version history

1.3 OpenStack architecture concepts

1.4 OpenStack service name mapping

 

***************************************************************************************************************

The following installation and deployment steps have been fully tested and verified!

It is recommended to deploy OpenStack on physical machines running CentOS 7 or Ubuntu; the CentOS 6.x repositories no longer carry some of the OpenStack components.

2 Environment preparation
OpenStack hostnames must not be changed: whatever they are at install time is what they stay (standardized operations practice).

1. Two CentOS 7.2 machines:
node1 acts as both the controller node and a compute node (i.e. single-node deployment is possible; in that case run both the controller-node and the compute-node steps below on the same machine)
node2 is a compute node only
The controller node drives the compute nodes; virtual machines are created on the compute nodes.

linux-node1.openstack   192.168.1.17  NIC: NAT em2 (assume the public IP is 58.68.250.17) (em2 is the internal NIC; it is referenced in the neutron configuration file below)
linux-node2.openstack   192.168.1.8   NIC: NAT em2

Controller node: linux-node1.openstack   192.168.1.17

Compute node:    linux-node2.openstack   192.168.1.8

 

2. Hostname resolution and firewall shutdown (do this on both the controller and the compute node)
/etc/hosts                                                         # Get the hostnames right from the start; they cannot be changed later or things will break! Map IPs to hostnames here:
192.168.1.17 linux-node1.openstack
192.168.1.8  linux-node2.openstack
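
Since the hostname must be correct from the very beginning, it can be set explicitly before installing anything (a minimal sketch; run each command on the matching node):
hostnamectl set-hostname linux-node1.openstack        # on node1
hostnamectl set-hostname linux-node2.openstack        # on node2
hostname                                              # verify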

Disable selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
setenforce 0
Disable the firewall (firewalld replaces iptables on CentOS 7)
systemctl stop firewalld.service
systemctl disable firewalld.service

3 Installing and configuring OpenStack
Official documentation: http://docs.openstack.org/

3.1 Install the packages

Install on linux-node1.openstack

*************************************************************************************

#Base
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install -y centos-release-openstack-liberty
yum install -y python-openstackclient

##MySQL
yum install -y mariadb mariadb-server MySQL-python

##RabbitMQ
yum install -y rabbitmq-server

##Keystone
yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached

##Glance
yum install -y openstack-glance python-glance python-glanceclient

##Nova
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

##Neutron linux-node1.openstack
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset

##Dashboard
yum install -y openstack-dashboard

##Cinder
yum install -y openstack-cinder python-cinderclient

*************************************************************************************

Install on linux-node2.openstack

##Base
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install -y centos-release-openstack-liberty
yum install -y python-openstackclient

##Nova linux-node2.openstack
yum install -y openstack-nova-compute sysfsutils

##Neutron linux-node2.openstack
yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset

##Cinder
yum install -y openstack-cinder python-cinderclient targetcli python-oslo-policy

*************************************************************************************

3.2 Set up time synchronization, disable selinux and iptables
Configure on linux-node1 (chrony is CentOS 7 only; CentOS 6 still uses ntp)
[root@linux-node1 ~]# yum install -y chrony
[root@linux-node1 ~]# vim /etc/chrony.conf
allow 192.168/16 # which hosts are allowed to synchronize time from this server
[root@linux-node1 ~]# systemctl enable chronyd.service    # start on boot
[root@linux-node1 ~]# systemctl start chronyd.service
[root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai     # set the timezone
[root@linux-node1 ~]# timedatectl status
Local time: Fri 2016-08-26 11:14:19 CST
Universal time: Fri 2016-08-26 03:14:19 UTC
RTC time: Fri 2016-08-26 03:14:19
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a

Configure on linux-node2
[root@linux-node2 ~]# yum install -y chrony
[root@linux-node2 ~]# vim /etc/chrony.conf
server 192.168.1.17 iburst # keep only this one server line
[root@linux-node2 ~]# systemctl enable chronyd.service
[root@linux-node2 ~]# systemctl start chronyd.service
[root@linux-node2 ~]# timedatectl set-timezone Asia/Shanghai
[root@linux-node2 ~]# chronyc sources

3.3 Install and configure MySQL

[root@linux-node1 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf                   # or /usr/share/mariadb/my-medium.cnf; then add the following under [mysqld]:
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
[root@linux-node1 ~]# systemctl enable mariadb.service                                                      # on CentOS 7, MySQL is packaged as MariaDB
[root@linux-node1 ~]# ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
[root@linux-node1 ~]# mysql_install_db --datadir="/var/lib/mysql" --user="mysql"               # initialize the database
[root@linux-node1 ~]# systemctl start mariadb.service
[root@linux-node1 ~]# mysql_secure_installation                                                                 # set the root password and secure the installation
Password 123456; answer y to everything.

Create the databases
[root@openstack-server ~]# mysql -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5579
Server version: 5.5.50-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
+--------------------+
8 rows in set (0.00 sec)

MariaDB [(none)]>

----------------------------------------------------------------------------------------------------------------------------
See also this post: http://www.cnblogs.com/kevingrace/p/5811167.html
Raise MySQL's connection limit, otherwise later OpenStack operations will fail with: "ERROR 1040 (08004): Too many connections" (a sketch follows).
----------------------------------------------------------------------------------------------------------------------------
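
A minimal sketch of that change (the value 1000 is an assumption, not a tuned figure; size it to your own load):
[root@linux-node1 ~]# vim /etc/my.cnf
[mysqld]
max_connections = 1000          # the default is 151 in MariaDB 5.5
[root@linux-node1 ~]# systemctl restart mariadb.service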

3.4 Configure RabbitMQ
MQ stands for Message Queue. A message queue is a method of application-to-application communication: applications communicate by writing and reading messages (application-specific data) to and from queues, with no dedicated connection required to link them. Messaging means programs communicate by sending data in messages rather than by calling each other directly (direct calls are the territory of techniques such as remote procedure calls); queuing means applications communicate through queues, which removes the requirement that the sending and receiving applications run at the same time.
RabbitMQ is a complete, reusable enterprise messaging system built on AMQP, released under the Mozilla Public License.
Start rabbitmq (port 5672) and add an openstack user:

[root@linux-node1 ~]# systemctl enable rabbitmq-server.service
[root@linux-node1 ~]# ln -s '/usr/lib/systemd/system/rabbitmq-server.service' '/etc/systemd/system/multi-user.target.wants/rabbitmq-server.service'
[root@linux-node1 ~]# systemctl start rabbitmq-server.service
[root@linux-node1 ~]# rabbitmqctl add_user openstack openstack                               # add a user and password
[root@linux-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"                                   # grant the openstack user configure, write and read access
[root@linux-node1 ~]# rabbitmq-plugins list                                 # list the available plugins
.........
[ ] rabbitmq_management 3.6.2                                    # this plugin provides the web management UI
.........
[root@linux-node1 ~]# rabbitmq-plugins enable rabbitmq_management # enable the plugin
The following plugins have been enabled:
mochiweb
webmachine
rabbitmq_web_dispatch
amqp_client
rabbitmq_management_agent
rabbitmq_management
Plugin configuration has changed. Restart RabbitMQ for changes to take effect.
[root@linux-node1 ~]# systemctl restart rabbitmq-server.service

[root@linux-node1 ~]# lsof -i:15672

Access RabbitMQ at http://58.68.250.17:15672
The default username and password are both guest. In the web UI, add the openstack user to the administrator group and log in to test; if you cannot connect, the usual cause is that the firewall was not disabled!

Afterwards, log out and log back in as openstack.
How can this be monitored with zabbix?
The HTTP API described in the lower-left corner of the UI can be used for zabbix monitoring (a sketch follows).
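
For example, the management plugin's HTTP API can be polled with curl (a minimal sketch; it assumes the openstack user was given the administrator tag in the UI as described above):
curl -s -u openstack:openstack http://58.68.250.17:15672/api/overview        # broker-wide statistics as JSON
curl -s -u openstack:openstack http://58.68.250.17:15672/api/queues          # per-queue message counts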

*********************************************************************************************

This completes the base environment; next we install the OpenStack components.

3.5 Configure the Keystone identity service
All services must be registered in keystone.
3.5.1 Keystone introduction

3.5.2 Configure Keystone
Ports 5000 and 35357

1. Edit /etc/keystone/keystone.conf
Generate a random token:
[root@linux-node1 ~]# openssl rand -hex 10
35d6e6f377a889571bcf
[root@linux-node1 ~]# cat /etc/keystone/keystone.conf|grep -v "^#"|grep -v "^$"
[DEFAULT]
admin_token = 35d6e6f377a889571bcf                                    # set the token to the random value generated above
verbose = true
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql://keystone:keystone@192.168.1.17/keystone                                          # database connection; goes under [database]
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers = 192.168.1.17:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver = sql
[role]
[saml]
[signing]
[ssl]
[token]
provider = uuid
driver = memcache
[tokenless_auth]
[trust]

2. Create the database tables, using the sync command
[root@linux-node1 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
No handlers could be found for logger "oslo_config.cfg"                                          # this message does not affect anything later; ignore it

[root@linux-node1 ~]# ll /var/log/keystone/keystone.log
-rw-r--r--. 1 keystone keystone 298370 Aug 26 11:36 /var/log/keystone/keystone.log      # the su above is used so that this log file stays owned by keystone
[root@linux-node1 config]# mysql -h 192.168.1.17 -u keystone -p      # check the tables in the database; in production do not use keystone as the password, pick something stronger
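
A quick non-interactive way to confirm the sync created the tables (a sketch; keystone/keystone are the credentials from the GRANT statements above):
mysql -h 192.168.1.17 -ukeystone -pkeystone -e 'USE keystone; SHOW TABLES;'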

3. Start memcached and apache
Start memcached
[root@linux-node1 ~]# systemctl enable memcached
[root@linux-node1 ~]# ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'
[root@linux-node1 ~]# systemctl start memcached
Configure httpd
[root@linux-node1 ~]# vim /etc/httpd/conf/httpd.conf
ServerName 192.168.1.17:80
[root@linux-node1 ~]# cat /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>

<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>

Start httpd
[root@linux-node1 config]# systemctl enable httpd
[root@linux-node1 config]# ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
[root@linux-node1 config]# systemctl start httpd
[root@linux-node1 ~]# netstat -lntup|grep httpd
tcp6 0 0 :::5000 :::* LISTEN 23632/httpd
tcp6 0 0 :::80 :::* LISTEN 23632/httpd
tcp6 0 0 :::35357 :::* LISTEN 23632/httpd
If httpd fails to start, disable selinux or install openstack-selinux: yum install openstack-selinux

4. Create the keystone users
Temporarily set the admin_token environment variables, used to create the users:
[root@linux-node1 ~]# export OS_TOKEN=35d6e6f377a889571bcf                           # the random value generated above
[root@linux-node1 ~]# export OS_URL=http://192.168.1.17:35357/v3
[root@linux-node1 ~]# export OS_IDENTITY_API_VERSION=3

Create the admin project --- create the admin user (password admin; do not do this in production) --- create the admin role --- add the admin user to the admin project with the admin role (three admins: project, user, role)
[root@linux-node1 ~]#openstack project create --domain default --description "Admin Project" admin
[root@linux-node1 ~]#openstack user create --domain default --password-prompt admin
[root@linux-node1 ~]#openstack role create admin
[root@linux-node1 ~]#openstack role add --project admin --user admin admin
Create an ordinary user, demo
[root@linux-node1 ~]#openstack project create --domain default --description "Demo Project" demo
[root@linux-node1 ~]#openstack user create --domain default --password=demo demo
[root@linux-node1 ~]#openstack role create user
[root@linux-node1 ~]#openstack role add --project demo --user demo user

Create the service project, used to manage the other services
[root@linux-node1 ~]#openstack project create --domain default --description "Service Project" service

The names above are all fixed and must not be changed.

List the users and projects just created
[root@linux-node1 ~]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| b1f164577a2d43b9a6393527f38e3f75 | demo |
| b694d8f0b70b41d883665f9524c77766 | admin |
+----------------------------------+-------+
[root@linux-node1 ~]# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 604f9f78853847ac9ea3c31f2c7f677d | demo |
| 777f4f0108b1476eabc11e00dccaea9f | admin |
| aa087f62f1d44676834d43d0d902d473 | service |
+----------------------------------+---------+
5. Register the keystone service itself; the three endpoint types below are public, internal and admin.
[root@linux-node1 ~]#openstack service create --name keystone --description "OpenStack Identity" identity
[root@linux-node1 ~]#openstack endpoint create --region RegionOne identity public http://192.168.1.17:5000/v2.0
[root@linux-node1 ~]#openstack endpoint create --region RegionOne identity internal http://192.168.1.17:5000/v2.0
[root@linux-node1 ~]#openstack endpoint create --region RegionOne identity admin http://192.168.1.17:35357/v2.0
[root@linux-node1 ~]# openstack endpoint list                                        # list the endpoints
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
| 011a24def8664506985815e0ed2f8fa5 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.1.17:5000/v2.0  |
| b0981cae6a8c4b3186edef818733fec6 | RegionOne | keystone     | identity     | True    | public    | http://192.168.1.17:5000/v2.0  |
| c4e0c79c0a8142eda4d9653064563991 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.1.17:35357/v2.0 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
[root@linux-node1 ~]# openstack endpoint delete ID                                    # delete an endpoint by ID with this command

6. Verify by obtaining a token; only if a token can be obtained is keystone configured correctly
[root@linux-node1 ~]# unset OS_TOKEN
[root@linux-node1 ~]# unset OS_URL
[root@linux-node1 ~]# openstack --os-auth-url http://192.168.1.17:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue                   # press Enter, then type the password at the prompt
Password: admin
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2015-12-17T04:22:00.600668Z |
| id | 1b530a078b874438aadb77af11ce297e |
| project_id | 777f4f0108b1476eabc11e00dccaea9f |
| user_id | b694d8f0b70b41d883665f9524c77766 |
+------------+----------------------------------+

Use environment variables to obtain a token; the variables are also needed later when creating virtual machines.
Create two environment files and simply source one of them before use!!! (Note the directory the two .sh files live in; source the relevant file before running any query command, or it will fail!!)
[root@linux-node1 ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.1.17:35357/v3
export OS_IDENTITY_API_VERSION=3

[root@linux-node1 ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.1.17:5000/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack token issue
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2015-12-17T04:26:08.625399Z |
| id | 58370ae3b9bb4c07a67700dd184ad3b1 |
| project_id | 777f4f0108b1476eabc11e00dccaea9f |
| user_id | b694d8f0b70b41d883665f9524c77766 |
+------------+----------------------------------+

3.6 Configure the glance image service
3.6.1 Glance introduction

 

3.6.2 Glance configuration
Ports:
api         9292
registry    9191
1. Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf
[root@linux-node1 ~]# cat /etc/glance/glance-api.conf|grep -v "^#"|grep -v "^$"
[DEFAULT]
verbose=True
notification_driver = noop                                           # glance does not need the message queue
[database]
connection=mysql://glance:glance@192.168.1.17/glance
[glance_store]
default_store=file
filesystem_store_datadir=/var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://192.168.1.17:5000
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor=keystone
[store_type_location_strategy]
[task]
[taskflow_executor]

[root@linux-node1 ~]# cat /etc/glance/glance-registry.conf|grep -v "^#"|grep -v "^$"
[DEFAULT]
verbose=True
notification_driver = noop
[database]
connection=mysql://glance:glance@192.168.1.17/glance
[glance_store]
[keystone_authtoken]
auth_uri = http://192.168.1.17:5000
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor=keystone

2. Create the database tables (sync the database)
[root@linux-node1 ~]# su -s /bin/sh -c "glance-manage db_sync" glance
[root@linux-node1 ~]# mysql -h 192.168.1.17 -uglance -p      # verify

3. Create the glance keystone user
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack user create --domain default --password=glance glance
[root@linux-node1 ~]# openstack role add --project service --user glance admin

4. Start glance
[root@linux-node1 ~]#systemctl enable openstack-glance-api
[root@linux-node1 ~]#systemctl enable openstack-glance-registry
[root@linux-node1 ~]#systemctl start openstack-glance-api
[root@linux-node1 ~]#systemctl start openstack-glance-registry
[root@linux-node1 ~]# netstat -lnutp |grep 9191 #registry
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 24890/python2
[root@linux-node1 ~]# netstat -lnutp |grep 9292 #api
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 24877/python2

5. Register with keystone
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]#openstack service create --name glance --description "OpenStack Image service" image
[root@linux-node1 ~]#openstack endpoint create --region RegionOne image public http://192.168.1.17:9292
[root@linux-node1 ~]#openstack endpoint create --region RegionOne image internal http://192.168.1.17:9292
[root@linux-node1 ~]#openstack endpoint create --region RegionOne image admin http://192.168.1.17:9292

6. Add the glance environment variable and test
[root@linux-node1 src]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
[root@linux-node1 src]# glance image-list
+----+------+
| ID | Name |
+----+------+
+----+------+

7. Download an image and upload it to glance [the qcow2 image downloaded here is quite small; you can also download an ISO image and build an image from it with the oz tool]
[root@linux-node1 ~]# wget -q http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img                                   # can also be downloaded in advance
[root@linux-node1 ~]# glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2015-12-17T04:11:02Z |
| disk_format | qcow2 |
| id | 2707a30b-853f-4d04-861d-e05b0f1855c8 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 777f4f0108b1476eabc11e00dccaea9f |
| protected | False |
| size | 13287936 |
| status | active |
| tags | [] |
| updated_at | 2015-12-17T04:11:03Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------+
-------------------------------------------------------------------------------------------------------------------------------
To use ISO images you need to build OpenStack images with the OZ tool; see this post for the details:

In a real production environment you will certainly be building images from ISOs.

http://www.cnblogs.com/kevingrace/p/5821823.html

-------------------------------------------------------------------------------------------------------------------------------

Alternatively, download a CentOS qcow2 image and upload that directly; qcow2 images work in OpenStack as-is, with no format conversion needed!
Download from http://cloud.centos.org/centos, which has qcow2 images for CentOS 5/6/7.

[root@linux-node1 ~]#wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
[root@linux-node1 ~]#glance image-create --name "CentOS-7-x86_64" --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

------------------------------------------------------------------------------------------------------------------------------

[root@linux-node1 ~]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 2707a30b-853f-4d04-861d-e05b0f1855c8 | cirros |
+--------------------------------------+--------+

[root@linux-node1 ~]# ll /var/lib/glance/images/
total 12980
-rw-r-----. 1 glance glance 1569390592 Aug 26 12:50 35b36f08-eeb9-4a91-9366-561f0a308a1b

3.7 Configure the nova compute service
3.7.1 Nova introduction
Nova's essential components

nova scheduler

3.7.2 Nova controller node configuration

1. Edit /etc/nova/nova.conf
[root@linux-node1 ~]# cat /etc/nova/nova.conf|grep -v "^#"|grep -v "^$"
[DEFAULT]
my_ip=192.168.1.17
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
security_group_api=neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
debug=true
verbose=true
rpc_backend=rabbit
allow_resize_to_same_host=True
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
[api_database]
[barbican]
[cells]
[cinder]
[conductor]
[cors]
[cors.subdomain]
[database]
connection=mysql://nova:nova@192.168.1.17/nova
[ephemeral_storage_encryption]
[glance]
host=$my_ip
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://192.168.1.17:5000
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type=kvm                                  # if the controller node also acts as a compute node (single-node deployment), add this line too (it is a compute-node setting)
[matchmaker_redis]
[matchmaker_ring]
[metrics]
[neutron]
url = http://192.168.1.17:9696
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = neutron
lock_path=/var/lib/nova/tmp
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host=192.168.1.17
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
[oslo_middleware]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
novncproxy_base_url=http://58.68.250.17:6080/vnc_auto.html      # if the controller node also acts as a compute node (single-node deployment), add this line too (a compute-node setting); use the controller node's public IP
vncserver_listen= $my_ip
vncserver_proxyclient_address= $my_ip
keymap=en-us           # same note: add this too on a single-node deployment (a compute-node setting)
[workarounds]
[xenserver]
[zookeeper]

***********************************************************************
{Why the network section is written this way: network_api_class=nova.network.neutronv2.api.API}
[root@linux-node1 ~]# ls /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py
This module contains an API class; the other *_class options work the same way (the value is a Python import path).
***********************************************************************
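
To confirm such a dotted path actually resolves, it can be imported directly (a quick sanity check, assuming nova's site-packages are on the default python path):
python -c 'from nova.network.neutronv2.api import API; print(API)'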

2. Sync the database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@linux-node1 ~]# mysql -h 192.168.1.17 -unova -p      # verify

3. Create the nova keystone user
[root@linux-node1 ~]# openstack user create --domain default --password=nova nova
[root@linux-node1 ~]# openstack role add --project service --user nova admin

4. Start the nova services
[root@linux-node1 ~]#systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@linux-node1 ~]#systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

5. Register with keystone
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute public http://192.168.1.17:8774/v2/%\(tenant_id\)s
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.1.17:8774/v2/%\(tenant_id\)s
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.1.17:8774/v2/%\(tenant_id\)s
Check:
[root@linux-node1 ~]# openstack host list
+---------------------------+-------------+----------+
| Host Name | Service | Zone |
+---------------------------+-------------+----------+
| linux-node1.oldboyedu.com | conductor | internal |
| linux-node1.oldboyedu.com | scheduler | internal |
| linux-node1.oldboyedu.com | consoleauth | internal |
| linux-node1.oldboyedu.com | cert | internal |
+---------------------------+-------------+----------+

3.7.3 Nova compute node configuration
1. nova compute introduction

2. Edit /etc/nova/nova.conf
It can be copied straight from node1 to node2:
[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.1.8:/etc/nova/
Then change only the following settings by hand:
[root@linux-node2 ~]# vim /etc/nova/nova.conf
my_ip=192.168.1.8
novncproxy_base_url=http://192.168.1.17:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address= $my_ip
keymap=en-us
[glance]
host=192.168.1.17
[libvirt]
virt_type=kvm                   # hypervisor type; the default is kvm (see the check below)
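
If the compute node is itself a virtual machine without nested virtualization, kvm will not work and virt_type must be set to qemu instead; a quick check (a sketch):
egrep -c '(vmx|svm)' /proc/cpuinfo        # greater than 0 means the CPU exposes VT-x/AMD-V
lsmod | grep kvm                          # kvm_intel or kvm_amd should be loaded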

3. Start the services
[root@linux-node2 ~]# systemctl enable libvirtd openstack-nova-compute
[root@linux-node2 ~]# systemctl start libvirtd openstack-nova-compute

4. Test from the controller node (the compute node works too, given the environment variables)
[root@linux-node1 ~]# openstack host list
+---------------------------+-------------+----------+
| Host Name | Service | Zone |
+---------------------------+-------------+----------+
| linux-node1.oldboyedu.com | conductor | internal |
| linux-node1.oldboyedu.com | consoleauth | internal |
| linux-node1.oldboyedu.com | scheduler | internal |
| linux-node1.oldboyedu.com | cert | internal |
| linux-node2.oldboyedu.com | compute | nova |
+---------------------------+-------------+----------+

[root@linux-node1 ~]# nova image-list                  # test that glance is working
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 2707a30b-853f-4d04-861d-e05b0f1855c8 | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
[root@linux-node1 ~]# nova endpoints                     # test keystone
WARNING: keystone has no endpoint in ! Available endpoints for this service:           # this warning does not affect the steps that follow
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | 02fed35802734518922d0ca2d672f469 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:5000/v2.0 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | 52b0a1a700f04773a220ff0e365dea45 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:5000/v2.0 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone | Value |
+-----------+----------------------------------+
| id | 88df7df6427d45619df192979219e65c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:35357/v2.0 |
+-----------+----------------------------------+
WARNING: nova has no endpoint in ! Available endpoints for this service:
+-----------+--------------------------------------------------------------+
| nova | Value |
+-----------+--------------------------------------------------------------+
| id | 1a3115941ff54b7499a800c7c43ee92a |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:8774/v2/65a0c00638c247a0a274837aa6eb165f |
+-----------+--------------------------------------------------------------+
+-----------+--------------------------------------------------------------+
| nova | Value |
+-----------+--------------------------------------------------------------+
| id | 5278f33a42754c9a8d90937932b8c0b3 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:8774/v2/65a0c00638c247a0a274837aa6eb165f |
+-----------+--------------------------------------------------------------+
+-----------+--------------------------------------------------------------+
| nova | Value |
+-----------+--------------------------------------------------------------+
| id | 8c4fa7b9a24949c5882949d13d161d36 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:8774/v2/65a0c00638c247a0a274837aa6eb165f |
+-----------+--------------------------------------------------------------+
WARNING: glance has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | 31fbf72537a14ba7927fe9c7b7d06a65 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | be788b4aa2ce4251b424a3182d0eea11 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:9292 |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance | Value |
+-----------+----------------------------------+
| id | d0052712051a4f04bb59c06e2d5b2a0b |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| url | http://192.168.1.17:9292 |
+-----------+----------------------------------+

3.8 The Neutron network service
3.8.1 Neutron introduction
Where neutron came from

 

OpenStack network types:

 

Neutron components

3.8.2 Neutron controller node configuration (5 configuration files)
1. Edit /etc/neutron/neutron.conf

[root@linux-node1 ~]# cat /etc/neutron/neutron.conf|grep -v "^#"|grep -v "^$"
[DEFAULT]
state_path = /var/lib/neutron
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.168.1.17:8774/v2
rpc_backend=rabbit
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://192.168.1.17:5000
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
[database]
connection = mysql://neutron:neutron@192.168.1.17:3306/neutron
[nova]
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = $state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 192.168.1.17
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
[qos]

2. Edit /etc/neutron/plugins/ml2/ml2_conf.ini
[root@linux-node1 ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini|grep -v "^#"|grep -v "^$"
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = vlan,gre,vxlan,geneve
mechanism_drivers = openvswitch,linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = physnet1
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
[ml2_type_geneve]
[securitygroup]
enable_ipset = True

3. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@linux-node1 ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini|grep -v "^#"|grep -v "^$"
[linux_bridge]
physical_interface_mappings = physnet1:em2
[vxlan]
enable_vxlan = false
[agent]
prevent_arp_spoofing = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True

4. Edit /etc/neutron/dhcp_agent.ini
[root@linux-node1 ~]# cat /etc/neutron/dhcp_agent.ini|grep -v "^#"|grep -v "^$"
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[AGENT]

5. Edit /etc/neutron/metadata_agent.ini
[root@linux-node1 ~]# cat /etc/neutron/metadata_agent.ini|grep -v "^#"|grep -v "^$"
[DEFAULT]
auth_uri = http://192.168.1.17:5000
auth_url = http://192.168.1.17:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
nova_metadata_ip = 192.168.1.17
metadata_proxy_shared_secret = neutron
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
[AGENT]

6. Create the plugin symlink and create the keystone user
[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@linux-node1 ~]# openstack user create --domain default --password=neutron neutron
[root@linux-node1 ~]# openstack role add --project service --user neutron admin

7. Update the database
[root@linux-node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

8. Register with keystone
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack service create --name neutron --description "OpenStack Networking" network
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.1.17:9696
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network internal http://192.168.1.17:9696
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network admin http://192.168.1.17:9696

9. Start the services and check
Because neutron and nova are tied together (the nova.conf edited above already contains the neutron-related settings), the openstack-nova-api service has to be restarted.
Here we simply restart all the related nova services:
[root@linux-node1 ~]# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Start the neutron services
[root@linux-node1 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@linux-node1 ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Check
[root@linux-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+
| 385cebf9-9b34-4eca-b780-c515dbc7eec0 | Linux bridge agent | openstack-server | :-) | True | neutron-linuxbridge-agent |
| b3ff8ffe-1ff2-4659-b823-331def4e6a93 | DHCP agent | openstack-server | :-) | True | neutron-dhcp-agent |
| b5bed625-47fd-4e79-aa55-01cf8a8cc577 | Metadata agent | openstack-server | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+

View the registered endpoints
[root@openstack-server src]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+
| 02fed35802734518922d0ca2d672f469 | RegionOne | keystone | identity | True | internal | http://192.168.1.17:5000/v2.0 |
| 1a3115941ff54b7499a800c7c43ee92a | RegionOne | nova | compute | True | internal | http://192.168.1.17:8774/v2/%(tenant_id)s |
| 31fbf72537a14ba7927fe9c7b7d06a65 | RegionOne | glance | image | True | admin | http://192.168.1.17:9292 |
| 5278f33a42754c9a8d90937932b8c0b3 | RegionOne | nova | compute | True | admin | http://192.168.1.17:8774/v2/%(tenant_id)s |
| 52b0a1a700f04773a220ff0e365dea45 | RegionOne | keystone | identity | True | public | http://192.168.1.17:5000/v2.0 |
| 88df7df6427d45619df192979219e65c | RegionOne | keystone | identity | True | admin | http://192.168.1.17:35357/v2.0 |
| 8c4fa7b9a24949c5882949d13d161d36 | RegionOne | nova | compute | True | public | http://192.168.1.17:8774/v2/%(tenant_id)s |
| be788b4aa2ce4251b424a3182d0eea11 | RegionOne | glance | image | True | public | http://192.168.1.17:9292 |
| c059a07fa3e141a0a0b7fc2f46ca922c | RegionOne | neutron | network | True | public | http://192.168.1.17:9696 |
| d0052712051a4f04bb59c06e2d5b2a0b | RegionOne | glance | image | True | internal | http://192.168.1.17:9292 |
| ea325a8a2e6e4165997b2e24a8948469 | RegionOne | neutron | network | True | internal | http://192.168.1.17:9696 |
| ffdec11ccf024240931e8ca548876ef0 | RegionOne | neutron | network | True | admin | http://192.168.1.17:9696 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------------------+

3.8.3 Neutron compute node configuration
1. Modify the relevant configuration files
Copy them straight from node1:

[root@linux-node1 ~]# scp /etc/neutron/neutron.conf 192.168.1.8:/etc/neutron/
[root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.1.8:/etc/neutron/plugins/ml2/
[root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/ml2_conf.ini 192.168.1.8:/etc/neutron/plugins/ml2/
Normally the neutron section of the compute node's nova config would be updated here, followed by a restart of
openstack-nova-compute; but since the compute node's nova.conf above was copied from the controller, no action is needed.
2. Create the symlink and start the service
[root@linux-node2 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@linux-node2 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@linux-node2 ~]# systemctl start neutron-linuxbridge-agent.service

Check
[root@linux-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+
| 385cebf9-9b34-4eca-b780-c515dbc7eec0 | Linux bridge agent | openstack-server | :-) | True | neutron-linuxbridge-agent |
| b3ff8ffe-1ff2-4659-b823-331def4e6a93 | DHCP agent | openstack-server | :-) | True | neutron-dhcp-agent |
| b5bed625-47fd-4e79-aa55-01cf8a8cc577 | Metadata agent | openstack-server | :-) | True | neutron-metadata-agent |
+--------------------------------------+--------------------+------------------+-------+----------------+---------------------------+

3.9 Creating a virtual machine
3.9.1 Create the bridged network

1. Create the network
[root@linux-node1 ~]# source admin-openrc.sh                     # choose which project to create the VM under; here demo is used, but admin also works
[root@linux-node1 ~]# neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat

2. Create the subnet (use the host's internal gateway; the DNS server and gateway below can both be set to the host's internal gateway/IP, and 192.168.1.100-200 is the IP range handed out to VMs)
[root@linux-node1 ~]# neutron subnet-create flat 192.168.1.0/24 --name flat-subnet --allocation-pool start=192.168.1.100,end=192.168.1.200 --dns-nameserver 192.168.1.1 --gateway 192.168.1.1

3. View the network and subnet
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+-----------------------------------------------------+
| 1d9657f6-de9e-488f-911f-020c8622fe78 | flat | c53da14a-01fe-4f6c-8485-232489deaa6e 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+

[root@linux-node1 ~]# neutron subnet-list
+--------------------------------------+-------------+----------------+----------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+-------------+----------------+----------------------------------------------------+
| c53da14a-01fe-4f6c-8485-232489deaa6e | flat-subnet | 192.168.1.0/24 | {"start": "192.168.1.100", "end": "192.168.1.200"} |
+--------------------------------------+-------------+----------------+----------------------------------------------------+
If testing under VMware, its DHCP service must be disabled so it does not conflict with neutron's.

3.9.2 Create the virtual machine (the VM is given an internal IP; external or internal access is later arranged through a squid proxy or NAT port forwarding on the host, as sketched after the SSH login below)
1. Create a key
[root@linux-node1 ~]# source demo-openrc.sh               (this creates the VM under the demo account; to create it under admin, source admin-openrc.sh instead)
[root@linux-node1 ~]# ssh-keygen -q -N ""

2. Add the public key to nova
[root@linux-node1 ~]# nova keypair-add --pub-key /root/.ssh/id_rsa.pub mykey
[root@linux-node1 ~]# nova keypair-list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | cd:7a:1e:cd:c0:43:9b:b1:f4:3b:cf:cd:5e:95:f8:00 |
+-------+-------------------------------------------------+

3. Create security group rules (allow ICMP and SSH); a quick verification follows
[root@linux-node1 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
[root@linux-node1 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
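
To confirm the rules took effect (a quick check with novaclient):
[root@linux-node1 ~]# nova secgroup-list-rules default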

4. Create the virtual machine
List the available flavors
[root@linux-node1 ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
List the images
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 2707a30b-853f-4d04-861d-e05b0f1855c8 | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
List the networks
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+-----------------------------------------------------+
| 1d9657f6-de9e-488f-911f-020c8622fe78 | flat | c53da14a-01fe-4f6c-8485-232489deaa6e 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+

Create the VM [this step is error-prone; failures are usually caused by mistakes in the nova.conf settings above]
[root@linux-node1 ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=1d9657f6-de9e-488f-911f-020c8622fe78 --security-group default --key-name mykey hello-instance

5. View the virtual machine

[root@linux-node1 ~]# nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+
| 7a6215ac-aea7-4e87-99a3-b62c06d4610e | hello-instance| ACTIVE | - | Running | flat=192.168.1.102 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------+

*************************************************************************
To delete a virtual machine (by its ID):
[root@linux-node1 ~]# nova delete 7a6215ac-aea7-4e87-99a3-b62c06d4610e
*************************************************************************

[root@linux-node1 src]# nova list
+--------------------------------------+----------------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+--------+------------+-------------+--------------------+
| 007db18f-ae3b-463a-b86d-9a8455a21e2d | hello-instance | ACTIVE | - | Running | flat=192.168.1.101 |
+--------------------------------------+----------------+--------+------------+-------------+--------------------+

[root@linux-node1 ~]# ssh cirros@192.168.1.101      # log in and take a look around
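
As noted in 3.9.2, the VM only has an internal IP; one way to reach it from outside is NAT port forwarding on the host (a minimal sketch, assuming the host's public IP is 58.68.250.17 and that public port 2222 should map to the VM's SSH port; IP forwarding must be enabled and the exact firewall/routing details depend on your setup):
iptables -t nat -A PREROUTING -d 58.68.250.17 -p tcp --dport 2222 -j DNAT --to-destination 192.168.1.101:22
iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.101 --dport 22 -j MASQUERADE      # so replies return via this host
ssh -p 2222 cirros@58.68.250.17        # then, from outside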

******************************************************************************

When VMs are created as above, OpenStack assigns their IPs automatically via the neutron dhcp-agent!

A fixed IP can also be specified at creation time; the method is detailed in another post (and sketched right below):

http://www.cnblogs.com/kevingrace/p/5822660.html

******************************************************************************
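
The usual approach is to pre-create a neutron port carrying the desired fixed IP and boot the VM from that port (a sketch under that assumption; the subnet ID is the flat-subnet ID from above, and <PORT_ID> is a placeholder for the ID printed by port-create):
[root@linux-node1 ~]# neutron port-create flat --fixed-ip subnet_id=c53da14a-01fe-4f6c-8485-232489deaa6e,ip_address=192.168.1.150
[root@linux-node1 ~]# nova boot --flavor m1.tiny --image cirros --nic port-id=<PORT_ID> --security-group default --key-name mykey fixed-ip-instance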

6. Open the VM console from the web
[root@linux-node1 ~]# nova get-vnc-console hello-instance novnc
+-------+------------------------------------------------------------------------------------+
| Type  | Url                                                                                |
+-------+------------------------------------------------------------------------------------+
| novnc | http://58.68.250.17:6080/vnc_auto.html?token=303d5a78-c85f-4ed9-93b6-be9d5d28fba6 |       # open this link for the VNC console
+-------+------------------------------------------------------------------------------------+

4.0 Install the dashboard and log in to the web UI
[root@linux-node1 ~]# yum install openstack-dashboard -y
[root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings               # change the following settings
OPENSTACK_HOST = "192.168.1.17"                                 # point at the keystone host
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"              # the default role
ALLOWED_HOSTS = ['*']                                                 # allow access from any host
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '192.168.1.17:11211',                                   # point at memcached
}
}
#CACHES = {
# 'default': {
# 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
# }
#}
TIME_ZONE = "Asia/Shanghai"                        # set the timezone

Restart httpd
[root@linux-node1 ~]# systemctl restart httpd

Log in to the dashboard in a browser:
http://58.68.250.17/dashboard/
Log in as demo or as admin (the administrator).

---------------------------------------------------------------------------------------------
To change the dashboard's port (for example from 80 to 8080), edit the following two files:
1) vim /etc/httpd/conf/httpd.conf
Change port 80 to 8080:

Listen 8080
ServerName 192.168.1.17:8080

2) vim /etc/openstack-dashboard/local_settings #change both ports from 80 to 8080
'from_port': '8080',
'to_port': '8080',

Then restart httpd:
systemctl restart httpd

If the firewall is enabled, an access rule for port 8080 is also needed (see the sketch after this note).

The dashboard is then reachable at:
http://58.68.250.17:8080/dashboard
---------------------------------------------------------------------------------------------
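
A sketch of that firewall rule, if firewalld is running (this guide disabled it earlier, so this only applies if it is re-enabled):
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload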

 

 

Two accounts were created earlier, admin and demo, and both can log in to the web UI! admin is the administrator account and can see the state of the other accounts after logging in;
ordinary accounts such as demo can only see their own state after logging in.
Note:
The Rabbit accounts admin and openstack above are web login accounts for the message queue.
For example, if an instruction asks for 10 VMs at once but the available resources cannot process them immediately, the requests are queued through Rabbit!!

-----------------------------------------------------------------------------------------------------------------------
How to change a dashboard user's login password in OpenStack:

Log in to the dashboard:

 

--------------------------------------------------------------------------------------------------------------------------------------------------

When creating virtual machines, you can define your own VM flavors (i.e. hardware configurations).

Define them in the OpenStack web UI; the existing ones can also be deleted there.

View the images uploaded to glance

View the created VM instances

Define a custom flavor, with settings like these:

(If you want the VM to have spare disk space for creating new partitions, allocate ephemeral disk here.)

 

I created four VM instances using the same flavor (the kvm002 flavor above); together the four instances take up 40 GB on the host.

After logging in to OpenStack, there are four tabs in the left-hand sidebar:

----------------------------------------------------------------------------------------------------------------------------------------------------
From the dashboard, another VM can be created via "Compute" -> "Instances" -> "Launch Instance", or via "Compute -> Network -> Network Topology" -> "Launch Instance".
A VM can also be launched (created) from a snapshot, but the VM started that way gets a new IP (the snapshotted source VM has to be shut down first).

Checking the instances shows that the kvm-server005 VM was created successfully. Its IP is assigned by DHCP by default; you can log in to the VM and change it to a static IP.

---------------------------------------------------------------------------------------------------------------------------------------------

OpenStack has two kinds of instance reboot, known as "soft reboot" and "hard reboot". A soft reboot attempts a graceful shutdown and restart of the instance; a hard reboot power-cycles the instance and restarts it. In other words, a hard reboot "cuts the power". The commands are as follows:
By default, nova reboot performs a soft reboot:
$ nova reboot SERVER
For a hard reboot, add the --hard flag:
$ nova reboot --hard SERVER

Managing VMs with the nova command:

$ nova list                              # list VMs
$ nova stop [vm-name]|[vm-id]            # stop a VM
$ nova start [vm-name]|[vm-id]           # start a VM
$ nova suspend [vm-name]|[vm-id]         # suspend a VM
$ nova resume [vm-name]|[vm-id]          # resume a suspended VM
$ nova delete [vm-name]|[vm-id]          # delete a VM

$ nova-manage service list               # check that the services are healthy

[root@openstack-server ~]# source /usr/local/src/admin-openrc.sh
[root@openstack-server ~]# nova list
+--------------------------------------+----------------+--------+------------+-------------+--------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+--------+------------+-------------+--------------------+
| 11e7ad7f-c0a8-482b-abca-3a4b7cfdd55d | hello-instance | ACTIVE | - | Running | flat=192.168.1.107 |
| 67f71703-c32c-4bf1-8778-b2a6600ad34a | kvm-server0 | ACTIVE | - | Running | flat=192.168.1.120 |
+--------------------------------------+----------------+--------+------------+-------------+--------------------+
[root@openstack-server ~]# ll /var/lib/nova/instances/           # VM instances are stored under this path
total 8
drwxr-xr-x. 2 nova nova 85 Aug 29 15:22 11e7ad7f-c0a8-482b-abca-3a4b7cfdd55d
drwxr-xr-x. 2 nova nova 85 Aug 29 15:48 67f71703-c32c-4bf1-8778-b2a6600ad34a
drwxr-xr-x. 2 nova nova 80 Aug 29 15:40 _base
-rw-r--r--. 1 nova nova 39 Aug 29 16:44 compute_nodes
drwxr-xr-x. 2 nova nova 4096 Aug 29 13:58 locks

-----------------------------------------------------------------------------------------------------------------------------------
Managing VMs from the command line with virsh:

[root@openstack-server ~]# virsh list # list running VMs

Id Name State
----------------------------------------------------
9 instance-00000008 running
41 instance-00000015 running
[root@openstack-server ~]# ll /etc/libvirt/qemu/ # the VM definition files
total 16
-rw-------. 1 root root 4457 Aug 26 17:46 instance-00000008.xml
-rw-------. 1 root root 4599 Aug 29 15:40 instance-00000015.xml
drwx------. 3 root root 22 Aug 24 12:06 networks

The commands in detail:
virsh list                               # list active (running) local VMs
virsh list --all                         # list all local VMs (active + inactive)
virsh define instance-00000015.xml       # define a VM from a config file (the VM is not yet active)
virsh edit instance-00000015             # edit the config file (usually right after defining the VM)
virsh start instance-00000015            # start the named inactive VM
virsh reboot instance-00000015           # reboot the VM
virsh create instance-00000015.xml       # create a VM (it runs immediately and becomes active)
virsh suspend instance-00000015          # pause the VM
virsh resume instance-00000015           # resume a paused VM
virsh shutdown instance-00000015         # shut the VM down gracefully
virsh destroy instance-00000015          # force the VM off
virsh dominfo instance-00000015          # show the VM's basic information
virsh domname 2                          # show the name of the VM with ID 2
virsh domid instance-00000015            # show the VM's ID
virsh domuuid instance-00000015          # show the VM's UUID
virsh domstate instance-00000015         # show the VM's current state
virsh dumpxml instance-00000015          # show the VM's current config (it may differ from the config it was defined with, because a starting VM is assigned an ID, UUID, VNC port, and so on)
virsh setmem instance-00000015 512000    # set the memory size of an inactive VM
virsh setvcpus instance-00000015 4       # set the CPU count of an inactive VM
virsh save instance-00000015 a           # save the running state of instance-00000015 to file a
virsh restore a                          # restore a VM from a saved state; this works even if the VM has since been deleted (though if the VM was undefined, the restored VM is only transient and disappears once shut down)
virsh undefine instance-00000015         # remove the VM definition. A VM that is merely shut down can still be started, but once removed by this command it cannot. If the VM is running when this is issued, it does not take effect immediately; it removes the VM once it shuts down. Before it takes effect, it can be cancelled with define plus the VM's XML file.

Note:
virsh destroy instance-00000015 does not actually delete the VM; it only forces it off. The VM can be recovered from its XML file, as follows:
[root@kvm-server ~]# virsh list
Id Name State
----------------------------------------------------
1 dev-new-test2 running
2 beta-new2 running
5 test-server running
8 ubuntu-test03 running
9 elk-node1 running
10 elk-node2 running
11 ubuntu-test01 running
12 ubuntu-test02 running

Force the VM off:
[root@kvm-server ~]# virsh destroy ubuntu-test02
Domain ubuntu-test02 destroyed

The ubuntu-test02 VM is now shut down:
[root@kvm-server ~]# virsh list
Id Name State
----------------------------------------------------
1 dev-new-test2 running
2 beta-new2 running
5 test-server running
8 ubuntu-test03 running
9 elk-node1 running
10 elk-node2 running
11 ubuntu-test01 running

But the VM's XML file is still there, and the VM can be recovered from it:
[root@kvm-server ~]# ll /etc/libvirt/qemu/ubuntu-test02.xml
-rw------- 1 root root 2600 Dec 26 13:55 /etc/libvirt/qemu/ubuntu-test02.xml
[root@kvm-server ~]# virsh define /etc/libvirt/qemu/ubuntu-test02.xml #this only re-adds the VM definition; the VM is not active yet and needs to be started
[root@kvm-server ~]# virsh start ubuntu-test02
Domain ubuntu-test02 started
[root@kvm-server ~]# virsh list
Id Name State
----------------------------------------------------
1 dev-new-test2 running
2 beta-new2 running
5 test-server running
8 ubuntu-test03 running
9 elk-node1 running
10 elk-node2 running
11 ubuntu-test01 running
12 ubuntu-test02 running
