I. Introduction
Neutron's main job in OpenStack is to provide network services for launched virtual machine instances. Neutron offers two types of networks. The first is the provider network, the familiar bridged network: the instance's network is bridged directly onto one of the host's physical NICs, so the instance can reach the external network and the external network can reach the instance. The second is the self-service network, which is a NAT network. A NAT network places a virtual router between the instance and the host: the instance uses a private address attached to one interface of the virtual router, while the router's other end is bridged onto one of the host's physical NICs. NAT therefore hides the instance's address; the instance can reach the external network, but external users cannot reach the instance directly. In OpenStack, however, a one-to-one NAT binding can be created between an instance and an external address, which allows the instance to be reached from the external network.
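For a concrete sense of the difference, here is a rough sketch of how the two network types are typically created later with the openstack CLI; the network names provider and selfservice and the flat physical network name provider are assumptions for illustration, and actually creating the networks is a later step, not part of this article:

# Provider (bridged, flat) network: instances attach directly to the physical network
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

# Self-service (NAT, VXLAN) network: a tenant network later attached to a virtual router
openstack network create selfservice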
Self-service network diagram
Note: the biggest difference between a self-service network and a provider network is that the self-service network contains a virtual router. A router means that traffic between an instance and the external network has to go through layer 3, whereas with a provider network the traffic can stay entirely at layer 2; as a result, the two network types are implemented with different mechanisms and components in OpenStack.
Components required for a provider network
Note: a bridged network is also called a shared network: the instance's network is shared with the host's network directly through a bridge, so an instance talking to the host is just like the host talking to any other machine on the same LAN. The traffic never goes through layer 3, so no layer-3 configuration is involved.
Components required for a self-service network
Self-service networks connectivity diagram
Comparing the components required by the two network types, we can see that a self-service network needs one more component than a provider network: the Networking L3 agent. This agent provides layer-3 functionality, such as creating and managing virtual routers. The connectivity diagrams above also show that the self-service option includes the provider option; in other words, if we deploy the self-service network type, we can create both self-service networks and provider (bridged) networks. With a self-service network, an instance launched on a compute node reaches the external network through the compute node's VXLAN interface. You can think of VXLAN here as a virtual switch implemented inside the compute node: instances are isolated from one another by attaching to VXLANs with different VNIs (VXLAN network identifiers, analogous to VLAN IDs). The VXLAN interface is normally bound to the node's local management interface, and the management network generally cannot reach the external network. Outbound traffic from an instance travels through a VXLAN tunnel whose endpoints are the compute node's management interface on one side and the controller node's management interface on the other.

From the tunnel the traffic reaches the virtual router on the controller node, which routes it out through the controller interface that does have external connectivity, so the instance can exchange traffic with the outside world. For the reverse direction, access from the external network to an instance, OpenStack uses one-to-one NAT bindings: the externally reachable interface on the controller node carries many additional IP addresses, each of which can reach the external network. When a given instance sends traffic out, the virtual router on the controller node SNATs that instance's traffic to one fixed address, and a matching DNAT rule is configured on the controller for the same address; external clients then connect to that fixed IP on the controller, the DNAT rule steers the traffic to the instance, and the external network and the instance can communicate.
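To make the one-to-one NAT (floating IP) mechanism described above more tangible, here is a hedged sketch of the commands that would typically wire it up once the networks exist; the router name router, the subnet name selfservice-subnet, the instance name vm1 and the address 203.0.113.10 are illustrative assumptions only:

# Create a virtual router and attach the self-service subnet to it
openstack router create router
openstack router add subnet router selfservice-subnet
# Use the provider network as the router's external gateway (where SNAT takes place)
openstack router set router --external-gateway provider
# Allocate a floating IP from the provider network and bind it one-to-one to an instance (DNAT)
openstack floating ip create provider
openstack server add floating ip vm1 203.0.113.10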
Neutron workflow
The neutron service consists of three components: neutron-server, the neutron agents, and the neutron plug-ins, all of which depend on the message queue service. neutron-server receives user requests, for example to create or manage networks. When neutron-server receives a request from a client (another OpenStack service such as nova, or the dedicated neutron client), it places the request on the message queue; the neutron agents pick the request up from the queue, perform the network creation or management work locally, and write the result of the operation to the neutron database. Note that "neutron agents" refers to several agents, each responsible for one task; for example, the DHCP agent allocates IP addresses and the L3 agent manages virtual routers. The neutron plug-ins provide specific services through external plug-ins; for example, the ML2 plug-in provides layer-2 virtual networking. When an agent needs a plug-in service while creating or managing a network, it puts a message for that plug-in on the queue; the plug-in takes the message off the queue, handles the request, puts the result back on the queue, and also writes it to the database.
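If you want to observe this message-queue interaction, one way (assuming RabbitMQ is the broker, as in this deployment, and that neutron's RPC queues keep their default q- name prefix, which is an assumption) is to list the queues on the broker node:

# run on the node hosting rabbitmq (node02 in this article); queues such as
# q-plugin, q-reports-plugin and q-agent-notifier-* belong to neutron
rabbitmqctl list_queues name | grep '^q-'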
II. Installing and configuring the neutron service
1. Prepare the neutron database and user, and grant the user full privileges on all tables in the neutron database
[root@node02 ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 184
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]>
Verification: from another host, connect as the neutron user and check that the database can be reached normally
[root@node01 ~]# mysql -uneutron -pneutron -hnode02 Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 185 Server version: 10.1.20-MariaDB MariaDB Server Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | neutron | | test | +--------------------+ 3 rows in set (0.00 sec) MariaDB [(none)]>
2. Install and configure neutron on the controller node
Export the admin environment variables, then create the neutron user and set its password to neutron
[root@node01 ~]# source admin.sh [root@node01 ~]# openstack user create --domain default --password-prompt neutron User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | 47c0915c914c49bb8670703e4315a80f | | enabled | True | | id | e7d0eae696914cc19fb8ebb24f4b5b0f | | name | neutron | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ [root@node01 ~]#
Add the neutron user to the service project and grant it the admin role
[root@node01 ~]# openstack role add --project service --user neutron admin [root@node01 ~]#
Create the neutron service entity
[root@node01 ~]# openstack service create --name neutron \ > --description "OpenStack Networking" network +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Networking | | enabled | True | | id | 3dc79e6a21e2484e8f92869e8745122c | | name | neutron | | type | network | +-------------+----------------------------------+ [root@node01 ~]#
Create the neutron service endpoints (register the neutron service)
Public endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \ > network public http://controller:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 4a8c9c97417f4764a0e61b5a7a1f3a5f | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 3dc79e6a21e2484e8f92869e8745122c | | service_name | neutron | | service_type | network | | url | http://controller:9696 | +--------------+----------------------------------+ [root@node01 ~]#
Internal endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \ > network internal http://controller:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 1269653296e14406920bc43db65fd8af | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 3dc79e6a21e2484e8f92869e8745122c | | service_name | neutron | | service_type | network | | url | http://controller:9696 | +--------------+----------------------------------+ [root@node01 ~]#
Admin endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \ > network admin http://controller:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 8bed1c51ed6d4f0185762edc2d5afd8a | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 3dc79e6a21e2484e8f92869e8745122c | | service_name | neutron | | service_type | network | | url | http://controller:9696 | +--------------+----------------------------------+ [root@node01 ~]#
Install the neutron service packages
[root@node01 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
Edit the neutron configuration file /etc/neutron/neutron.conf. In the [DEFAULT] section, configure the RabbitMQ connection information, the core plug-in, the service plug-ins, and so on.
Note: I am using the self-service network type here, so service_plugins = router must be set and overlapping IPs must be allowed (the overlay-network option allow_overlapping_ips = true).
In the [database] section, configure the connection to the neutron database.
In the [keystone_authtoken] section, configure Keystone authentication.
In the [DEFAULT] section, configure the network-change notifications sent to nova.
In the [nova] section, configure access to the nova service.
In the [oslo_concurrency] section, configure the lock path.
Final neutron.conf configuration
[root@node01 ~]# grep -i ^"[a-z\[]" /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack123@node02
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:neutron@node02/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = node02:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
[root@node01 ~]#
Configure the ML2 plug-in
Edit the configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. In the [ml2] section, enable the flat, VLAN, and VXLAN type drivers.
Note: after the ML2 plug-in has been configured, removing values from the type_drivers option can lead to database inconsistency; in other words, once the database has been initialized, deleting any of the values above may leave the database in an inconsistent state.
In the [ml2] section, set the tenant network type to vxlan.
In the [ml2] section, enable the Linux bridge and layer-2 population mechanism drivers.
In the [ml2] section, enable the port security extension driver.
In the [ml2_type_flat] section, set flat_networks = provider.
Note: this defines the name of the provider (flat) network, i.e. what the instances' provider network is called. The name can be anything, but it is referenced again when the network is mapped onto a physical NIC and whenever networks are created later, so make sure the same name is used everywhere; the sketch after this note shows where the name reappears.
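A minimal cross-reference sketch of where the name provider must line up; the interface name ens33 is the one used later in this article, and the openstack CLI line is only an assumed example of a later step, not something configured here:

# ml2_conf.ini (controller)
[ml2_type_flat]
flat_networks = provider

# linuxbridge_agent.ini (controller and compute), format <provider network name>:<physical NIC>
[linux_bridge]
physical_interface_mappings = provider:ens33

# and later, when creating the provider network:
# openstack network create ... --provider-physical-network provider --provider-network-type flat ...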
In the [ml2_type_vxlan] section, configure the VXLAN identifier (VNI) range.
In the [securitygroup] section, enable ipset.
Final ml2_conf.ini configuration
[root@node01 ~]# grep -i ^"[a-z\[]" /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
[root@node01 ~]#
Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini. In the [linux_bridge] section, map the provider network to a physical interface.
Note: this configures the bridge mapping between the instances' provider network and a physical interface. Make sure the network name matches the provider network name configured above: the part before the colon is the provider network name, and the part after the colon is the physical NIC to bridge onto.
In the [vxlan] section, enable VXLAN, set the local management IP address, and enable l2_population.
Note: local_ip should be the controller node's management IP address (relevant if the node has more than one address); a quick way to check it is shown below.
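To confirm the address, you can inspect the management interface directly; in this lab the management address 192.168.0.41 sits on ens33 (an assumption that matches this single-NIC setup; adjust the interface name to your topology):

ip -4 addr show ens33 | grep inet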
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver.
Final linuxbridge_agent.ini configuration
[root@node01 ~]# grep -i ^"[a-z\[]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens33
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 192.168.0.41
l2_population = true
[root@node01 ~]#
Confirm that the br_netfilter kernel module is loaded; if it is not, load it and configure the related kernel parameters.
[root@node01 ~]# lsmod |grep br_netfilter
[root@node01 ~]# modprobe br_netfilter
[root@node01 ~]# lsmod |grep br_netfilter
br_netfilter           22209  0
bridge                136173  1 br_netfilter
[root@node01 ~]#
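modprobe only loads the module into the running kernel; if you also want it loaded after a reboot, one common approach (an extra step, not part of the original transcript) is to register it with systemd's modules-load mechanism:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf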
Configure the related kernel parameters
[root@node01 ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@node01 ~]#
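For completeness, these are the two lines that sysctl -p reads back; they are presumed to have been appended to /etc/sysctl.conf beforehand (a reasonable inference from the output, the edit itself is not shown in the transcript):

# /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1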
Configure the L3 agent
Edit /etc/neutron/l3_agent.ini. In the [DEFAULT] section, set the interface driver to linuxbridge.
[DEFAULT]
interface_driver = linuxbridge
Configure the DHCP agent
Edit /etc/neutron/dhcp_agent.ini. In the [DEFAULT] section, set the interface driver to linuxbridge, configure the DHCP driver, and enable isolated metadata.
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent
Edit /etc/neutron/metadata_agent.ini. In the [DEFAULT] section, configure the metadata server address and the shared secret.
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Note: metadata_proxy_shared_secret sets the shared secret; the value can be randomly generated or any string you choose.
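If you prefer a random value, one simple way to generate it (a suggestion; substitute the result for METADATA_SECRET both here and in nova.conf below) is:

openssl rand -hex 10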
Configure the nova service to use neutron
Edit /etc/nova/nova.conf. In the [neutron] section, configure the neutron connection details.
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Note: metadata_proxy_shared_secret here must match the secret configured for the metadata agent above.
Create a symbolic link from the ML2 configuration file to /etc/neutron/plugin.ini
[root@node01 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini [root@node01 ~]# ll /etc/neutron/ total 132 drwxr-xr-x 11 root root 260 Oct 31 00:03 conf.d -rw-r----- 1 root neutron 10867 Oct 31 01:23 dhcp_agent.ini -rw-r----- 1 root neutron 14466 Oct 31 01:23 l3_agent.ini -rw-r----- 1 root neutron 11394 Oct 31 01:30 metadata_agent.ini -rw-r----- 1 root neutron 72285 Oct 31 00:25 neutron.conf lrwxrwxrwx 1 root root 37 Oct 31 01:36 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini drwxr-xr-x 3 root root 17 Oct 31 00:03 plugins -rw-r----- 1 root neutron 12689 Feb 28 2020 policy.json -rw-r--r-- 1 root root 1195 Feb 28 2020 rootwrap.conf [root@node01 ~]#
Initialize the neutron database
[root@node01 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ > --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. Running upgrade for neutron ... INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. INFO [alembic.runtime.migration] Running upgrade -> kilo INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225 INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151 INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773 INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592 INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7 INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79 INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051 INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136 INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59 INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25 INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9 INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4 INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664 INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5 INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821 INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4 INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81 INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6 INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532 INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73 INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502 INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048 INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4 INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99 INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016 INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3 INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d INFO 
[alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297 INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39 INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050 INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9 INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53 INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70 INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90 INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4 INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426 INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524 INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37 INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4 INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70 INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192 INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9 INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6 INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c INFO [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0 INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62 INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353 INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586 INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d OK [root@node01 ~]#
Verification: connect to the neutron database and check whether the tables have been created.
MariaDB [(none)]> use neutron Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed MariaDB [neutron]> show tables; +-----------------------------------------+ | Tables_in_neutron | +-----------------------------------------+ | address_scopes | | agents | | alembic_version | | allowedaddresspairs | | arista_provisioned_nets | | arista_provisioned_tenants | | arista_provisioned_vms | | auto_allocated_topologies | | bgp_peers | | bgp_speaker_dragent_bindings | | bgp_speaker_network_bindings | | bgp_speaker_peer_bindings | | bgp_speakers | | brocadenetworks | | brocadeports | | cisco_csr_identifier_map | | cisco_hosting_devices | | cisco_ml2_apic_contracts | | cisco_ml2_apic_host_links | | cisco_ml2_apic_names | | cisco_ml2_n1kv_network_bindings | | cisco_ml2_n1kv_network_profiles | | cisco_ml2_n1kv_policy_profiles | | cisco_ml2_n1kv_port_bindings | | cisco_ml2_n1kv_profile_bindings | | cisco_ml2_n1kv_vlan_allocations | | cisco_ml2_n1kv_vxlan_allocations | | cisco_ml2_nexus_nve | | cisco_ml2_nexusport_bindings | | cisco_port_mappings | | cisco_router_mappings | | consistencyhashes | | default_security_group | | dnsnameservers | | dvr_host_macs | | externalnetworks | | extradhcpopts | | firewall_policies | | firewall_rules | | firewalls | | flavors | | flavorserviceprofilebindings | | floatingipdnses | | floatingips | | ha_router_agent_port_bindings | | ha_router_networks | | ha_router_vrid_allocations | | healthmonitors | | ikepolicies | | ipallocationpools | | ipallocations | | ipamallocationpools | | ipamallocations | | ipamsubnets | | ipsec_site_connections | | ipsecpeercidrs | | ipsecpolicies | | logs | | lsn | | lsn_port | | maclearningstates | | members | | meteringlabelrules | | meteringlabels | | ml2_brocadenetworks | | ml2_brocadeports | | ml2_distributed_port_bindings | | ml2_flat_allocations | | ml2_geneve_allocations | | ml2_geneve_endpoints | | ml2_gre_allocations | | ml2_gre_endpoints | | ml2_nexus_vxlan_allocations | | ml2_nexus_vxlan_mcast_groups | | ml2_port_binding_levels | | ml2_port_bindings | | ml2_ucsm_port_profiles | | ml2_vlan_allocations | | ml2_vxlan_allocations | | ml2_vxlan_endpoints | | multi_provider_networks | | networkconnections | | networkdhcpagentbindings | | networkdnsdomains | | networkgatewaydevicereferences | | networkgatewaydevices | | networkgateways | | networkqueuemappings | | networkrbacs | | networks | | networksecuritybindings | | networksegments | | neutron_nsx_network_mappings | | neutron_nsx_port_mappings | | neutron_nsx_router_mappings | | neutron_nsx_security_group_mappings | | nexthops | | nsxv_edge_dhcp_static_bindings | | nsxv_edge_vnic_bindings | | nsxv_firewall_rule_bindings | | nsxv_internal_edges | | nsxv_internal_networks | | nsxv_port_index_mappings | | nsxv_port_vnic_mappings | | nsxv_router_bindings | | nsxv_router_ext_attributes | | nsxv_rule_mappings | | nsxv_security_group_section_mappings | | nsxv_spoofguard_policy_network_mappings | | nsxv_tz_network_bindings | | nsxv_vdr_dhcp_bindings | | nuage_net_partition_router_mapping | | nuage_net_partitions | | nuage_provider_net_bindings | | nuage_subnet_l2dom_mapping | | poolloadbalanceragentbindings | | poolmonitorassociations | | pools | | poolstatisticss | | portbindingports | | portdataplanestatuses | | portdnses | | portforwardings | | portqueuemappings | | ports | | portsecuritybindings | | providerresourceassociations | | provisioningblocks | | qos_bandwidth_limit_rules | | 
qos_dscp_marking_rules | | qos_fip_policy_bindings | | qos_minimum_bandwidth_rules | | qos_network_policy_bindings | | qos_policies | | qos_policies_default | | qos_port_policy_bindings | | qospolicyrbacs | | qosqueues | | quotas | | quotausages | | reservations | | resourcedeltas | | router_extra_attributes | | routerl3agentbindings | | routerports | | routerroutes | | routerrules | | routers | | securitygroupportbindings | | securitygrouprules | | securitygroups | | segmenthostmappings | | serviceprofiles | | sessionpersistences | | standardattributes | | subnet_service_types | | subnetpoolprefixes | | subnetpools | | subnetroutes | | subnets | | subports | | tags | | trunks | | tz_network_bindings | | vcns_router_bindings | | vips | | vpnservices | +-----------------------------------------+ 167 rows in set (0.00 sec) MariaDB [neutron]>
Restart the nova-api service
[root@node01 ~]# systemctl restart openstack-nova-api.service [root@node01 ~]# ss -tnl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 *:9292 *:* LISTEN 0 128 *:22 *:* LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 100 *:6080 *:* LISTEN 0 128 *:8774 *:* LISTEN 0 128 *:8775 *:* LISTEN 0 128 *:9191 *:* LISTEN 0 128 :::80 :::* LISTEN 0 128 :::22 :::* LISTEN 0 100 ::1:25 :::* LISTEN 0 128 :::5000 :::* LISTEN 0 128 :::8778 :::* [root@node01 ~]#
Note: after the restart, make sure the nova-api ports 8774 and 8775 are listening.
Start the neutron services and enable them at boot
[root@node01 ~]# systemctl start neutron-server.service \ > neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ > neutron-metadata-agent.service [root@node01 ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service. Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service. Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service. Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service. [root@node01 ~]# ss -tnl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 *:9292 *:* LISTEN 0 128 *:22 *:* LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 128 *:9696 *:* LISTEN 0 100 *:6080 *:* LISTEN 0 128 *:8774 *:* LISTEN 0 128 *:8775 *:* LISTEN 0 128 *:9191 *:* LISTEN 0 128 :::80 :::* LISTEN 0 128 :::22 :::* LISTEN 0 100 ::1:25 :::* LISTEN 0 128 :::5000 :::* LISTEN 0 128 :::8778 :::* [root@node01 ~]#
Note: make sure port 9696 (neutron-server) is listening; a quick check is shown below.
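Two quick checks (assuming ss and curl are available on the controller, as in the transcripts above); the second asks the API directly and should return a small JSON document listing the API versions:

ss -tnl | grep 9696
curl http://controller:9696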
If you chose the self-service network type, you also need to start the L3 agent service and enable it at boot.
[root@node01 ~]# systemctl start neutron-l3-agent.service [root@node01 ~]# systemctl enable neutron-l3-agent.service Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service. [root@node01 ~]#
The neutron service on the controller node is now configured.
3. Install and configure the neutron service on the compute node
Install the neutron packages
[root@node03 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
Edit /etc/neutron/neutron.conf. In the [DEFAULT] section, configure the RabbitMQ connection information and set the authentication strategy to keystone.
In the [keystone_authtoken] section, configure Keystone authentication.
In the [oslo_concurrency] section, configure the lock path.
Final neutron.conf configuration
[root@node03 ~]# grep -i ^"[a-z\[]" /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack123@node02
auth_strategy = keystone
[agent]
[cors]
[database]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = node02:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
[root@node03 ~]#
Configure the Linux bridge agent
Edit the configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini. In the [linux_bridge] section, map the provider network to a physical interface.
Note: the part before the colon is the provider network name, which must match the name configured on the controller node; the part after the colon is the physical interface to bridge onto.
In the [vxlan] section, enable VXLAN, set the local management IP address, and enable l2_population.
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver.
Final linuxbridge_agent.ini configuration
[root@node03 ~]# grep -i ^"[a-z\[]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens33
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 192.168.0.43
l2_population = true
[root@node03 ~]#
Confirm that the br_netfilter module is loaded; if it is not, load it.
[root@node03 ~]# lsmod |grep br_netfilter
[root@node03 ~]# modprobe br_netfilter
[root@node03 ~]# lsmod |grep br_netfilter
br_netfilter           22209  0
bridge                136173  1 br_netfilter
[root@node03 ~]#
Edit /etc/sysctl.conf to configure the kernel parameters (the same two bridge parameters as on the controller node)
[root@node03 ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@node03 ~]#
Configure the nova service to use neutron
Edit /etc/nova/nova.conf. In the [neutron] section, configure the neutron connection details.
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the nova-compute service
[root@node03 ~]# systemctl restart openstack-nova-compute.service [root@node03 ~]#
Start the neutron-linuxbridge-agent service and enable it at boot
[root@node03 ~]# systemctl start neutron-linuxbridge-agent.service [root@node03 ~]# systemctl enable neutron-linuxbridge-agent.service Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service. [root@node03 ~]#
With that, the networking service on the compute node is installed and configured.
Verification: on the controller node, export the admin environment variables and list the loaded extensions to verify that the neutron-server process started successfully.
[root@node01 ~]# openstack extension list --network +-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ | Name | Alias | Description | +-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ | Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default. | | Availability Zone | availability_zone | The availability zone extension. | | Network Availability Zone | network_availability_zone | Availability zone support for network. | | Auto Allocated Topology Services | auto-allocated-topology | Auto Allocated Topology Services. | | Neutron L3 Configurable external gateway mode | ext-gw-mode | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway | | Port Binding | binding | Expose port bindings of a virtual port to external application | | agent | agent | The agent management extension. | | Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool | | L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among l3 agents | | Neutron external network | external-net | Adds external network attribute to network resource. | | Tag support for resources with standard attribute: subnet, trunk, router, network, policy, subnetpool, port, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. | | Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. | | Network MTU | net-mtu | Provides MTU attribute for a network resource. | | Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. | | Quota management support | quotas | Expose functions for quotas management per tenant | | If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. | | Availability Zone Filter Extension | availability_zone_filter | Add filter parameters to AvailabilityZone resource | | HA Router extension | l3-ha | Adds HA capability to routers. | | Filter parameters validation | filter-validation | Provides validation on filter parameters. | | Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks | | Quota details management support | quota_details | Expose functions for quotas usage statistics per project | | Address scope | address-scope | Address scopes extension. | | Neutron Extra Route | extraroute | Extra routes configuration for L3 router | | Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. 
| | Empty String Filtering Extension | empty-string-filtering | Allow filtering by attributes with empty string value | | Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field | | Neutron Port MAC address regenerate | port-mac-address-regenerate | Network port MAC address regenerate | | Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. | | Provider Network | provider | Expose mapping of virtual networks to physical networks | | Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services | | Router Flavor Extension | l3-flavors | Flavor support for routers. | | Port Security | port-security | Provides port security | | Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) | | Port filtering on security groups | port-security-groups-filtering | Provides security groups filtering when listing ports | | Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. | | Pagination support | pagination | Extension that indicates that pagination is enabled. | | Sorting support | sorting | Extension that indicates that sorting is enabled. | | security-group | security-group | The security groups extension. | | DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents | | Floating IP Port Details Extension | fip-port-details | Add port_details attribute to Floating IP resource | | Router Availability Zone | router_availability_zone | Availability zone support for router. | | RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. | | standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes | | IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing ports | | Neutron L3 Router | router | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway. | | Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs | | Port Bindings Extended | binding-extended | Expose port bindings of a virtual port to external application | | project_id field enabled | project-id | Extension that indicates that project_id field is enabled. | | Distributed Virtual Router | dvr | Enables configuration of Distributed Virtual Routers. | +-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ [root@node01 ~]#
Note: if you see the output above, neutron-server has started its processes and the service is working.
Further verification: list the network agents and check that every agent is up.
[root@node01 ~]# source admin.sh [root@node01 ~]# openstack network agent list +--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+ | 749a9639-e85c-4cfd-a936-6c379ee85aac | L3 agent | node01.test.org | nova | :-) | UP | neutron-l3-agent | | ab400ecf-0488-4710-87ff-9405e84ba444 | Linux bridge agent | node01.test.org | None | :-) | UP | neutron-linuxbridge-agent | | b08152c8-c3ef-4230-ac50-c7ab445dade2 | DHCP agent | node01.test.org | nova | :-) | UP | neutron-dhcp-agent | | bea011e8-4302-44c5-9b91-56fbc282e990 | Metadata agent | node01.test.org | None | :-) | UP | neutron-metadata-agent | | ec25d21e-f197-4eb1-95aa-bd9ec0d1d43f | Linux bridge agent | node03.test.org | None | :-) | UP | neutron-linuxbridge-agent | +--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+ [root@node01 ~]#
Note: node01 shows four agents and node03 shows one, all in the UP state, which means the neutron agents are configured correctly and running normally.
OK, that completes the installation, configuration, and verification of the neutron networking service on the controller and compute nodes.