With the previous components (keystone, glance, nova, neutron) installed, you can now create and launch an instance from the command line.
Following the official guide to install OpenStack Pike: environment configuration
Following the official guide to install OpenStack Pike: keystone installation
Following the official guide to install OpenStack Pike: glance installation
Following the official guide to install OpenStack Pike: neutron installation
Creating and launching an instance involves the following steps:
1. Create a virtual network (here using networking option 1: provider networks)
Create virtual networks for the networking option that you chose when configuring Neutron. If you chose option 1, create only the provider network. If you chose option 2, create the provider and self-service networks.
# source admin-openrc
Create the network:
# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
The --share option allows all projects to use the virtual network.
The --external option defines the virtual network to be external. If you wish to create an internal network, you can use --internal instead. Default value is internal.
The --provider-physical-network provider and --provider-network-type flat options connect the flat virtual network to the flat (native/untagged) physical network on the eth1 interface on the host using information from the following files:
Because the local NIC on the hosts in this environment is named ens33 rather than eth1, replace eth1 with ens33 in the configuration.
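For reference, a minimal sketch of the files the note above refers to, based on the official Pike guide and assuming the Linux bridge mechanism driver (with ens33 substituted for the interface name):

/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2_type_flat]
flat_networks = provider

/etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:ens33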
Create a subnet on the network:
# openstack subnet create --network provider --allocation-pool start=192.168.101.100,end=192.168.101.200 --dns-nameserver 192.168.101.2 --gateway 192.168.101.2 --subnet-range 192.168.101.0/24 provider
Create m1.nano flavor:
# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Generate a key pair
Most cloud images support public key authentication rather than conventional password authentication. Before launching an instance, you must add a public key to the Compute service.
Source the demo project credentials:
# source demo-openrc
Generate a key pair and add a public key:
# ssh-keygen -q -N ""
# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
Verify addition of the key pair:
# openstack keypair list
Add security group rules
By default, the default security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images such as CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH).
Add rules to the default security group:
Permit ICMP (ping):
# openstack security group rule create --proto icmp default
Permit secure shell (SSH) access:
# openstack security group rule create --proto tcp --dst-port 22 default
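If you want to verify the rules, they can be listed with the standard client command (optional):
# openstack security group rule list default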
Launch an instance
If you chose networking option 1, you can only launch an instance on the provider network. If you chose networking option 2, you can launch an instance on the provider network and the self-service network.
Before launching an instance, you must specify the flavor, image name, network, security group, key, and instance name.
On the controller node, source the demo credentials to gain access to user-only CLI commands:
# source demo-openrc
A flavor specifies a virtual resource allocation profile which includes processor, memory, and storage.
List available flavors:
# openstack flavor list
List available images:
# openstack image list
List available networks:
# openstack network list
Since the network chosen was the provider network (option 1), only it appears in the list; had you chosen option 2, the self-service network would also be listed.
List available security groups:
# openstack security group list
Now launch the instance:
Replace PROVIDER_NET_ID with the ID of the provider network.
# openstack server create --flavor m1.nano --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance
If you chose option 1 and your environment contains only one network, you can omit the --nic option because OpenStack automatically chooses the only network available.
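For example, a sketch of the same launch command with --nic omitted (valid only when a single network exists):
# openstack server create --flavor m1.nano --image cirros --security-group default --key-name mykey provider-instance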
Replace PROVIDER_NET_ID above with the network ID obtained earlier, 7ccde909-94fa-4315-81e6-aa2652166c5b:
# openstack server create --flavor m1.nano --image cirros --nic net-id=7ccde909-94fa-4315-81e6-aa2652166c5b --security-group default --key-name mykey provider-instance
The final argument is the instance name; you can name it whatever you like.
Check the status of your instance:
# openstack server list
The status changes from BUILD to ACTIVE when the build process successfully completes.
You can see the instance has already been assigned an IP address.
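You can also inspect the instance in detail, including its addresses and power state:
# openstack server show provider-instance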
Access the instance using the virtual console
Obtain a Virtual Network Computing (VNC) session URL for your instance and access it from a web browser:
# openstack console url show provider-instance
(the argument at the end is the instance name)
Now open the URL in a browser:
The console kept hanging at the GRUB screen. The fix:
Modify the configuration file /etc/nova/nova.conf on the compute node as follows:
[libvirt]
virt_type = qemu
cpu_mode = none
Restart the services:
# systemctl restart libvirtd.service openstack-nova-compute.service
Restart the nova services on the controller node:
# systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Then create a new instance from the controller node:
# source demo-openrc
# openstack server create --flavor m1.nano --image cirros001 --nic net-id=7ccde909-94fa-4315-81e6-aa2652166c5b --security-group default --key-name mykey instance002
# openstack server list
# openstack console url show instance002
View the VM information on the compute node:
The VM IDs here match the IDs shown by openstack server list on the controller node.
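As a cross-check on the compute node (assuming the default instances_path of /var/lib/nova/instances), you can compare the libvirt domains with the instance directories:
# virsh list --all
# ls /var/lib/nova/instances/
virsh reports the libvirt domain names (instance-0000000N), while the directory names are the UUIDs shown by openstack server list.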
Logs for each component:
# grep 'ERROR' /var/log/nova/*
# grep 'ERROR' /var/log/neutron/*
# grep 'ERROR' /var/log/glance/*
# grep 'ERROR' /var/log/keystone/*
Viewing the instances on the node (more VMs were created afterwards):
Tip: in an OpenStack environment, the bridge NIC name is the same on every compute node host.
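With the Linux bridge agent, the bridge name is derived from the leading characters of the network UUID, which is why it is identical on every compute node. If bridge-utils is installed, you can check with:
# brctl show
For the network above, the bridge should show up as brq7ccde909-94.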
These correspond to the three VMs:
[root@node2 instances]# cd 10456257-2678-4f81-b72c-8de42872675e/
[root@node2 10456257-2678-4f81-b72c-8de42872675e]# ll
total 2736
-rw------- 1 root root   38149 Oct 21 19:12 console.log
-rw-r--r-- 1 qemu qemu 2752512 Oct 21 19:12 disk
-rw-r--r-- 1 nova nova      79 Oct 21 18:52 disk.info
- console.log: console log
- disk: virtual disk
- disk.info: virtual disk information
In the figure above, the files under _base are the images (the two uploaded images).
[root@node2 10456257-2678-4f81-b72c-8de42872675e]# ls -lh
total 2.7M
-rw------- 1 root root  38K Oct 21 19:12 console.log
-rw-r--r-- 1 qemu qemu 2.7M Oct 21 19:12 disk
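To see how an instance disk relates to the images under _base, qemu-img can display the copy-on-write backing file (assuming the default qcow2 layout):
# qemu-img info disk
The output should contain a backing file line pointing at an image under _base.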
Cloud instance metadata: usage and how it works (examined on the controller node)
The result shown above is a namespace.
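To list the namespaces yourself (the qdhcp- name embeds the UUID of the provider network created above):
# ip netns list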
Then run the ip command inside the namespace:
# ip netns exec qdhcp-7ccde909-94fa-4315-81e6-aa2652166c5b ip ad li
You can run commands inside the namespace this way; the output shows that several additional IP addresses are present.
- How does a cloud instance obtain this information from DHCP?
This is enabled by the setting enable_isolated_metadata = true in the /etc/neutron/dhcp_agent.ini configuration file.
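A minimal sketch of the relevant stanza (the option lives in the [DEFAULT] section):
[DEFAULT]
enable_isolated_metadata = true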
We can also see that ports 80 and 53 are listening inside the namespace; port 80 is what cloud instances use to access the metadata service.
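As a quick check (assuming net-tools is installed on the controller), the listeners inside the namespace can be inspected with:
# ip netns exec qdhcp-7ccde909-94fa-4315-81e6-aa2652166c5b netstat -lntp
Port 53 is dnsmasq serving DNS for the subnet. From inside a cloud instance, the metadata itself is fetched from the well-known link-local address:
$ curl http://169.254.169.254/latest/meta-data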