Tungsten Fabric Knowledge Base: Supplementary Notes on OpenStack, K8s, and CentOS Installation Issues

Published by the TF Chinese community on 2020-09-22

Author: Tatsuya Naganawa  Translator: TF editorial team

Multi-kube-master deployment

3 Tungsten Fabric controller nodes: m3.xlarge (4 vcpu) -> c3.4xlarge (16 vcpu) (schema-transformer needs CPU resources for ACL calculation, so I had to add resources). 100 kube-masters and 800 workers: m3.medium.

In the following, the tf-controller installation and first-containers.yaml are the same as before; for more details, ask the TF Chinese community.

The AMI is also the same (ami-3185744e), but the kernel is updated via yum -y update kernel (the instance is converted to an image, which is then used to launch the instances).

/tmp/aaa.pem is the keypair specified for the ec2 instances.

(Type the commands on one of the Tungsten Fabric controller nodes.)

    yum -y install epel-release
    yum -y install parallel
    aws ec2 describe-instances --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text | tr '\t' '\n' > /tmp/all.txt
    head -n 100 /tmp/all.txt > masters.txt
    tail -n 800 /tmp/all.txt > workers.txt
    ulimit -n 4096
    cat all.txt | parallel -j1000 ssh -i /tmp/aaa.pem -o StrictHostKeyChecking=no centos@{} id
    cat all.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    cat -n masters.txt | parallel -j1000 -a - --colsep '\t' ssh -i /tmp/aaa.pem centos@{2} sudo kubeadm init --token aaaaaa.aaaabbbbccccdddd --ignore-preflight-errors=NumCPU --pod-network-cidr=10.32.{1}.0/24 --service-cidr=10.96.{1}.0/24 --service-dns-domain=cluster{1}.local

    vi assign-kube-master.py

    computenodes=8
    with open ('masters.txt') as aaa:
     with open ('workers.txt') as bbb:
      for masternode in aaa.read().rstrip().split('\n'):
       for i in range (computenodes):
        tmp=bbb.readline().rstrip()
        print ("{}\t{}".format(masternode, tmp))

(join.txt is the output of this script: one "master<TAB>worker" pair per line)

    cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo cp /etc/kubernetes/admin.conf /tmp/admin.conf
    cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo chmod 644 /tmp/admin.conf
    cat -n masters.txt | parallel -j1000 -a - --colsep '\t' scp -i /tmp/aaa.pem centos@{2}:/tmp/admin.conf kubeconfig-{1}
    cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} get node
    cat -n join.txt | parallel -j1000 -a - --colsep '\t' ssh -i /tmp/aaa.pem centos@{3} sudo kubeadm join {2}:6443 --token aaaaaa.aaaabbbbccccdddd --discovery-token-unsafe-skip-ca-verification

(modify controller-ip in cni-tungsten-fabric.yaml)

    cat -n masters.txt | parallel -j1000 -a - --colsep '\t' cp cni-tungsten-fabric.yaml cni-{1}.yaml
    cat -n masters.txt | parallel -j1000 -a - --colsep '\t' sed -i -e "s/k8s2/k8s{1}/" -e "s/10.32.2/10.32.{1}/" -e "s/10.64.2/10.64.{1}/" -e "s/10.96.2/10.96.{1}/" -e "s/172.31.x.x/{2}/" cni-{1}.yaml
    cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} apply -f cni-{1}.yaml
    sed -i 's!kubectl!kubectl --kubeconfig=/etc/kubernetes/admin.conf!' set-label.sh
    cat masters.txt | parallel -j1000 scp -i /tmp/aaa.pem set-label.sh centos@{}:/tmp
    cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo bash /tmp/set-label.sh
    cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} create -f first-containers.yaml
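assign-kube-master.py simply deals out eight consecutive workers to each master. A self-contained sketch of the same pairing (the address lists below are made-up stand-ins for masters.txt and workers.txt):

```python
# Sketch of assign-kube-master.py's pairing: each of 100 masters gets 8 workers.
computenodes = 8
masters = ["172.31.0.{}".format(i) for i in range(100)]                       # 100 masters
workers = ["172.31.{}.{}".format(1 + i // 100, i % 100) for i in range(800)]  # 800 workers

pairs = []
it = iter(workers)
for masternode in masters:
    for _ in range(computenodes):
        pairs.append((masternode, next(it)))  # one "master<TAB>worker" line in join.txt

print(len(pairs))   # 800: every worker is assigned to exactly one master
print(pairs[0])
```

With `cat -n` and `--colsep '\t'`, parallel then sees each pair as {2} (master) and {3} (worker) for the kubeadm join step.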

Nested Kubernetes installation on OpenStack

A nested kubernetes installation can be tried on an all-in-one openstack node.

After ansible-deployer has set up that node, a link-local service for vRouter TCP/9091 additionally needs to be created manually.

This configuration creates DNAT/SNAT, e.g. from src: 10.0.1.3:xxxx, dst-ip: 10.1.1.11:9091 to src: compute's vhost0 ip:xxxx, dst-ip: 127.0.0.1:9091, so the CNI inside the openstack VM can talk directly to the vrouter-agent on the compute node and fetch port/ip information for the containers.

  • The IP address can be from the subnet, or from outside the subnet.
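The DNAT/SNAT step above can be sketched as a small flow-rewrite function (addresses are from the example; VHOST0_IP is an assumed compute-node vhost0 address, not taken from the original):

```python
# Sketch of the link-local DNAT/SNAT described above.
LINK_LOCAL_VIP = ("10.1.1.11", 9091)   # link-local service address seen by the VM
LOCAL_SERVICE = ("127.0.0.1", 9091)    # vrouter-agent's local port on the compute node
VHOST0_IP = "172.31.10.212"            # compute node's vhost0 address (assumed)

def translate(src, dst):
    """Return (new_src, new_dst) after DNAT/SNAT, or None if the flow doesn't match."""
    if dst != LINK_LOCAL_VIP:
        return None
    src_ip, src_port = src
    return ((VHOST0_IP, src_port), LOCAL_SERVICE)

print(translate(("10.0.1.3", 34567), ("10.1.1.11", 9091)))
```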

On that node, two CentOS7 (or Ubuntu Bionic) nodes are created, and a kubernetes cluster is installed by the same procedure, although the yaml file needs to be the one for a nested installation.

    ./resolve-manifest.sh contrail-nested-kubernetes.yaml > cni-tungsten-fabric.yaml

    KUBEMANAGER_NESTED_MODE: "{{ KUBEMANAGER_NESTED_MODE }}" ## this needs to be "1"
    KUBERNESTES_NESTED_VROUTER_VIP: {{ KUBERNESTES_NESTED_VROUTER_VIP }} ## this parameter needs to be the same IP as the one defined in the link-local service (such as 10.1.1.11)
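resolve-manifest.sh expands {{ VAR }} placeholders from the environment; a minimal Python re-implementation of that idea (a sketch, not the actual script):

```python
import re

def resolve_manifest(template, env):
    """Replace {{ VAR }} placeholders with values from env; leave unknowns intact."""
    def sub(match):
        return env.get(match.group(1), match.group(0))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)

template = ('KUBEMANAGER_NESTED_MODE: "{{ KUBEMANAGER_NESTED_MODE }}"\n'
            'KUBERNESTES_NESTED_VROUTER_VIP: {{ KUBERNESTES_NESTED_VROUTER_VIP }}')
env = {"KUBEMANAGER_NESTED_MODE": "1",
       "KUBERNESTES_NESTED_VROUTER_VIP": "10.1.1.11"}
print(resolve_manifest(template, env))
```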

If coredns receives an IP, the nested installation is working.

vRouter ml2 plugin

I tried the ml2 feature of the vRouter neutron plugin.

Three CentOS 7.5 nodes on AWS are used (4 cpu, 16 GB memory, 30 GB disk, ami: ami-3185744e).

The attached steps are based on that document.

    openstack-controller: 172.31.15.248
    tungsten-fabric-controller (vRouter): 172.31.10.212
    nova-compute (ovs): 172.31.0.231

(Commands are typed on the tungsten-fabric-controller, as the centos user (not root).)

    sudo yum -y remove PyYAML python-requests
    sudo yum -y install git patch
    sudo easy_install pip
    sudo pip install PyYAML requests ansible==2.8.8
    ssh-keygen

(add id_rsa.pub to authorized_keys on all three nodes (centos user (not root)))

    git clone https://opendev.org/x/networking-opencontrail.git
    cd networking-opencontrail
    patch -p1 < ml2-vrouter.diff
    cd playbooks
    cp -i hosts.example hosts
    cp -i group_vars/all.yml.example group_vars/all.yml

(ssh to all the nodes once, to update known_hosts)

    ansible-playbook main.yml -i hosts

 - The devstack log is in /opt/stack/logs/stack.sh.log
 - OpenStack process logs are written to /var/log/messages
 - 'systemctl list-unit-files | grep devstack' shows the systemctl entries for the openstack processes (on the openstack controller node)
If devstack fails with a mariadb login error, type these commands to fix it. (The last two lines need to be modified to the openstack controller's IP and FQDN.) The commands are typed by the "centos" user (not root).

    mysqladmin -u root password admin
    mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''%'\'' identified by '\''admin'\'';'
    mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''172.31.15.248'\'' identified by '\''admin'\'';'
    mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''ip-172-31-15-248.ap-northeast-1.compute.internal'\'' identified by '\''admin'\'';'
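The three GRANT statements differ only in the host part. A throwaway sketch (a hypothetical helper, not part of the deployment) that generates them for a given controller IP and FQDN:

```python
# Build the GRANT statements above for a controller's IP and FQDN.
# The values match the example deployment; substitute your own.
controller_ip = "172.31.15.248"
controller_fqdn = "ip-172-31-15-248.ap-northeast-1.compute.internal"

hosts = ["%", controller_ip, controller_fqdn]
stmts = ["GRANT ALL PRIVILEGES ON *.* TO 'root'@'{}' IDENTIFIED BY 'admin';".format(h)
         for h in hosts]
for s in stmts:
    print(s)
```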

The hosts file, group_vars/all.yml, and the patch are attached below. (Some changes are just bug fixes, but some change default behavior.)

[centos@ip-172-31-10-212 playbooks]$ cat hosts
controller ansible_host=172.31.15.248 ansible_user=centos

# this host should be one of the compute host group,
# since a playbook to deploy a Tungsten Fabric compute node separately is not ready yet
contrail_controller ansible_host=172.31.10.212 ansible_user=centos local_ip=172.31.10.212

[contrail]
contrail_controller

[openvswitch]
other_compute ansible_host=172.31.0.231 local_ip=172.31.0.231 ansible_user=centos

[compute:children]
contrail
openvswitch
[centos@ip-172-31-10-212 playbooks]$ cat group_vars/all.yml
---
# IP address for OpenConrail (e.g. 192.168.0.2)
contrail_ip: 172.31.10.212

# Gateway address for OpenConrail (e.g. 192.168.0.1)
contrail_gateway:

# Interface name for OpenConrail (e.g. eth0)
contrail_interface:

# IP address for OpenStack VM (e.g. 192.168.0.3)
openstack_ip: 172.31.15.248

# OpenStack branch used on the VM
openstack_branch: stable/queens

# Optionally, use a different plugin version (default is the OpenStack branch)
networking_plugin_version: master

# Tungsten Fabric docker image tag for contrail-ansible-deployer
contrail_version: master-latest

# If true, the networking_bgpvpn plugin is installed with the Tungsten Fabric driver
install_networking_bgpvpn_plugin: false

# If true, integrates with device manager (which will be started) and vRouter
# encapsulation priority will be set to 'VXLAN,MPLSoUDP,MPLSoGRE'
dm_integration_enabled: false

# Optional path to a file with topology for DM integration. When set and DM integration
# is enabled, the topology.yaml file will be copied to this location
dm_topology_file:

# If true, the password of instances created for the current ansible user is set to the value of instance_password
change_password: false
# instance_password: uberpass1

# If set, overrides the docker daemon /etc config file with this data
# docker_config:
[centos@ip-172-31-10-212 playbooks]$
[centos@ip-172-31-10-212 networking-opencontrail]$ cat ml2-vrouter.diff
diff --git a/playbooks/roles/contrail_node/tasks/main.yml b/playbooks/roles/contrail_node/tasks/main.yml
index ee29b05..272ee47 100644
--- a/playbooks/roles/contrail_node/tasks/main.yml
+++ b/playbooks/roles/contrail_node/tasks/main.yml
@@ -7,7 +7,6 @@
       - epel-release
       - gcc
       - git
-      - ansible-2.4.*
       - yum-utils
       - libffi-devel
     state: present
@@ -61,20 +60,20 @@
     chdir: ~/contrail-ansible-deployer/
     executable: /bin/bash

-- name: Generate ssh key for provisioning other nodes
-  openssh_keypair:
-    path: ~/.ssh/id_rsa
-    state: present
-  register: contrail_deployer_ssh_key
-
-- name: Propagate generated key
-  authorized_key:
-    user: "{{ ansible_user }}"
-    state: present
-    key: "{{ contrail_deployer_ssh_key.public_key }}"
-  delegate_to: "{{ item }}"
-  with_items: "{{ groups.contrail }}"
-  when: contrail_deployer_ssh_key.public_key
+#- name: Generate ssh key for provisioning other nodes
+#  openssh_keypair:
+#    path: ~/.ssh/id_rsa
+#    state: present
+#  register: contrail_deployer_ssh_key
+#
+#- name: Propagate generated key
+#  authorized_key:
+#    user: "{{ ansible_user }}"
+#    state: present
+#    key: "{{ contrail_deployer_ssh_key.public_key }}"
+#  delegate_to: "{{ item }}"
+#  with_items: "{{ groups.contrail }}"
+#  when: contrail_deployer_ssh_key.public_key

 - name: Provision Node before deploy contrail
   shell: |
@@ -105,4 +104,4 @@
     sleep: 5
     host: "{{ contrail_ip }}"
     port: 8082
-    timeout: 300
\ No newline at end of file
+    timeout: 300
diff --git a/playbooks/roles/contrail_node/templates/instances.yaml.j2 b/playbooks/roles/contrail_node/templates/instances.yaml.j2
index e3617fd..81ea101 100644
--- a/playbooks/roles/contrail_node/templates/instances.yaml.j2
+++ b/playbooks/roles/contrail_node/templates/instances.yaml.j2
@@ -14,6 +14,7 @@ instances:
       config_database:
       config:
       control:
+      analytics:
       webui:
 {% if "contrail_controller" in groups["contrail"] %}
       vrouter:
diff --git a/playbooks/roles/docker/tasks/main.yml b/playbooks/roles/docker/tasks/main.yml
index 8d7971b..5ed9352 100644
--- a/playbooks/roles/docker/tasks/main.yml
+++ b/playbooks/roles/docker/tasks/main.yml
@@ -6,7 +6,6 @@
       - epel-release
       - gcc
       - git
-      - ansible-2.4.*
       - yum-utils
       - libffi-devel
     state: present
@@ -62,4 +61,4 @@
       - docker-py==1.10.6
       - docker-compose==1.9.0
     state: present
-    extra_args: --user
\ No newline at end of file
+    extra_args: --user
diff --git a/playbooks/roles/node/tasks/main.yml b/playbooks/roles/node/tasks/main.yml
index 0fb1751..d9ab111 100644
--- a/playbooks/roles/node/tasks/main.yml
+++ b/playbooks/roles/node/tasks/main.yml
@@ -1,13 +1,21 @@
 ---
-- name: Update kernel
+- name: Install required utilities
   become: yes
   yum:
-    name: kernel
-    state: latest
-  register: update_kernel
+    name:
+      - python3-devel
+      - libibverbs  ## needed by openstack controller node
+    state: present

-- name: Reboot the machine
-  become: yes
-  reboot:
-  when: update_kernel.changed
-  register: reboot_machine
+#- name: Update kernel
+#  become: yes
+#  yum:
+#    name: kernel
+#    state: latest
+#  register: update_kernel
+#
+#- name: Reboot the machine
+#  become: yes
+#  reboot:
+#  when: update_kernel.changed
+#  register: reboot_machine
diff --git a/playbooks/roles/restack_node/tasks/main.yml b/playbooks/roles/restack_node/tasks/main.yml
index a11e06e..f66d2ee 100644
--- a/playbooks/roles/restack_node/tasks/main.yml
+++ b/playbooks/roles/restack_node/tasks/main.yml
@@ -9,7 +9,7 @@
   become: yes
   pip:
     name:
-      - setuptools
+      - setuptools==43.0.0
       - requests
     state: forcereinstall
[centos@ip-172-31-10-212 networking-opencontrail]$

It takes about 50 minutes to finish the installation.

Although /home/centos/devstack/openrc can be used for a "demo" user login, admin access is needed to specify the network type (empty for vRouter, "vxlan" for ovs), so an adminrc needs to be created manually.

[centos@ip-172-31-15-248 ~]$ cat adminrc
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_IDENTITY_API_VERSION=3
export OS_PASSWORD=admin
export OS_AUTH_TYPE=password
export OS_AUTH_URL=
[centos@ip-172-31-15-248 ~]$
openstack network create testvn
openstack subnet create --subnet-range 192.168.100.0/24 --network testvn subnet1
openstack network create --provider-network-type vxlan testvn-ovs
openstack subnet create --subnet-range 192.168.110.0/24 --network testvn-ovs subnet1-ovs

 - Two virtual networks are created:

[centos@ip-172-31-15-248 ~]$ openstack network list
+--------------------------------------+------------+--------------------------------------+
| ID                                   | Name       | Subnets                              |
+--------------------------------------+------------+--------------------------------------+
| d4e08516-71fc-401b-94fb-f52271c28dc9 | testvn-ovs | 991417ab-7da5-44ed-b686-8a14abbe46bb |
| e872b73e-100e-4ab0-9c53-770e129227e8 | testvn     | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
+--------------------------------------+------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

 - testvn's provider:network_type is empty ('local', with no segmentation ID):

[centos@ip-172-31-15-248 ~]$ openstack network show testvn
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:42Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | e872b73e-100e-4ab0-9c53-770e129227e8 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | testvn                               |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | local                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:44Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

 - It is created in Tungsten Fabric's database:

(venv) [root@ip-172-31-10-212 ~]# contrail-api-cli --host 172.31.10.212 ls -l virtual-network
virtual-network/e872b73e-100e-4ab0-9c53-770e129227e8  default-domain:admin:testvn
virtual-network/5a88a460-b049-4114-a3ef-d7939853cb13  default-domain:default-project:dci-network
virtual-network/f61d52b0-6577-42e0-a61f-7f1834a2f45e  default-domain:default-project:__link_local__
virtual-network/46b5d74a-24d3-47dd-bc82-c18f6bc706d7  default-domain:default-project:default-virtual-network
virtual-network/52925e2d-8c5d-4573-9317-2c346fb9edf0  default-domain:default-project:ip-fabric
virtual-network/2b0469cf-921f-4369-93a7-2d73350c82e7  default-domain:default-project:_internal_vn_ipv6_link_local
(venv) [root@ip-172-31-10-212 ~]#

 - On the other hand, testvn-ovs's provider:network_type is vxlan, and its segmentation ID and MTU are assigned automatically:

[centos@ip-172-31-15-248 ~]$ openstack network show testvn-ovs
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:47Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | d4e08516-71fc-401b-94fb-f52271c28dc9 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1450                                 |
| name                      | testvn-ovs                           |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 50                                   |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 991417ab-7da5-44ed-b686-8a14abbe46bb |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:49Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$
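The 1450-byte MTU reported for testvn-ovs is the standard 1500 minus VXLAN encapsulation overhead (assuming the usual non-VLAN header sizes); a quick check of the arithmetic:

```python
# VXLAN-over-IPv4 overhead: outer IP + outer UDP + VXLAN header + inner Ethernet.
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN_HDR = 8
INNER_ETH = 14

overhead = OUTER_IPV4 + OUTER_UDP + VXLAN_HDR + INNER_ETH
print(overhead, 1500 - overhead)  # 50 bytes of overhead leaves a 1450-byte MTU
```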

CentOS 8 installation procedure

centos8.2 and ansible-deployer are used; only python3 is used (no python2).

 - ansible 2.8.x is needed

One node is used for tf-controller and kube-master, and one for vRouter.

(all nodes)

    yum install python3 chrony
    alternatives --set python /usr/bin/python3

(vRouter nodes)

    yum install network-scripts

 - this is needed since vRouter currently does not support NetworkManager

(ansible node)

    sudo yum -y install git
    sudo pip3 install PyYAML requests ansible

cirros-deployment-86885fbf85-tjkwn   1/1     Running   0          13s   10.47.255.249   ip-172-31-2-120.ap-northeast-1.compute.internal
[root@ip-172-31-7-20 ~]#
[root@ip-172-31-7-20 ~]# kubectl exec -it cirros-deployment-86885fbf85-7z78k sh
/ # ip -o a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
17: eth0    inet 10.47.255.250/12 scope global eth0\       valid_lft forever preferred_lft forever
/ # ping 10.47.255.249
PING 10.47.255.249 (10.47.255.249): 56 data bytes
64 bytes from 10.47.255.249: seq=0 ttl=63 time=0.657 ms
64 bytes from 10.47.255.249: seq=1 ttl=63 time=0.073 ms
^C
--- 10.47.255.249 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.365/0.657 ms
/ #
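As a quick consistency check, the ping summary line can be recomputed from the two samples shown:

```python
# Recompute the ping summary from the two round-trip samples above.
samples = [0.657, 0.073]  # ms
mn, mx = min(samples), max(samples)
avg = sum(samples) / len(samples)
print("round-trip min/avg/max = {:.3f}/{:.3f}/{:.3f} ms".format(mn, avg, mx))
```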
 - For chrony to work correctly after vRouter installation, the chronyd service may need to be restarted.

[root@ip-172-31-4-206 ~]# chronyc -n sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 169.254.169.123               3   4     0   906  -8687ns[  -12us] +/-  428us
^? 129.250.35.250                2   7     0  1002   +429us[ +428us] +/-   73ms
^? 167.179.96.146                2   7     0   937   +665us[ +662us] +/- 2859us
^? 194.0.5.123                   2   6     0  1129   +477us[ +473us] +/-   44ms
^? 103.202.216.35                3   6     0   933  +9662ns[+6618ns] +/-  145ms
[root@ip-172-31-4-206 ~]#
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
· chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 16:00:34 UTC; 33min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 727 (chronyd)
    Tasks: 1 (limit: 49683)
   Memory: 2.1M
   CGroup: /system.slice/chronyd.service
           └─727 /usr/sbin/chronyd
Jun 28 16:00:33 localhost.localdomain chronyd[727]: Using right/UTC timezone to obtain leap second data
Jun 28 16:00:34 localhost.localdomain systemd[1]: Started NTP client/server.
Jun 28 16:00:42 localhost.localdomain chronyd[727]: Selected source 169.254.169.123
Jun 28 16:00:42 localhost.localdomain chronyd[727]: System clock TAI offset set to 37 seconds
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 167.179.96.146 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 103.202.216.35 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 129.250.35.250 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 194.0.5.123 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 169.254.169.123 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Can't synchronise: no selectable sources
[root@ip-172-31-4-206 ~]# service chronyd restart
Redirecting to /bin/systemctl restart chronyd.service
[root@ip-172-31-4-206 ~]#
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
· chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 16:34:41 UTC; 2s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 25252 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 25247 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 25250 (chronyd)
    Tasks: 1 (limit: 49683)
   Memory: 1.0M
   CGroup: /system.slice/chronyd.service
           └─25250 /usr/sbin/chronyd
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal systemd[1]: Starting NTP client/server...
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: chronyd version 3.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND>
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: Frequency 35.298 +/- 0.039 ppm read from /var/lib/chrony/drift
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: Using right/UTC timezone to obtain leap second data
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal systemd[1]: Started NTP client/server.
[root@ip-172-31-4-206 ~]# chronyc -n sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 169.254.169.123               3   4    17     4  -2369ns[  -27us] +/-  451us
^- 94.154.96.7                   2   6    17     5    +30ms[  +30ms] +/-  148ms
^- 185.51.192.34                 2   6    17     3  -2951us[-2951us] +/-  150ms
^- 188.125.64.6                  2   6    17     3  +9526us[+9526us] +/-  143ms
^- 216.218.254.202               1   6    17     5    +15ms[  +15ms] +/-   72ms
[root@ip-172-31-4-206 ~]#
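Reading chronyc's MS column: before the restart every source shows '^?' (unselectable); afterwards one shows '^*' (the selected synchronisation source) and the rest '^-' (viable but not selected). A small sketch of that classification:

```python
# Classify chronyc source lines by the state flag in the MS column.
lines = [
    "^? 169.254.169.123               3   4     0   906  -8687ns[  -12us] +/-  428us",
    "^* 169.254.169.123               3   4    17     4  -2369ns[  -27us] +/-  451us",
    "^- 94.154.96.7                   2   6    17     5    +30ms[  +30ms] +/-  148ms",
]

def state(line):
    """Map the second character of a chronyc sources line to its meaning."""
    return {"?": "unselectable", "*": "selected", "-": "not selected"}.get(line[1], "unknown")

print([state(l) for l in lines])
```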
[root@ip-172-31-4-206 ~]# contrail-status 
Pod      Service      Original Name           Original Version  State    Id            Status         
         rsyslogd                             nightly-master    running  5fc76e57c156  Up 16 minutes  
vrouter  agent        contrail-vrouter-agent  nightly-master    running  bce023d8e6e0  Up 5 minutes   
vrouter  nodemgr      contrail-nodemgr        nightly-master    running  9439a304cbcf  Up 5 minutes   
vrouter  provisioner  contrail-provisioner    nightly-master    running  1531b1403e49  Up 5 minutes   
WARNING: container with original name '' have Pod or Service empty. Pod: '' / Service: 'rsyslogd'. Please pass NODE_TYPE with pod name to container's env
vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active
[root@ip-172-31-4-206 ~]#

From "ITPUB blog", link: http://blog.itpub.net/69957171/viewspace-2723241/. Please credit the source when republishing.
