Cloud Network Performance Testing Procedure
Overview: A few friends working in the cloud wanted to measure VPC network performance, so I wrote some DPDK code and ran an experiment on Alibaba Cloud. The same procedure applies to other clouds.
Install the required packages
Log in as root and update the package repositories.
# Back up the original repo configuration files
mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
# Overwrite with the Alibaba Cloud mirror
wget -O /etc/yum.repos.d/CentOS-Base.repo
yum install -y
sed -i 's|^#baseurl=|baseurl=|' /etc/yum.repos.d/epel*
sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
sudo dnf config-manager --set-enabled PowerTools
yum makecache
yum update
yum groupinstall "Development tools"
yum install gcc-gfortran kernel-modules-extra tcl tk tcsh terminator tmux kernel-rpm-macros elfutils-libelf-devel libnl3-devel meson createrepo numactl-devel
pip3 install pyelftools
Enable IOMMU
sudo vi /etc/default/grub
# Add "intel_iommu=on iommu=pt" to the GRUB_CMDLINE_LINUX line
# Save and exit
Then regenerate the GRUB configuration and reboot the system.
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
sudo reboot
Install DPDK
On CentOS you need to add the /usr/local paths, mainly LD_LIBRARY_PATH, PATH, PKG_CONFIG_PATH, and sudo's secure_path.
sudo vi /etc/ld.so.conf.d/dpdk.conf
# Add the following path, then save and exit
/usr/local/lib64
sudo ldconfig

vim ~/.bashrc
# Add the following paths
export PATH=/usr/local/bin:$PATH
export PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig:${PKG_CONFIG_PATH}
# Save, then source it
source ~/.bashrc

sudo vim /etc/sudoers
# Add /usr/local/bin to secure_path
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
Then download and extract DPDK, build it, and install it.
wget
tar xf dpdk-21.05.tar.xz
cd dpdk-21.05
meson build -D examples=all
cd build
ninja
sudo ninja install
sudo ldconfig
Configure hugepages and bind the interface
dpdk-hugepages.py --setup 4G
modprobe vfio-pci
dpdk-devbind.py -s

Network devices using kernel driver
===================================
0000:00:05.0 'Virtio network device 1000' if=eth0 drv=virtio-pci unused=vfio-pci *Active*
0000:00:06.0 'Virtio network device 1000' if=eth1 drv=virtio-pci unused=vfio-pci *Active*
Note that a virtual machine environment requires no-IOMMU mode (enable_unsafe_noiommu_mode).
ifconfig eth1 down
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
dpdk-devbind.py -b vfio-pci 0000:00:06.0
Verify the binding:
dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
0000:00:06.0 'Virtio network device 1000' drv=vfio-pci unused=

Network devices using kernel driver
===================================
0000:00:05.0 'Virtio network device 1000' if=eth0 drv=virtio-pci unused=vfio-pci *Active*
Check the interface's feature support
Download the code
cd ~
wget
unzip main.zip
cd learn_dpdk-main/
Build
cd 01_port_init/devinfo/
make clean; make
Check what the interface supports
./build/devinfo
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:00:05.0 cannot be used
EAL: Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:06.0 (socket 0)
EAL: Using IOMMU type 8 (No-IOMMU)
TELEMETRY: No legacy callbacks, legacy socket not created
*****************************************
number of available port: 1
=========================================
port: 0   Driver: net_virtio   Link down
MAC address: 00:16:3E:25:3F:0A   PCIe: 0000:00:06.0
Max RX Queue: 12  Desc: 65535
Max TX Queue: 12  Desc: 65535
Offload Capability:
  DEV_RX_OFFLOAD_VLAN_STRIP
  DEV_RX_OFFLOAD_UDP_CKSUM
  DEV_RX_OFFLOAD_TCP_CKSUM
  DEV_RX_OFFLOAD_TCP_LRO
  DEV_RX_OFFLOAD_JUMBO_FRAME
-----------------------------------------
  DEV_TX_OFFLOAD_VLAN_INSERT
  DEV_TX_OFFLOAD_UDP_CKSUM
  DEV_TX_OFFLOAD_TCP_CKSUM
  DEV_TX_OFFLOAD_TCP_TSO
  DEV_TX_OFFLOAD_MULTI_SEGS
=========================================
Speed test
cd ~/learn_dpdk-main/02_send_recv/traffic_gen/
Modify the source and destination addresses in send_pkt.c. Note that on Alibaba Cloud the destination MAC must be ee:ff:ff:ff:ff:ff.
//init mac
struct rte_ether_addr s_addr = {{0x00, 0x16, 0x3e, 0x25, 0x0b, 0xe3}};
struct rte_ether_addr d_addr = {{0xee, 0xff, 0xff, 0xff, 0xff, 0xff}};

//init IP header
rte_be32_t s_ip_addr = string_to_ip("10.66.1.220");
rte_be32_t d_ip_addr = string_to_ip("10.66.1.219");
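send_pkt.c stamps these values into each packet's Ethernet and IPv4 headers before transmission. string_to_ip() is a helper from the sample repo; below is only a minimal sketch of what such a helper and the address fill-in typically look like (fill_addresses() is a hypothetical name, and the struct fields follow DPDK 21.05, where rte_ether_hdr still uses s_addr/d_addr):

#include <stdio.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* "10.66.1.220" -> big-endian 32-bit address */
static rte_be32_t string_to_ip(const char *s)
{
    unsigned int a, b, c, d;
    sscanf(s, "%u.%u.%u.%u", &a, &b, &c, &d);
    return rte_cpu_to_be_32((a << 24) | (b << 16) | (c << 8) | d);
}

/* Write the Ethernet and IPv4 addresses into an already-allocated mbuf. */
static void fill_addresses(struct rte_mbuf *m,
                           const struct rte_ether_addr *src_mac,
                           const struct rte_ether_addr *dst_mac,
                           rte_be32_t src_ip, rte_be32_t dst_ip)
{
    struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
    rte_ether_addr_copy(src_mac, &eth->s_addr);
    rte_ether_addr_copy(dst_mac, &eth->d_addr);
    eth->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);

    struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);
    ip->src_addr = src_ip;
    ip->dst_addr = dst_ip;
}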
Because the virtio interface's feature support is limited, modify common.h:
#define NUM_RX_QUEUE 1
#define NUM_TX_QUEUE 1

static const struct rte_eth_conf port_conf_default = {
    .rxmode = {
        .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
        .mq_mode = ETH_MQ_RX_NONE,
    },
    .txmode = {
        .mq_mode = ETH_MQ_TX_NONE,
    }
};
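For context, this is roughly how the queue counts and port_conf_default above are consumed in a standard DPDK port-init skeleton; the repo's portinit.c may differ in details (the queue depth of 1024 and the name port_init_sketch are assumptions):

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch: configure NUM_RX_QUEUE/NUM_TX_QUEUE queues on one port using
 * port_conf_default from common.h, then start the port. */
static int port_init_sketch(uint16_t port, struct rte_mempool *mbuf_pool)
{
    struct rte_eth_conf conf = port_conf_default;
    int ret = rte_eth_dev_configure(port, NUM_RX_QUEUE, NUM_TX_QUEUE, &conf);
    if (ret != 0)
        return ret;

    for (uint16_t q = 0; q < NUM_RX_QUEUE; q++) {
        ret = rte_eth_rx_queue_setup(port, q, 1024,
                                     rte_eth_dev_socket_id(port),
                                     NULL, mbuf_pool);
        if (ret < 0)
            return ret;
    }
    for (uint16_t q = 0; q < NUM_TX_QUEUE; q++) {
        ret = rte_eth_tx_queue_setup(port, q, 1024,
                                     rte_eth_dev_socket_id(port), NULL);
        if (ret < 0)
            return ret;
    }
    return rte_eth_dev_start(port);
}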
Modify portinit.c to disable RX checksum offload by commenting out the following block:
if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_CHECKSUM) {
    printf("port[%u] support RX cheksum offload.\n", port);
    port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
}
The measured rate is about 3.3 Mpps, close to the 4 Mpps advertised for this instance type.
[root@iZuf64vmgrtj12kczyslhdZ traffic_gen]# ./build/run
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:00:05.0 cannot be used
EAL: Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:06.0 (socket 0)
EAL: Using IOMMU type 8 (No-IOMMU)
TELEMETRY: No legacy callbacks, legacy socket not created
initializing port 0...
port[0] support TX UDP checksum offload.
port[0] support TX TCP checksum offload.
Port[0] MAC: 00:16:3e:25:0b:e3
Core 1 doing RX dequeue.
Core 2 doing packet enqueue.
RX-Queue[0] PPS: 3280464
RX-Queue[0] PPS: 3277792
RX-Queue[0] PPS: 3303116
RX-Queue[0] PPS: 3307443
RX-Queue[0] PPS: 3296451
RX-Queue[0] PPS: 3294396
RX-Queue[0] PPS: 3297737
RX-Queue[0] PPS: 3290069
RX-Queue[0] PPS: 3279720
RX-Queue[0] PPS: 3285987
RX-Queue[0] PPS: 3279424
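The log shows one core dequeuing RX packets and one core enqueuing TX packets. As a rough illustration of what the enqueue side does (a hypothetical sketch, not the repo's send_pkt.c): allocate a burst of mbufs, fill in the headers, and hand them to the NIC with rte_eth_tx_burst().

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32
#define PKT_LEN    60   /* minimum-size frame without CRC */

/* Hypothetical TX lcore: endlessly enqueue bursts on port 0, queue 0. */
static int tx_loop(void *arg)
{
    struct rte_mempool *pool = arg;
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        if (rte_pktmbuf_alloc_bulk(pool, bufs, BURST_SIZE) != 0)
            continue;   /* pool temporarily empty */

        for (int i = 0; i < BURST_SIZE; i++) {
            rte_pktmbuf_append(bufs[i], PKT_LEN);
            /* fill Ethernet/IPv4/UDP headers here,
             * e.g. with fill_addresses() from the earlier sketch */
        }

        uint16_t sent = rte_eth_tx_burst(0, 0, bufs, BURST_SIZE);
        for (uint16_t i = sent; i < BURST_SIZE; i++)
            rte_pktmbuf_free(bufs[i]);   /* free what the NIC rejected */
    }
    return 0;
}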
Then change both RX and TX in common.h to 4 queues (one thread per queue):
#define NUM_RX_QUEUE 4
#define NUM_TX_QUEUE 4
The aggregate result now matches the advertised 4 Mpps (the four per-queue rates below sum to roughly 4.6 Mpps).
RX-Queue[0] PPS: 578918
RX-Queue[1] PPS: 866823
RX-Queue[2] PPS: 2288950
RX-Queue[3] PPS: 865335
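Each per-queue counter above comes from one RX lcore polling its own queue and printing a packets-per-second total once a second. A minimal sketch of such a loop (hypothetical names, standard DPDK timer and RX APIs):

#include <inttypes.h>
#include <stdio.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Hypothetical RX lcore: poll one queue of port 0 and report PPS. */
static int rx_loop(void *arg)
{
    uint16_t queue = (uint16_t)(uintptr_t)arg;
    struct rte_mbuf *bufs[BURST_SIZE];
    uint64_t pkts = 0;
    const uint64_t hz = rte_get_timer_hz();
    uint64_t next_report = rte_get_timer_cycles() + hz;

    for (;;) {
        uint16_t nb = rte_eth_rx_burst(0, queue, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++)
            rte_pktmbuf_free(bufs[i]);
        pkts += nb;

        if (rte_get_timer_cycles() >= next_report) {
            printf("RX-Queue[%u] PPS: %" PRIu64 "\n", queue, pkts);
            pkts = 0;
            next_report += hz;
        }
    }
    return 0;
}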
CPU Info
[root@iZuf64vmgrtj12kczyslhdZ traffic_gen]# cat /proc/cpuinfo | grep Xeon
model name      : Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz
(the same line is repeated 24 times, once per vCPU)
From the ITPUB blog: http://blog.itpub.net/69955379/viewspace-2777433/ (please credit the source when reposting).