Bonding support was added to the Linux kernel in 2.4 and later.
The general steps are:
1. Load the bonding module into the kernel.
2. Edit the configuration of the NICs to be bonded and remove their address settings.
3. Add the bond device and configure its address and other settings.
4. Restart the network.
5. Configure the corresponding support on the switch.
For details, see the kernel documentation Documentation/networking/bonding.txt
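A minimal command-level sketch of these steps (the interface names eth0/eth1 and the address 192.168.1.110/24 are illustrative assumptions; the persistent, distribution-specific configuration is shown in the examples below):
- sudo modprobe bonding mode=1 miimon=100                     # step 1: load the module (this creates bond0)
- sudo ifconfig eth0 0.0.0.0 down                             # step 2: clear the slave addresses
- sudo ifconfig eth1 0.0.0.0 down
- sudo ifconfig bond0 192.168.1.110 netmask 255.255.255.0 up  # step 3: configure the bond device
- sudo ifenslave bond0 eth0 eth1                              # attach both slaves to the bond
- cat /proc/net/bonding/bond0                                 # verify the bond and its slaves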
Reference examples:
Binding two NICs on Linux to a single IP address essentially virtualizes the two NICs into one, sharing the same IP address, so that we get better and faster service. The technology has long existed at Sun and Cisco, where it is called Trunking and EtherChannel respectively; the Linux 2.4.x kernel adopted the same idea under the name bonding.
1. How bonding works:
Understanding bonding starts with the NIC's promiscuous (promisc) mode. Normally a NIC only accepts Ethernet frames whose destination hardware address (MAC address) is its own MAC, and filters out all other frames to lighten the load on the driver. A NIC also supports another mode, called promiscuous mode, in which it receives every frame on the network; tcpdump, for example, runs in this mode. Bonding runs in this mode too, and in addition the driver changes the MAC addresses of both NICs to be identical so that they accept frames for that specific MAC. The matching frames are then handed to the bond driver for processing.
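On a machine with a running bond you can see this shared MAC address directly; a quick check (device names assumed, and the address shown is taken from the sample output later in this document):
- ifconfig bond0 | grep HWaddr      # e.g. HWaddr 00:0E:7F:25:D9:8B
- ifconfig eth0  | grep HWaddr      # reports the same MAC as bond0
- ifconfig eth1  | grep HWaddr      # reports the same MAC as bond0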
2. Bonding module operating modes:
bonding mode=1 miimon=100. The miimon parameter is used for link monitoring: with miimon=100 the system checks the link state every 100 ms and fails over to the other link if one goes down. The mode value selects the operating mode; there are seven modes, 0 through 6, of which 0, 1 and 6 are the most commonly used.
mode=0: load-balancing mode with automatic failover, but it requires switch support and configuration.
mode=1: active-backup mode; if one link fails, the other takes over automatically.
mode=6: load-balancing mode with automatic failover that requires no switch support or configuration.
mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.
mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites: 1. Ethtool support in the base drivers for retrieving the speed and duplex of each slave. 2. A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.
mode=5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.
mode=6 (balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
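As a small illustration of the above, a sketch (bond0 is assumed, and the MAC octets in the hash example are taken from the sample output later in this document): the mode can be given by name instead of number, and the balance-xor hash is simple enough to work out by hand.
- sudo modprobe bonding mode=balance-alb miimon=100   # equivalent to mode=6
- cat /sys/class/net/bond0/bonding/mode               # prints e.g. "balance-alb 6"
- # balance-xor: (last source MAC octet XOR last destination MAC octet) modulo slave count
- echo $(( (0x8b ^ 0x63) % 2 ))                       # -> 0, so this peer is always served by slave 0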
3. Installation and configuration on Debian
3.1 Install ifenslave
- apt-get install ifenslave
3.2 Load the bonding module automatically at boot
- sudo sh -c "echo bonding mode=1 miimon=100 >> /etc/modules"
3.3 NIC configuration
- sudo vi /etc/network/interfaces
- # Example contents:
- auto lo
- iface lo inet loopback
- auto bond0
- iface bond0 inet static
- address 192.168.1.110
- netmask 255.255.255.0
- gateway 192.168.1.1
- dns-nameservers 192.168.1.1
- post-up ifenslave bond0 eth0 eth1
- pre-down ifenslave -d bond0 eth0 eth1
3.4 Restart networking to complete the configuration
- # If you have not rebooted since installing ifenslave, load the bonding module manually.
- sudo modprobe bonding mode=1 miimon=100
- # Restart networking
- sudo /etc/init.d/networking restart
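After the restart you can verify that the bond came up correctly; a quick check (assuming the device is named bond0 as above):
- cat /proc/net/bonding/bond0    # should show the bonding mode, MII status and both slaves
- ifconfig bond0                 # should carry the 192.168.1.110 address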
4. Installation and configuration on Red Hat
4.1 Install ifenslave
Red Hat usually has it installed by default; install it first if it is missing.
- yum install ifenslave
4.2 Load the bonding module automatically at boot
- sudo sh -c "echo alias bond0 bonding >> /etc/modprobe.conf"
- sudo sh -c "echo options bond0 miimon=100 mode=1 >> /etc/modprobe.conf"
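Note that on RHEL 6 and later there is no /etc/modprobe.conf; the same two lines go into a file under /etc/modprobe.d/ instead (the file name below is an assumption):
- sudo sh -c "echo alias bond0 bonding > /etc/modprobe.d/bonding.conf"
- sudo sh -c "echo options bond0 miimon=100 mode=1 >> /etc/modprobe.d/bonding.conf"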
4.3 NIC configuration
- sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
- # eth0 configuration:
- DEVICE=eth0
- ONBOOT=yes
- BOOTPROTO=none
- sudo vi /etc/sysconfig/network-scripts/ifcfg-eth1
- # eth1 configuration:
- DEVICE=eth1
- ONBOOT=yes
- BOOTPROTO=none
- sudo vi /etc/sysconfig/network-scripts/ifcfg-bond0
- # bond0 configuration:
- DEVICE=bond0
- ONBOOT=yes
- BOOTPROTO=static
- IPADDR=192.168.1.110
- NETMASK=255.255.255.0
- GATEWAY=192.168.1.1
- # (the slaves are attached by the ifenslave command added to rc.local below)
- TYPE=Ethernet
- # Bind both NICs at boot time
- sudo sh -c "echo ifenslave bond0 eth0 eth1 >> /etc/rc.local"
4.4 Restart networking to complete the configuration
- # If you have not rebooted since installing ifenslave, load the bonding module manually.
- sudo modprobe bonding mode=1 miimon=100
- # Restart networking
- sudo /etc/init.d/network restart
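Once the network is back up you can check which slave is currently carrying traffic in mode=1; a quick check (bond0 assumed):
- grep "Currently Active Slave" /proc/net/bonding/bond0
- cat /sys/class/net/bond0/bonding/active_slave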
5. Switch EtherChannel configuration
When using mode=0, the switch must be configured with EtherChannel support.
- Switch# configure terminal
- Switch(config)# interface range fastethernet 0/1 - 2
- Switch(config-if-range)# channel-group 1 mode on
- Switch(config-if-range)# end
- Switch# copy run start
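To confirm the channel group on the switch side (a sketch, assuming Cisco IOS as in the example above):
- Switch# show etherchannel summary
- Switch# show etherchannel 1 port-channel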
References
1 http://sapling.me/unixlinux/linux_two_nic_one_ip_bonding.html
2 http://www.linux-corner.info/bonding.html
Linux NIC bonding
Keeping servers highly available is an important factor in enterprise IT environments, and one of the most important aspects is the high availability of the server's network connection. NIC bonding helps guarantee this high availability and provides further benefits that improve network performance.
The dual-NIC bonding described here virtualizes two NICs into one: the aggregated device appears as a single Ethernet interface, or, put simply, the two NICs share the same IP address and work in parallel as one aggregated logical link. The technology has long existed at Sun and Cisco under the names Trunking and EtherChannel, and the Linux 2.4.x kernel adopted it under the name bonding. Bonding was first applied in Beowulf clusters, where it was designed to speed up data transfer between cluster nodes. As for how bonding works, it starts with the NIC's promiscuous (promisc) mode. Normally a NIC only accepts Ethernet frames whose destination hardware address (MAC address) is its own MAC, and filters out everything else to lighten the load on the driver. A NIC also supports promiscuous mode, in which it receives every frame on the network; tcpdump, for instance, runs in this mode. Bonding also runs in this mode, and in addition the driver changes the MAC addresses of both NICs to be identical so that they accept frames for that specific MAC; the matching frames are then handed to the bond driver for processing.
After all this theory, the configuration itself is simple: just four steps.
The operating system used in this example is Red Hat Enterprise Linux 3.0.
Prerequisites for bonding: the NICs should use the same chipset model, and each NIC should have its own independent BIOS chip.
1. Edit the virtual network interface configuration file and assign its IP address
- vi /etc/sysconfig/network-scripts/ifcfg-bond0
- [root@rhas-13 root]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 ifcfg-bond0
- Change the first line to DEVICE=bond0
- # cat ifcfg-bond0
- DEVICE=bond0
- BOOTPROTO=static
- IPADDR=172.31.0.13
- NETMASK=255.255.252.0
- BROADCAST=172.31.3.255
- ONBOOT=yes
- TYPE=Ethernet
Note: do not assign an IP address, netmask, or NIC ID to the individual NICs; put that information into the virtual (bonding) interface instead.
2. Edit the configuration files of the physical NICs:
- [root@rhas-13 network-scripts]# cat ifcfg-eth0
- DEVICE=eth0
- ONBOOT=yes
- BOOTPROTO=dhcp
- [root@rhas-13 network-scripts]# cat ifcfg-eth1
- DEVICE=eth1
- ONBOOT=yes
- BOOTPROTO=dhcp
3. Edit /etc/modules.conf so the system loads the bonding module at boot and exposes bond0 as the virtual network interface; add the following two lines:
- # vi /etc/modules.conf
- alias bond0 bonding
- options bond0 miimon=100 mode=1
Note: miimon is used for link monitoring. For example, with miimon=100 the system checks the link state every 100 ms and switches to the other link if one goes down. The mode value selects the operating mode; this driver has four modes, 0 through 3 (newer drivers add modes up to 6, as listed earlier), of which 0 and 1 are the most commonly used.
- mode=0 means load balancing (round-robin): both NICs carry traffic.
- mode=1 means fault-tolerance (active-backup): a primary/backup arrangement in which, by default, only one NIC is active and the other acts as a standby.
bonding can only monitor the link from the host to the switch. If the switch's own uplink goes down while the switch itself is fine, bonding considers the link healthy and keeps using it.
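To catch failures beyond the local link, the bonding driver also offers ARP monitoring, which probes a remote IP instead of relying on MII link state; a sketch (the target address is an assumption, and arp_interval/arp_ip_target are used in place of miimon):
- options bond0 mode=1 arp_interval=1000 arp_ip_target=172.31.0.1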
4. Edit /etc/rc.d/rc.local and add the following two lines:
- ifenslave bond0 eth0 eth1
- route add -net 172.31.3.254 netmask 255.255.255.0 bond0
At this point the configuration is complete; reboot the machine.
If you see the following messages during boot, the configuration succeeded:
................
Bringing up interface bond0 OK
Bringing up interface eth0 OK
Bringing up interface eth1 OK
................
Below we look at the behaviour when mode is 1 and when it is 0.
mode=1
In active-backup mode, eth1 acts as the backup NIC and is marked NOARP:
- [root@rhas-13 network-scripts]# ifconfig    # verify the NIC configuration
- bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
- RX packets:18495 errors:0 dropped:0 overruns:0 frame:0
- TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:1587253 (1.5 Mb) TX bytes:89642 (87.5 Kb)
- eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:9572 errors:0 dropped:0 overruns:0 frame:0
- TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:833514 (813.9 Kb) TX bytes:89642 (87.5 Kb)
- Interrupt:11
- eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING NOARP SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:8923 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:753739 (736.0 Kb) TX bytes:0 (0.0 b)
- Interrupt:15
In other words, in active-backup mode, when one network interface fails (for example, the primary switch loses power), there is no network interruption: the system works through the NICs in the order specified in /etc/rc.d/rc.local, the machine keeps serving, and failover protection is achieved.
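A simple way to exercise this failover on a test machine (a sketch; do not run on a production box, and the gateway address is an assumption):
- ifconfig eth0 down                  # or unplug the active slave's cable
- cat /proc/net/bonding/bond0         # check which slave is now active
- ping -c 3 172.31.0.1                # traffic should continue without interruption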
mode=0
In load-balancing mode the bond can provide twice the bandwidth. Let's look at the NIC configuration:
- [root@rhas-13 root]# ifconfig
- bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
- RX packets:2817 errors:0 dropped:0 overruns:0 frame:0
- TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:226957 (221.6 Kb) TX bytes:15266 (14.9 Kb)
- eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:1406 errors:0 dropped:0 overruns:0 frame:0
- TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:113967 (111.2 Kb) TX bytes:7268 (7.0 Kb)
- Interrupt:11
- eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:1411 errors:0 dropped:0 overruns:0 frame:0
- TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:112990 (110.3 Kb) TX bytes:7998 (7.8 Kb)
- Interrupt:15
In this mode, if one NIC fails, only the server's outbound bandwidth drops; network connectivity is not affected.
You can get a detailed picture of the bonding state by inspecting bond0's status:
- [root@rhas-13 bonding]# cat /proc/net/bonding/bond0
- bonding.c:v2.4.1 (September 15, 2003)
- Bonding Mode: load balancing (round-robin)
- MII Status: up
- MII Polling Interval (ms): 0
- Up Delay (ms): 0
- Down Delay (ms): 0
- Multicast Mode: all slaves
- Slave Interface: eth1
- MII Status: up
- Link Failure Count: 0
- Permanent HW addr: 00:0e:7f:25:d9:8a
- Slave Interface: eth0
- MII Status: up
- Link Failure Count: 0
- Permanent HW addr: 00:0e:7f:25:d9:8b
Reference:
/usr/share/doc/kernel-doc-2.4.21/networking/bonding.txt
Finally, today I implemented NIC bonding (binding both NICs so that they work as a single device). Bonding is a Linux kernel feature that allows aggregating multiple network interfaces (such as eth0 and eth1) into a single virtual link such as bond0. The idea is pretty simple: get higher data rates as well as link failover. The following instructions were tested on:
- RHEL v4 / 5 / 6 amd64
- CentOS v5 / 6 amd64
- Fedora Linux 13 amd64 and up.
- 2 x PCI-e Gigabit Ethernet NICs with Jumbo Frames (MTU 9000)
- Hardware RAID-10 w/ SAS 15k enterprise grade hard disks.
- Gigabit switch with Jumbo Frame
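If you run jumbo frames as in the test setup above, the MTU is normally set on the bond interface and propagated to the slaves; a sketch (assumes MTU 9000 is supported end to end):
- MTU=9000                          # add this line to ifcfg-bond0
- ip link set dev bond0 mtu 9000    # or change it at runtime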
Say Hello To the Bonding Driver
This server acts as a heavy-duty FTP and NFS file server. Each night a Perl script transfers lots of data from this box to a backup server, so the network is set up on a switch using dual network cards. I am using Red Hat Enterprise Linux version 4.0, but the instructions should work on RHEL 5 and 6 too.
Linux allows binding of multiple network interfaces into a single channel/NIC using a special kernel module called bonding. According to the official bonding documentation:
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
Step #1: Create a bond0 Configuration File
Red Hat Enterprise Linux (and its clones such as CentOS) stores network configuration in the /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file as follows:
- # vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines:
- DEVICE=bond0
- IPADDR=192.168.1.20
- NETWORK=192.168.1.0
- NETMASK=255.255.255.0
- USERCTL=no
- BOOTPROTO=none
- ONBOOT=yes
You need to replace IP address with your actual setup. Save and close the file.
Step #2: Modify the eth0 and eth1 config files
Open both configuration files using a text editor such as vi/vim, and make sure the file reads as follows for the eth0 interface:
- # vi /etc/sysconfig/network-scripts/ifcfg-eth0
- DEVICE=eth0
- USERCTL=no
- ONBOOT=yes
- MASTER=bond0
- SLAVE=yes
- BOOTPROTO=none
Open eth1 configuration file using vi text editor, enter:
- # vi /etc/sysconfig/network-scripts/ifcfg-eth1
- DEVICE=eth1
- USERCTL=no
- ONBOOT=yes
- MASTER=bond0
- SLAVE=yes
- BOOTPROTO=none
Save and close the file.
Step #3: Load the bond driver/module
Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel module configuration file:
- # vi /etc/modprobe.conf
- alias bond0 bonding
- options bond0 mode=balance-alb miimon=100
Load the bonding module, restart networking, and check the bond status:
- # modprobe bonding
- # service network restart
- # cat /proc/net/bonding/bond0
- Bonding Mode: load balancing (round-robin)
- MII Status: up
- MII Polling Interval (ms): 100
- Up Delay (ms): 200
- Down Delay (ms): 200
- Slave Interface: eth0
- MII Status: up
- Link Failure Count: 0
- Permanent HW addr: 00:0c:29:c6:be:59
- Slave Interface: eth1
- MII Status: up
- Link Failure Count: 0
- Permanent HW addr: 00:0c:29:c6:be:63
To list all network interfaces, enter:
- # ifconfig
- bond0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
- inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
- inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
- UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
- RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
- TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:250825 (244.9 KiB) TX bytes:244683 (238.9 KiB)
- eth0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
- inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
- inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
- TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:251161 (245.2 KiB) TX bytes:180289 (176.0 KiB)
- Interrupt:11 Base address:0x1400
- eth1 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
- inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
- inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:4 errors:0 dropped:0 overruns:0 frame:0
- TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:258 (258.0 b) TX bytes:66516 (64.9 KiB)
- Interrupt:10 Base address:0x1480
Read the official bonding howto, which covers the following additional topics:
- VLAN Configuration
- Cisco switch related configuration
- Advanced routing and troubleshooting
- Red Hat (RHEL/CentOS) Linux Bond or Team Multiple Network Interfaces (NIC) into a Single Interface
- Debian / Ubuntu Linux Configure Bonding [ Teaming / Aggregating NIC ]
References
- http://www.chinaunix.net/jh/4/371049.html
- http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html
- http://os.51cto.com/art/200911/165875.htm