Configuring Dual-NIC Bonding on Linux for Load Balancing and Failover
Applies to RHEL, CentOS, and Oracle Linux.
Kernels 2.4.12 and later all provide the bonding module; earlier kernels can add it with a patch.
English name: Channel Bonding Interfaces
1. Overview
NIC bonding on Linux combines two physical NICs into one virtual NIC. The combined device appears as a single Ethernet interface; in plain terms, the two NICs share one IP address and work in parallel as a single aggregated logical link. The technique itself is not new: Sun and Cisco have long offered it as Trunking and EtherChannel, and the Linux 2.4.x kernel adopted the same idea under the name bonding.
Bonding was first used in Beowulf clusters, where it was designed to speed up data transfer between cluster nodes. To understand how bonding works, start with the NIC's promiscuous (promisc) mode. Normally a NIC accepts only Ethernet frames whose destination hardware address (MAC address) is its own, and filters out all other frames to lighten the driver's load. But a NIC also supports promiscuous mode, in which it accepts every frame on the wire; tcpdump, for example, runs in this mode. Bonding runs in this mode too, and in addition the driver rewrites the MAC addresses so that both NICs share the same address and accept frames destined for that MAC; matching frames are then handed to the bond driver for processing.
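As background, promiscuous mode can be inspected and toggled with standard tools; a hedged sketch (the interface name eth0 is just an example, and the commands require root and a real interface):

```shell
# Enable promiscuous mode on eth0 -- the NIC now accepts all frames,
# the same state tcpdump puts an interface in while capturing.
ip link set dev eth0 promisc on
# The PROMISC flag appears in the interface flags while it is enabled.
ip link show dev eth0
# Disable it again.
ip link set dev eth0 promisc off
```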
2. RHEL 5 Configuration
2.1 Back up the NIC configuration files
[root@redhat5 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 ~/ifcfg-eth0.bak
[root@redhat5 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth1 ~/ifcfg-eth1.bak
2.2 Configure the bond
Create the virtual interface configuration file, ifcfg-bond0, with the bond's IP address:
[root@redhat5 ~]#vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.11 # set the IP address, netmask, and gateway to bind as needed
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100" # bonding mode and options; these can also be set in the module configuration file
miimon enables link monitoring: with miimon=100, the driver checks the link state every 100 ms and switches to the other link if one goes down; mode 1 is active-backup.
Configure the physical NICs, eth0 and eth1 (one file per NIC, substituting <N> below):
[root@redhat5 ~]#vi /etc/sysconfig/network-scripts/ifcfg-eth<N>
DEVICE=eth<N>
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
2.3 Load the module so the system supports bonding
Add the following to /etc/modprobe.conf:
alias bond0 bonding
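If BONDING_OPTS is not used in ifcfg-bond0, the same settings can instead be passed as module options in /etc/modprobe.conf; a sketch of the full fragment, with values matching the example above:

```
alias bond0 bonding
options bond0 mode=1 miimon=100
```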
2.4 Restart the network service
[root@redhat5 ~]#service network restart
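After the restart, the bonding driver reports its state under /proc/net/bonding/bond0, and the "Currently Active Slave" line shows which NIC is in use. The here-string below is an abridged sample of that file, so the extraction is visible without real hardware; on a live system run the awk command directly against /proc/net/bonding/bond0:

```shell
# Abridged sample of /proc/net/bonding/bond0 for an active-backup bond.
sample='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up'
# Extract the active slave name from the status text.
printf '%s\n' "$sample" | awk -F': ' '/Currently Active Slave/ {print $2}'
```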
3. RHEL 6 Configuration
3.1 Back up the NIC configuration files
[root@redhat6 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 ~/ifcfg-eth0.bak
[root@redhat6 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth1 ~/ifcfg-eth1.bak
3.2 Configure the bond
Create the virtual interface configuration file, ifcfg-bond0, with the bond's IP address:
[root@redhat6 ~]#vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.111
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="mode=1 miimon=100"
Configure the physical NICs, eth0 and eth1 (one file per NIC, substituting <N> below):
[root@redhat6 ~]#vi /etc/sysconfig/network-scripts/ifcfg-eth<N>
DEVICE=eth<N>
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no
NM_CONTROLLED=no: NetworkManager is not permitted to configure this device.
USERCTL=no: non-root users are not allowed to control this device.
BOOTPROTO=none: no boot-time protocol (DHCP/BOOTP) is used.
3.3 Restart the network service
[root@redhat6 ~]#service network restart
4. Test the Network
Unplug one network cable and watch the ping output; once you have confirmed connectivity is unaffected, plug the cable back in, wait for the link to recover, then unplug the other cable and observe the bonding failover.
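A small loop makes the failover visible during the cable-pull test. It prints the currently active slave and whether the gateway answers, once per second (the gateway address 192.168.1.1 is an assumption for this example; adjust it to your network):

```shell
# Watch failover in real time; press Ctrl-C to stop.
# Assumes the bond0 device configured in the sections above.
while :; do
  active=$(awk -F': ' '/Currently Active Slave/ {print $2}' /proc/net/bonding/bond0)
  if ping -c1 -W1 192.168.1.1 >/dev/null 2>&1; then gw=up; else gw=DOWN; fi
  echo "$(date +%T) active=$active gateway=$gw"
  sleep 1
done
```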
5. The Seven Bonding Modes
- define BOND_MODE_ROUNDROBIN 0 (balance-rr) load-balancing mode
- define BOND_MODE_ACTIVEBACKUP 1 (active-backup) fault-tolerance mode
- define BOND_MODE_XOR 2 (balance-xor) requires switch support
- define BOND_MODE_BROADCAST 3 (broadcast)
- define BOND_MODE_8023AD 4 (IEEE 802.3ad dynamic link aggregation) requires switch support
- define BOND_MODE_TLB 5 adaptive transmit load balancing
- define BOND_MODE_ALB 6 adaptive load balancing
mode takes seven values, 0-6; modes 0, 1, and 6 are the most commonly used.
- mode=0: load balancing with automatic failover, but requires switch support and configuration.
- mode=1: active-backup; if one link goes down, another link takes over automatically.
- mode=6: load balancing with automatic failover, requiring no switch support or configuration.
Notes on the bond mode parameter under Linux (mode=4 is recommended when the switch supports LACP, as it offers better performance and stability):
0 - Round-robin: traffic is spread across the bonded NICs in turn.
1 - Active-backup: only one NIC is in use at a time, with the rest on standby; recommended when the load does not exceed a single NIC's bandwidth.
2 - Hash-based load balancing: traffic is split according to a hash over the protocol layers selected by xmit_hash_policy, so that traffic from a given source tends to stay on the same NIC.
3 - Broadcast: every bonded NIC receives the same data; used only for very special requirements, such as sending identical data to two switches that have no link between them.
4 - 802.3ad load balancing: requires 802.3ad support on the switch as well; when both server and switch support it, the aggregate bandwidth can in theory double (e.g. from 1 Gbps to 2 Gbps).
5 - Adaptive transmit load balancing: outgoing traffic is distributed across all bonded NICs, while incoming traffic is received on a single designated NIC; if that NIC fails, another takes over its role. Requires NICs and drivers whose speed can be read via ethtool.
6 - Adaptive transmit/receive load balancing: builds on mode 5 by also balancing incoming traffic; in addition to the ethtool speed requirement, the driver must support changing the NIC's MAC address while the device is up.
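The driver accepts the mode either by number or by name; the mapping above can be captured in a tiny helper for scripts (names as used by the bonding driver):

```shell
# Translate a numeric bond mode into the driver's mode name.
mode_name() {
  case "$1" in
    0) echo balance-rr ;;
    1) echo active-backup ;;
    2) echo balance-xor ;;
    3) echo broadcast ;;
    4) echo 802.3ad ;;
    5) echo balance-tlb ;;
    6) echo balance-alb ;;
    *) echo unknown; return 1 ;;
  esac
}
mode_name 1
```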
Modes: the mode number or the mode name may be assigned when selecting the mode in the kernel module options.
0 or balance-rr
Round-robin policy: Transmit packets in sequential
order from the first available slave through the
last. This mode provides load balancing and fault
tolerance. (This is the default mode if none is specified.)
1 or active-backup
Active-backup policy: Only one slave in the bond is
active. A different slave becomes active if, and only
if, the active slave fails. The bond's MAC address is
externally visible on only one port (network adapter)
to avoid confusing the switch.
In bonding version 2.6.2 or later, when a failover
occurs in active-backup mode, bonding will issue one
or more gratuitous ARPs on the newly active slave.
One gratuitous ARP is issued for the bonding master
interface and each VLAN interface configured above
it, provided that the interface has at least one IP
address configured. Gratuitous ARPs issued for VLAN
interfaces are tagged with the appropriate VLAN id.
This mode provides fault tolerance. The primary
option, documented below, affects the behavior of this
mode.
2 or balance-xor
XOR policy: Transmit based on the selected transmit
hash policy. The default policy is a simple [(source
MAC address XOR'd with destination MAC address) modulo
slave count]. Alternate transmit policies may be
selected via the xmit_hash_policy option, described
below.
This mode provides load balancing and fault tolerance.
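The default XOR policy above is easy to work out by hand. A simplified sketch, XOR-ing only the last octet of each MAC address (the real driver's layer-2 policy hashes the hardware addresses; one octet is enough to see how a flow maps to a slave):

```shell
# Pick a slave index with a simplified layer-2 XOR policy:
#   (last octet of src MAC XOR last octet of dst MAC) mod slave_count
xor_slave() {
  src=$((16#${1##*:}))   # last octet of the source MAC, as a number
  dst=$((16#${2##*:}))   # last octet of the destination MAC, as a number
  echo $(( (src ^ dst) % $3 ))
}
# The same MAC pair always maps to the same slave.
xor_slave 00:11:22:33:44:55 66:77:88:99:aa:bb 2
```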
3 or broadcast
Broadcast policy: transmits everything on all slave
interfaces. This mode provides fault tolerance.
4 or 802.3ad
IEEE 802.3ad Dynamic link aggregation. Creates
aggregation groups that share the same speed and
duplex settings. Utilizes all slaves in the active
aggregator according to the 802.3ad specification.
Slave selection for outgoing traffic is done according
to the transmit hash policy, which may be changed from
the default simple XOR policy via the xmit_hash_policy
option, documented below. Note that not all transmit
policies may be 802.3ad compliant, particularly in
regards to the packet mis-ordering requirements of
section 43.2.4 of the 802.3ad standard. Differing
peer implementations will have varying tolerances for
noncompliance.
Prerequisites:
1. Ethtool support in the base drivers for retrieving
the speed and duplex of each slave.
2. A switch that supports IEEE 802.3ad Dynamic link
aggregation.
Most switches will require some type of configuration
to enable 802.3ad mode.
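When both prerequisites are met, mode 4 is enabled the same way as mode 1 in the sections above; a sketch of the ifcfg-bond0 option line (lacp_rate and xmit_hash_policy are standard bonding module options, and the values shown are common choices, not the only valid ones):

```
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"
```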
5 or balance-tlb
Adaptive transmit load balancing: channel bonding that
does not require any special switch support. The
outgoing traffic is distributed according to the
current load (computed relative to the speed) on each
slave. Incoming traffic is received by the current
slave. If the receiving slave fails, another slave
takes over the MAC address of the failed receiving
slave.
Prerequisite:
Ethtool support in the base drivers for retrieving the
speed of each slave.
6 or balance-alb
Adaptive load balancing: includes balance-tlb plus
receive load balancing (rlb) for IPV4 traffic, and
does not require any special switch support. The
receive load balancing is achieved by ARP negotiation.
The bonding driver intercepts the ARP Replies sent by
the local system on their way out and overwrites the
source hardware address with the unique hardware
address of one of the slaves in the bond such that
different peers use different hardware addresses for
the server.
Receive traffic from connections created by the server
is also balanced. When the local system sends an ARP
Request the bonding driver copies and saves the peer's
IP information from the ARP packet. When the ARP
Reply arrives from the peer, its hardware address is
retrieved and the bonding driver initiates an ARP
reply to this peer assigning it to one of the slaves
in the bond. A problematic outcome of using ARP
negotiation for balancing is that each time that an
ARP request is broadcast it uses the hardware address
of the bond. Hence, peers learn the hardware address
of the bond and the balancing of receive traffic
collapses to the current slave. This is handled by
sending updates (ARP Replies) to all the peers with
their individually assigned hardware address such that
the traffic is redistributed. Receive traffic is also
redistributed when a new slave is added to the bond
and when an inactive slave is re-activated. The
receive load is distributed sequentially (round robin)
among the group of highest speed slaves in the bond.
When a link is reconnected or a new slave joins the
bond the receive traffic is redistributed among all
active slaves in the bond by initiating ARP Replies
with the selected mac address to each of the
clients. The updelay parameter (detailed below) must
be set to a value equal or greater than the switch's
forwarding delay so that the ARP Replies sent to the
peers will not be blocked by the switch.
Prerequisites:
1. Ethtool support in the base drivers for retrieving
the speed of each slave.
2. Base driver support for setting the hardware
address of a device while it is open. This is
required so that there will always be one slave in the
team using the bond hardware address (the
curr_active_slave) while having a unique hardware
address for each slave in the bond. If the
curr_active_slave fails its hardware address is
swapped with the new curr_active_slave that was
chosen.