Linux Dual-NIC Bonding

Posted by weixin_34391854 on 2015-09-04

The bonding driver was added to the Linux kernel in version 2.4. The general procedure is:

1. Load the bonding module into the kernel.

2. Edit the configuration of the NICs to be bonded and remove their address settings.

3. Add the bond device and configure its address and other settings.

4. Restart the network.

5. Configure the switch to support the bond (if the mode requires it).

For details, see the kernel documentation Documentation/networking/bonding.txt.

Reference example:

Binding two NICs to one IP address on Linux essentially virtualizes two physical NICs into one that shares a single IP address, so that we get better and faster service. This technique has long existed on Sun and Cisco equipment, where it is called Trunking and EtherChannel respectively; the Linux 2.4.x kernel adopted the same idea under the name bonding.

1. How bonding works

To understand bonding we have to start with a NIC's promiscuous (promisc) mode. Normally a NIC only accepts Ethernet frames whose destination hardware address (MAC address) is its own MAC, and filters out all other frames to reduce the load on the driver. A NIC also supports another mode, promiscuous mode, in which it accepts every frame on the network; tcpdump, for example, runs in this mode. Bonding also runs in this mode and, in addition, the driver rewrites the MAC addresses so that both NICs share the same MAC and can accept frames addressed to that MAC. The frames are then handed to the bond driver for processing.
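A quick way to see this shared-MAC behaviour in practice (a minimal check, assuming a bond named bond0 with slaves eth0 and eth1 already configured as in the examples below):

  1. cat /sys/class/net/bond0/address  
  2. cat /sys/class/net/eth0/address  
  3. cat /sys/class/net/eth1/address  
  4. # all three should print the same MAC address once the slaves are enslaved  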

2. Bonding module operating modes

bonding mode=1 miimon=100: miimon is used for link monitoring. For example, with miimon=100 the system checks the link state every 100 ms; if one link goes down, traffic is switched to another. The mode value selects the operating mode; there are seven modes, 0 through 6, of which 0, 1, and 6 are the most commonly used.
mode=0: load-balancing mode with automatic failover, but it requires switch support and configuration.
mode=1: active-backup mode; if one link fails, another link takes over automatically.
mode=6: load-balancing mode with automatic failover that does not require switch support or configuration.
mode=0 (balance-rr)

Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)

Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor)

XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
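A small worked example of that hash (my own illustration, not from the bonding documentation; it assumes two slaves and, like the default layer2 policy, only looks at the last byte of each MAC):

  1. # src MAC ends in 0x5A, dst MAC ends in 0x3C; with 2 slaves:  
  2. printf 'slave index: %d\n' $(( (0x5A ^ 0x3C) % 2 ))   # prints 0, so the first slave is chosen  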

mode=3 (broadcast)

Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)

IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites: 1. ethtool support in the base drivers for retrieving the speed and duplex of each slave; 2. a switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb)

Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
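Before any distribution-specific setup, the mode can also be chosen directly when loading the module. A minimal sketch (the option names are the standard bonding module parameters; the sysfs check assumes a reasonably recent kernel):

  1. sudo modprobe bonding mode=1 miimon=100  
  2. cat /sys/class/net/bond0/bonding/mode   # should report: active-backup 1  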

3. Installation and configuration on Debian

3.1. Install ifenslave

  1. apt-get install ifenslave  

3.2. Load the bonding module automatically at boot

  1. sudo sh -c "echo bonding mode=1 miimon=100 >> /etc/modules"  

3.3. NIC configuration

  1. sudo vi /etc/network/interfaces  
  2. # example contents:  
  3. auto lo  
  4. iface lo inet loopback  
  5.   
  6. auto bond0  
  7. iface bond0 inet static  
  8. address 192.168.1.110  
  9. netmask 255.255.255.0  
  10. gateway 192.168.1.1  
  11. dns-nameservers 192.168.1.1  
  12. post-up ifenslave bond0 eth0 eth1  
  13. pre-down ifenslave -d bond0 eth0 eth1  

3.4. Restart networking to complete the configuration

  1. # If you did not reboot after installing ifenslave, load the bonding module manually.  
  2. sudo modprobe bonding mode=1 miimon=100  
  3. # restart networking  
  4. sudo /etc/init.d/networking restart  
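To confirm that the bond actually came up (a minimal check, using the example addresses above, where 192.168.1.1 is the gateway):

  1. cat /proc/net/bonding/bond0   # mode, MII status, and both slaves should be listed  
  2. ping -c 4 192.168.1.1         # unplug one cable and repeat: connectivity should survive  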

4. Installation and configuration on Red Hat

4.1. Install ifenslave
Red Hat usually ships it by default; install it first if it is missing.

  1. yum install ifenslave  

4.2. Load the bonding module automatically at boot

  1. sudo sh -c "echo alias bond0 bonding >> /etc/modprobe.conf"  
  2. sudo sh -c "echo options bond0 miimon=100 mode=1 >> /etc/modprobe.conf"  

4.3. NIC configuration

  1. sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0  
  2. # eth0 configuration:  
  3. DEVICE=eth0  
  4. ONBOOT=yes  
  5. BOOTPROTO=none  
  6.   
  7. sudo vi /etc/sysconfig/network-scripts/ifcfg-eth1  
  8. # eth1 configuration:  
  9. DEVICE=eth1  
  10. ONBOOT=yes  
  11. BOOTPROTO=none  
  12.   
  13. sudo vi /etc/sysconfig/network-scripts/ifcfg-bond0  
  14. # bond0 configuration:  
  15. DEVICE=bond0  
  16. ONBOOT=yes  
  17. BOOTPROTO=static  
  18. IPADDR=192.168.1.110  
  19. NETMASK=255.255.255.0  
  20. GATEWAY=192.168.1.1  
  21. SLAVE=eth0,eth1  
  22. TYPE=Ethernet  
  23.   
  24. # enslave both NICs to the bond at boot  
  25. sudo sh -c "echo ifenslave bond0 eth0 eth1 >> /etc/rc.local"  

4.4. Restart networking to complete the configuration

  1. # If you did not reboot after installing ifenslave, load the bonding module manually.  
  2. sudo modprobe bonding mode=1 miimon=100  
  3. # restart networking  
  4. sudo /etc/init.d/network restart  
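The same quick verification applies here (a hedged sketch using the example addresses above):

  1. cat /proc/net/bonding/bond0   # eth0 and eth1 should both appear as slaves  
  2. ifconfig bond0                # bond0 should hold 192.168.1.110; eth0/eth1 share its MAC  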

5. Switch EtherChannel configuration

When using mode=0, the switch must be configured to support EtherChannel.

  1. Switch# configure terminal  
  2. Switch(config)# interface range fastethernet 0/1 - 2  
  3. Switch(config-if-range)# channel-group 1 mode on  
  4. Switch(config-if-range)# end  
  5. Switch#copy run start  
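To confirm the channel group on the switch side (a hedged example; exact output differs between IOS versions):

  1. Switch# show etherchannel 1 summary  
  2. Switch# show interfaces port-channel 1  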

References
1 http://sapling.me/unixlinux/linux_two_nic_one_ip_bonding.html
2 http://www.linux-corner.info/bonding.html

Linux NIC Bonding

Simply put, several NICs are combined into one virtual NIC, usually named bond0, bond1, bond2, and so on; the underlying technique is called bonding. The posts below summarize it well, so they are reproduced here.
 
===================================
Dual-NIC bonding on Linux for load balancing and failover protection

Keeping servers highly available is an important factor in enterprise IT environments, and one of the most important aspects is the high availability of the server's network connection. NIC bonding helps guarantee that availability and offers other benefits that improve network performance.

The Linux dual-NIC bonding described here virtualizes two NICs into one; the aggregated device appears as a single Ethernet interface. Put simply, the two NICs share the same IP address and work in parallel, aggregated into one logical link. This technique has long existed on Sun and Cisco equipment, where it is called Trunking and EtherChannel; the Linux 2.4.x kernel adopted it under the name bonding. Bonding was first used in Beowulf clusters, where it was designed to speed up data transfer between cluster nodes. Now a word on how bonding works. To understand it we need to start with a NIC's promiscuous (promisc) mode. Normally a NIC only accepts Ethernet frames whose destination hardware address (MAC address) is its own, filtering out everything else to reduce the driver's load. A NIC also supports promiscuous mode, in which it accepts every frame on the wire; tcpdump, for example, runs in this mode. Bonding also runs in this mode and, in addition, the driver rewrites the MAC addresses so that both NICs share the same MAC and can accept frames addressed to it; the frames are then handed to the bond driver for processing.
    Enough theory; the configuration itself is simple, just four steps.
The operating system used in this experiment is Red Hat Enterprise Linux 3.0.
Prerequisite for bonding: the NICs should use the same chipset, and each NIC should have its own independent BIOS chip.

1. Edit the virtual network interface configuration file and assign the IP address
  1. vi /etc/sysconfig/network-scripts/ifcfg-bond0
  2. [root@rhas-13 root]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 ifcfg-bond0
2. # vi ifcfg-bond0 
  1. Change the first line to DEVICE=bond0
  2. # cat ifcfg-bond0
  3. DEVICE=bond0
  4. BOOTPROTO=static
  5. IPADDR=172.31.0.13
  6. NETMASK=255.255.252.0
  7. BROADCAST=172.31.3.254
  8. ONBOOT=yes
  9. TYPE=Ethernet
Note: do not assign an IP address, netmask, or NIC ID to the individual NICs. Put all of that information on the virtual adapter (the bond) instead.
[root@rhas-13 network-scripts]# cat ifcfg-eth0 
  1. DEVICE=eth0
  2. ONBOOT=yes
  3. BOOTPROTO=dhcp
  4. [root@rhas-13 network-scripts]# cat ifcfg-eth1
  5. DEVICE=eth1
  6. ONBOOT=yes
  7. BOOTPROTO=dhcp

3. # vi /etc/modules.conf 
Edit /etc/modules.conf and add the lines below so that the bonding module is loaded at boot and the externally visible virtual network interface device is bond0.
 
Add the following two lines:
  1. alias bond0 bonding
  2. options bond0 miimon=100 mode=1
Note: miimon is used for link monitoring. For example, miimon=100 means the link state is checked every 100 ms; if one link fails, traffic moves to another. The mode value selects the operating mode (the driver supports modes 0 through 6), of which 0 and 1 are the most commonly used.
  1. mode=0, load balancing (round-robin): both NICs carry traffic.
  2. mode=1, fault-tolerance (active-backup): provides redundancy; the NICs work in an active/standby arrangement, i.e. by default only one NIC carries traffic while the other stands by as a backup.
Note that this kind of monitoring only checks the local link, i.e. whether the link from the host to the switch is up. If only the switch's uplink goes down while the switch itself has not failed, bonding considers the link healthy and keeps using it.
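One way to mitigate this limitation (my own sketch, not part of the original article) is to use the bonding driver's ARP monitoring instead of MII monitoring, so that link health is judged by whether a chosen IP target still answers ARP:

  1. alias bond0 bonding
  2. options bond0 mode=1 arp_interval=1000 arp_ip_target=172.31.3.254

Here arp_interval and arp_ip_target are standard bonding module parameters; the target is assumed to be a reachable gateway beyond the switch.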
4. # vi /etc/rc.d/rc.local 
Add the following two lines:
  1. ifenslave bond0 eth0 eth1
  2. route add -net 172.31.3.254 netmask 255.255.255.0 bond0

At this point the configuration is complete; reboot the machine.
If you see the following messages during boot, the configuration succeeded:
................ 
Bringing up interface bond0 OK 
Bringing up interface eth0 OK 
Bringing up interface eth1 OK 
................


Below we look at the behaviour with mode set to 1 and to 0.

mode=1
Working in active-backup mode, eth1 acts as the backup NIC and is marked NOARP.
    1. [root@rhas-13 network-scripts]# ifconfig    # verify the NIC configuration
    2. bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    3.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    4.           UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
    5.           RX packets:18495 errors:0 dropped:0 overruns:0 frame:0
    6.           TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
    7.           collisions:0 txqueuelen:0
    8.           RX bytes:1587253 (1.5 Mb) TX bytes:89642 (87.5 Kb)
    9.   
    10. eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    11.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    12.           UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
    13.           RX packets:9572 errors:0 dropped:0 overruns:0 frame:0
    14.           TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
    15.           collisions:0 txqueuelen:1000
    16.           RX bytes:833514 (813.9 Kb) TX bytes:89642 (87.5 Kb)
    17.           Interrupt:11
    18.   
    19. eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    20.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    21.           UP BROADCAST RUNNING NOARP SLAVE MULTICAST MTU:1500 Metric:1
    22.           RX packets:8923 errors:0 dropped:0 overruns:0 frame:0
    23.           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
    24.           collisions:0 txqueuelen:1000
    25.           RX bytes:753739 (736.0 Kb) TX bytes:0 (0.0 b)
    26.           Interrupt:15
    In other words, in active-backup mode, when one network interface fails (for example the primary switch loses power), there is no network outage; the system works through the NICs in the order given in /etc/rc.d/rc.local, the machine keeps serving clients, and failover protection is achieved.
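A simple way to exercise this failover by hand (a hedged sketch; interface names and the gateway address follow the setup above):

  1. ifconfig eth0 down              # simulate failure of the active slave
  2. cat /proc/net/bonding/bond0     # the remaining slave should now be reported as active
  3. ping -c 4 172.31.3.254          # connectivity should survive the switchover
  4. ifconfig eth0 up                # restore the link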

mode=0    
Load-balancing mode, which can provide twice the bandwidth. Let's look at the NIC configuration:
  1. [root@rhas-13 root]# ifconfig
  2. bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
  3. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
  4. UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
  5. RX packets:2817 errors:0 dropped:0 overruns:0 frame:0
  6. TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
  7. collisions:0 txqueuelen:0
  8. RX bytes:226957 (221.6 Kb) TX bytes:15266 (14.9 Kb)
  9. eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
  10. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
  11. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
  12. RX packets:1406 errors:0 dropped:0 overruns:0 frame:0
  13. TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
  14. collisions:0 txqueuelen:1000
  15. RX bytes:113967 (111.2 Kb) TX bytes:7268 (7.0 Kb)
  16. Interrupt:11
  17. eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
  18. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
  19. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
  20. RX packets:1411 errors:0 dropped:0 overruns:0 frame:0
  21. TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
  22. collisions:0 txqueuelen:1000
  23. RX bytes:112990 (110.3 Kb) TX bytes:7998 (7.8 Kb)
  24. Interrupt:15
 
      In this case, if one NIC fails, only the server's outbound bandwidth drops; network availability is not affected.

You can get a detailed picture of how bonding is working by inspecting bond0's status:
  1. [root@rhas-13 bonding]# cat /proc/net/bonding/bond0
  2. bonding.c:v2.4.1 (September 15, 2003)
  3. Bonding Mode: load balancing (round-robin)
  4. MII Status: up
  5. MII Polling Interval (ms): 0
  6. Up Delay (ms): 0
  7. Down Delay (ms): 0
  8. Multicast Mode: all slaves
  9. Slave Interface: eth1
  10. MII Status: up
  11. Link Failure Count: 0
  12. Permanent HW addr: 00:0e:7f:25:d9:8a
  13. Slave Interface: eth0
  14. MII Status: up
  15. Link Failure Count: 0
  16. Permanent HW addr: 00:0e:7f:25:d9:8b
     Bonding NICs on Linux increases both server reliability and available network bandwidth, providing users with uninterrupted critical services. The method above was tested successfully on several Red Hat releases, with good results. Better to act than to ponder; give it a try!

Reference:
/usr/share/doc/kernel-doc-2.4.21/networking/bonding.txt
 
 
-----------------------------

Finally, today I implemented NIC bonding (binding both NICs so that they work as a single device). Bonding is a Linux kernel feature that allows you to aggregate multiple like interfaces (such as eth0 and eth1) into a single virtual link such as bond0. The idea is pretty simple: get higher data rates as well as link failover. The following instructions were tested on:

  1. RHEL v4 / 5 / 6 amd64
  2. CentOS v5 / 6 amd64
  3. Fedora Linux 13 amd64 and up.
  4. 2 x PCI-e Gigabit Ethernet NICs with Jumbo Frames (MTU 9000)
  5. Hardware RAID-10 w/ SAS 15k enterprise grade hard disks.
  6. Gigabit switch with Jumbo Frame

 


Say Hello To the bonding Driver

This server acts as a heavy-duty FTP and NFS file server. Each night a Perl script transfers lots of data from this box to a backup server, so the network is set up on a switch using dual network cards. I am using Red Hat Enterprise Linux version 4.0, but the instructions should work on RHEL 5 and 6 too.

Linux allows binding of multiple network interfaces into a single channel/NIC using special kernel module called bonding. According to official bonding documentation:

The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.

Step #1: Create a Bond0 Configuration File

Red Hat Enterprise Linux (and its clone such as CentOS) stores network configuration in /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file as follows:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines:

  1. DEVICE=bond0
  2. IPADDR=192.168.1.20
  3. NETWORK=192.168.1.0
  4. NETMASK=255.255.255.0
  5. USERCTL=no
  6. BOOTPROTO=none
  7. ONBOOT=yes

You need to replace IP address with your actual setup. Save and close the file.

Step #2: Modify eth0 and eth1 config files

Open both configuration files using a text editor such as vi/vim, and make sure the file reads as follows for the eth0 interface:

  1. # vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify/append directive as follows:
  1. DEVICE=eth0
  2. USERCTL=no
  3. ONBOOT=yes
  4. MASTER=bond0
  5. SLAVE=yes
  6. BOOTPROTO=none

Open eth1 configuration file using vi text editor, enter:

  1. # vi /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the file reads as follows for the eth1 interface:
  1. DEVICE=eth1
  2. USERCTL=no
  3. ONBOOT=yes
  4. MASTER=bond0
  5. SLAVE=yes
  6. BOOTPROTO=none

Save and close the file.

Step # 3: Load bond driver/module

Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel module configuration file:

  1. # vi /etc/modprobe.conf
Append following two lines:
  1. alias bond0 bonding
  2. options bond0 mode=balance-alb miimon=100
Save the file and exit to the shell prompt. You can learn more about all the bonding options in the kernel's bonding documentation.

Step # 4: Test configuration

First, load the bonding module, enter:

  1. # modprobe bonding
Restart the networking service in order to bring up bond0 interface, enter:
  1. # service network restart
Make sure everything is working. Type the following cat command to query the current status of the Linux kernel bonding driver, enter:
  1. # cat /proc/net/bonding/bond0
Sample outputs:
  1. Bonding Mode: load balancing (round-robin)
  2. MII Status: up
  3. MII Polling Interval (ms): 100
  4. Up Delay (ms): 200
  5. Down Delay (ms): 200
  6. Slave Interface: eth0
  7. MII Status: up
  8. Link Failure Count: 0
  9. Permanent HW addr: 00:0c:29:c6:be:59
  10. Slave Interface: eth1
  11. MII Status: up
  12. Link Failure Count: 0
  13. Permanent HW addr: 00:0c:29:c6:be:63
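If a slave ever shows "MII Status: down" here, checking the physical link with ethtool is a reasonable next step (an aside of mine, not part of the original how-to):

  1. # ethtool eth0 | grep "Link detected"
  2. # ethtool eth1 | grep "Link detected"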

To list all network interfaces, enter:

  1. # ifconfig
Sample outputs:
  1. bond0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
  2. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
  3. inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
  4. UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
  5. RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
  6. TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
  7. collisions:0 txqueuelen:0
  8. RX bytes:250825 (244.9 KiB) TX bytes:244683 (238.9 KiB)
  9. eth0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
  10. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
  11. inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
  12. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
  13. RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
  14. TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
  15. collisions:0 txqueuelen:1000
  16. RX bytes:251161 (245.2 KiB) TX bytes:180289 (176.0 KiB)
  17. Interrupt:11 Base address:0x1400
  18. eth1 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
  19. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
  20. inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
  21. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
  22. RX packets:4 errors:0 dropped:0 overruns:0 frame:0
  23. TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
  24. collisions:0 txqueuelen:1000
  25. RX bytes:258 (258.0 b) TX bytes:66516 (64.9 KiB)
  26. Interrupt:10 Base address:0x1480

Read the official bonding howto, which covers the following additional topics:

  • VLAN Configuration
  • Cisco switch related configuration
  • Advanced routing and troubleshooting

--------------------------------------------------

 
When learning SUSE we run into many situations; having looked into a few of them, today's topic is how to bond two NICs on SUSE. I hope this article helps you learn and remember the SUSE dual-NIC bonding procedure.
 
1. The simple method
---------------------------------------------------------- 
 
Bond the two fabric NICs into bond1:
# vi /etc/sysconfig/network/ifcfg-bond1
-------------------- 
BOOTPROTO='static'
IPADDR='10.69.16.102'
NETMASK='255.255.255.0'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=1 miimon=200'
BONDING_SLAVE0='eth1'
BONDING_SLAVE1='eth2'
-------------------- 
 
Delete the original NIC configuration files and restart the network service:
cd /etc/sysconfig/network/
rm ifcfg-eth1
rm ifcfg-eth2
rcnetwork restart
Use the ifconfig command to check whether the bonding succeeded: if bond1's IP address is up, the two original NICs carry no IP addresses of their own, and their MAC addresses are identical, the bonding succeeded.
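For a more detailed check than ifconfig (a minimal sketch, assuming the bond1 setup above):

cat /proc/net/bonding/bond1     # mode, MII status, and both slaves should be listed
ifconfig                        # bond1 holds the IP; eth1 and eth2 should show the same MAC and no IP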
 
2. The more formal method
---------------------------------------------------------- 
 
Step 1: Change to the network configuration directory:
# cd /etc/sysconfig/network
 
Step 2: Create the ifcfg-bond0 configuration file.
# vi ifcfg-bond0
 
Add the following content to ifcfg-bond0:
#suse 9 kernel 2.6 ifcfg-bond0
BOOTPROTO='static'
device='bond0'
IPADDR='10.71.122.13'
NETMASK='255.255.255.0'
NETWORK='10.71.122.0'
BROADCAST='10.71.122.255'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=1 miimon=200'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
 
Step 3: When the configuration is done, save the file and exit.
 
Step 4: Create the ifcfg-eth0 configuration file.
(After installing SUSE 9, /etc/sysconfig/network already contains two files named after the NICs' MAC addresses; overwrite those two files with the ifcfg-eth0 content below rather than creating new ifcfg-eth0 and ifcfg-eth1 files. On SUSE 10, follow the steps below as written.)
# vi ifcfg-eth0
 
Add the following content to ifcfg-eth0:
DEVICE='eth0'
BOOTPROTO='static'
STARTMODE='onboot'
 
Step 5: Save the file and exit.
 
Step 6: Create the ifcfg-eth1 configuration file.
# vi ifcfg-eth1
 
Add the following content to ifcfg-eth1:
DEVICE='eth1'
BOOTPROTO='static'
STARTMODE='onboot'
 
Step 7: Save the file and exit.
 
Step 8: Restart the network configuration so the changes take effect.
# rcnetwork restart
 
3. The method recommended by SUSE, which I also prefer
----------------------------------------------------------
 
I. Configure loading of the NIC drivers
 
In /etc/sysconfig/kernel, add the NIC drivers to the
MODULES_LOADED_ON_BOOT parameter, for example:
MODULES_LOADED_ON_BOOT="tg3 e1000"
 
Note: in most cases this step is unnecessary. It is only needed when some NICs' drivers initialize too slowly during boot to be recognized, which makes the bonding fail because those slave devices never join the bond.
 
II. Create the configuration files for the NICs to be bonded:
/etc/sysconfig/network/ifcfg-eth*, where * is a number, e.g. ifcfg-eth0, ifcfg-eth1, and so on.
 
Each file contains:
BOOTPROTO='none'
STARTMODE='off'
 
III. Create the bond0 configuration file
/etc/sysconfig/network/ifcfg-bond0
 
Contents:
-------------------- 
BOOTPROTO='static'
BROADCAST='192.168.1.255'
IPADDR='192.168.1.1'
NETMASK='255.255.255.0'
NETWORK='192.168.1.0'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=1 miimon=100 use_carrier=1'
# mode=1 is active-backup; mode=0 is balance-rr
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
-------------------- 
 
IV. For active-backup mode, add the parameter that designates the primary device to BONDING_MODULE_OPTS, for example:
 
BONDING_MODULE_OPTS='mode=1 miimon=100 use_carrier=1 primary=eth0'
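To confirm which slave is actually active after setting primary (a hedged check; the sysfs path assumes a reasonably recent kernel):

cat /proc/net/bonding/bond0 | grep -i "active slave"
cat /sys/class/net/bond0/bonding/active_slave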
 
V. Restart the network service
 
rcnetwork restart
 
VI. Notes
 
(1) In some cases NIC driver initialization can take quite long, which makes the bonding fail. In that case, edit the WAIT_FOR_INTERFACES parameter in the /etc/sysconfig/network/config configuration file and change its value to 30.
 
(2) After configuring bonding, you can verify that it works by pinging from a client while unplugging and re-plugging the network cables on the server.
 
(3) cat /proc/net/bonding/bond0 shows the bonding status. With that, your SUSE dual-NIC bonding is complete.
 
 
from:http://os.51cto.com/art/200911/165875.htm


======================================================
 
References:
  1. http://www.chinaunix.net/jh/4/371049.html
  2. http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html
  3. http://os.51cto.com/art/200911/165875.htm
