Configuring Linux for the Oracle 10g VIP or private interconnect using bonding driver

Posted by jacksonkingdom on 2010-12-07
Subject: Configuring Linux for the Oracle 10g VIP or private interconnect using bonding driver
Doc ID: 298891.1 Type: BULLETIN
Modified Date : 02-APR-2008 Status: PUBLISHED


PURPOSE
-------
To prevent the public LAN from becoming a single point of failure, Oracle highly recommends
configuring a redundant set of public network interface cards (NICs) on each cluster node.
On Linux platforms, network redundancy can be achieved with NIC teaming, either by configuring
multiple interfaces into a team with vendor tools or by using the Linux kernel bonding module.

This note will go over the two possible choices for achieving redundancy on Linux.

As inter-node IP address failover is handled by the Oracle-managed VIP, third-party
clusterware-based inter-node IP address failover technologies should not be configured on the
same set of NICs used by the Oracle VIP. Only intra-node IP address failover
functionality should be used in conjunction with the Oracle VIP.


SCOPE & APPLICATION
-------------------
This article is intended for experienced DBAs and Support Engineers.


1. NIC TEAMING BY CONFIGURING MULTIPLE INTERFACES TO A TEAM
-----------------------------------------------------------
Various hardware vendors provide network interface drivers and utilities to achieve NIC teaming.
Please consult your hardware vendor for details on how to configure your system for NIC teaming.


2. NIC TEAMING USING THE LINUX KERNEL BONDING MODULE
----------------------------------------------------
The Linux kernel includes a bonding module that can be used to achieve software level NIC
teaming. The kernel bonding module can be used to team multiple physical interfaces to a single
logical interface, which is used to achieve fault tolerance and load balancing. The bonding
driver is included in Linux kernel version 2.4.12 and newer. Because the
bonding module is delivered as part of the kernel, it can be configured independently
from the interface driver vendor (different interfaces can constitute a single logical
interface).

The configuration steps differ among Linux distributions. This note will go over the
steps required to configure the bonding module on Red Hat Enterprise Linux 3.0.

In the following example, two physical interfaces (eth0 and eth1) will be bonded together to a
single logical interface (bond0), and the VIP will run on top of the single logical interface.

A sample network configuration is as follows:

Default Gateway:
192.168.1.254

Netmask:
255.255.255.0

Interface configuration before bonding:
eth0: IP Address 192.168.1.1
eth1: IP Address 192.168.1.2

After the bonding driver is configured, a logical interface named bondX (where X is a number
starting at 0) is created to represent the team of interfaces.

Interface configuration after bonding:
bond0: IP Address 192.168.1.10


2-1 CONFIGURING THE BONDING DRIVER
----------------------------------
Since the bonding driver is delivered as a kernel module, the following lines need to be added
to /etc/modules.conf as root.

alias bond0 bonding
options bond0 miimon=100

For details on the "options" parameter, please refer to the documents listed in section
2-8. In the above configuration, the MII link monitoring interval is set to 100 ms. MII is used
to monitor the interface link status, and this is a typical configuration for mission critical
systems that require fast failure detection.

Note: MII is an abbreviation of "Media Independent Interface". Many popular Fast Ethernet
adapters use MII to autonegotiate the link speed and duplex mode.

By default, the bonding driver transmits outgoing packets in a round-robin fashion across
the "slave" interfaces. The above example uses this default behavior. For details on changing
this behavior, please also refer to the documents listed in section 2-8.
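The modules.conf change above can be scripted so it is only applied once. The sketch below is
illustrative only: it works on a scratch copy (/tmp/modules.conf.demo is an assumed path used
here for safety); on a real RHEL 3 system you would apply the same logic to /etc/modules.conf
as root.

```shell
# Sketch: idempotently append the bonding configuration to a scratch copy of
# modules.conf. On an actual system, point "conf" at /etc/modules.conf and run as root.
conf=/tmp/modules.conf.demo
: > "$conf"                                # start from an empty scratch file
if ! grep -q '^alias bond0 bonding' "$conf"; then
    cat >> "$conf" <<'EOF'
alias bond0 bonding
options bond0 miimon=100
EOF
fi
grep bond0 "$conf"                         # show what was written
```

The grep guard makes the script safe to re-run: the lines are appended only if no bond0 alias
is already present.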

If you want to use multiple bonding interfaces, modify /etc/modules.conf as in the example below.

alias bond0 bonding
alias bond1 bonding
options bond0 miimon=100 max_bonds=2
options bond1 miimon=100 max_bonds=2

(In this example we have 2 bonding interfaces.)

The "max_bonds" parameter defines how many bonding interfaces will be created.
For details on the "max_bonds" parameter, please refer to the documents listed in section 2-8.

2-2. CONFIGURING THE bond0 INTERFACE
------------------------------------
On RHEL 3.0, network interface parameters are configured in files named
"ifcfg-<interface>", found in the /etc/sysconfig/network-scripts directory. In order to
enable the bonding driver, a configuration file "ifcfg-bond0" needs to be created with
appropriate parameters. As root, create the file "/etc/sysconfig/network-scripts/ifcfg-bond0"
as shown below.

DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
(Please change the IP address, netmask, network, and broadcast values to match your network configuration.)
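The file above can also be generated from a script, which helps keep per-node addressing
consistent across a cluster. This is a sketch only: it writes into an assumed scratch
directory (/tmp/network-scripts) rather than /etc/sysconfig/network-scripts, and uses the
sample addresses from this note as placeholders.

```shell
# Sketch: generate ifcfg-bond0 into a scratch directory. On a real system, set
# "dir" to /etc/sysconfig/network-scripts and run as root.
dir=/tmp/network-scripts
mkdir -p "$dir"
cat > "$dir/ifcfg-bond0" <<'EOF'
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
EOF
cat "$dir/ifcfg-bond0"                     # show the generated file
```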


2-3. CHANGING THE CONFIGURATION FOR THE EXISTING INTERFACES
-----------------------------------------------------------
As root, please change the configuration file "/etc/sysconfig/network-scripts/ifcfg-eth0" as
shown below:

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Please also change the configuration file "/etc/sysconfig/network-scripts/ifcfg-eth1" as shown
below:

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

These steps associate the bond0 interface with its slave interfaces (eth0 and eth1).
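Since the two slave files differ only in the DEVICE line, they can be generated in a loop.
As before, this is a sketch writing to an assumed scratch directory (/tmp/network-scripts);
on a real system the target is /etc/sysconfig/network-scripts and the script runs as root.

```shell
# Sketch: generate identical slave configurations for eth0 and eth1, varying
# only the DEVICE line. Point "dir" at /etc/sysconfig/network-scripts for real use.
dir=/tmp/network-scripts
mkdir -p "$dir"
for nic in eth0 eth1; do
    cat > "$dir/ifcfg-$nic" <<EOF
DEVICE=$nic
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
EOF
done
grep MASTER "$dir"/ifcfg-eth0 "$dir"/ifcfg-eth1    # confirm both point at bond0
```

Note the unquoted EOF delimiter: the heredoc must expand $nic, unlike the fixed-content
ifcfg-bond0 example.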


2-4. RESTART THE NETWORK
------------------------
Execute the following commands as root to apply the changes.

# service network stop
# service network start

If the configuration is correct, the above commands should both return [ OK ].


2-5. CONFIRMING THE NEW CONFIGURATION
-------------------------------------
The following messages should appear in your syslog (/var/log/messages).

Jan 28 16:00:09 rac01 kernel: bonding: MII link monitoring set to 100 ms
Jan 28 16:00:09 rac01 kernel: ip_tables: (C) 2000-2002 Netfilter core team
Jan 28 16:00:11 rac01 ifup: Enslaving eth0 to bond0
Jan 28 16:00:11 rac01 kernel: bonding: bond0: enslaving eth0 as a backup interface with a down link.
Jan 28 16:00:11 rac01 kernel: e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
Jan 28 16:00:11 rac01 kernel: bonding: bond0: link status definitely up for interface eth0.
Jan 28 16:00:11 rac01 kernel: bonding: bond0: making interface eth0 the new active one.
Jan 28 16:00:11 rac01 ifup: Enslaving eth1 to bond0
Jan 28 16:00:11 rac01 kernel: bonding: bond0: enslaving eth1 as a backup interface with a down link.
Jan 28 16:00:11 rac01 kernel: e1000: eth1: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
Jan 28 16:00:11 rac01 kernel: bonding: bond0: link status definitely up for interface eth1.
Jan 28 16:00:11 rac01 network: Bringing up interface bond0: succeeded

The "ifconfig -a" command should return the following output.

bond0 Link encap:Ethernet HWaddr 00:0C:29:DC:83:E8
inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:27 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3462 (3.3 Kb) TX bytes:42 (42.0 b)

eth0 Link encap:Ethernet HWaddr 00:0C:29:DC:83:E8
inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:13 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1701 (1.6 Kb) TX bytes:42 (42.0 b)
Interrupt:10 Base address:0x1424

eth1 Link encap:Ethernet HWaddr 00:0C:29:DC:83:E8
inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:14 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1761 (1.7 Kb) TX bytes:0 (0.0 b)
Interrupt:11 Base address:0x14a4

(Note that other interfaces will also appear in a typical RAC installation)
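Besides syslog and "ifconfig -a", the bonding driver also exposes its state through
/proc/net/bonding/bond0, which is often the quickest place to check per-slave link status.
The sketch below parses a captured sample of that file; the heredoc stands in for the real
/proc file (whose exact layout varies slightly between kernel versions), so the extraction
is an illustration, not a guaranteed parser.

```shell
# Sketch: extract each slave's MII status from bonding driver state. On a live
# system, replace the heredoc with:  cat /proc/net/bonding/bond0
sample=$(cat <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eth0
MII Status: up
Link Failure Count: 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
EOF
)
# Remember the current slave name, then print it with the next MII Status line.
status=$(echo "$sample" | awk '/^Slave Interface:/ {s=$3}
                               s && /^MII Status:/ {print s, $3; s=""}')
echo "$status"
```

On a healthy bond, every slave should report "up"; a slave stuck at "down" here explains a
"link status definitely down" message in syslog.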


2-6. CONSIDERATIONS FOR CRS INSTALLATION
-----------------------------------------
During CRS installation, choose "Public" for the "bond0" interface in the "Specify Network
Interface Usage" OUI page. If the "eth0" and "eth1" interfaces appear in OUI, then make sure
to choose "Do not use" for their types.

2-6a. CONFIGURING BONDING DEVICES AFTER CRS INSTALLATION
--------------------------------------------------------
If CRS is already installed, you can change the interconnect/public interface configuration
using the oifcfg command. Please refer to Note 283684.1.

2-7. VIPCA CONFIGURATION
------------------------
The single interface name (i.e. "bond0") representing the redundant set of NICs is the
interface that should be specified on the second screen of VIPCA (VIP Configuration Assistant,
1 of 2). Make sure not to select any of the underlying non-redundant NIC names in VIPCA, as
they should not be used by Oracle in a NIC teaming configuration.


2-8. OPTIONS FOR THE BONDING DRIVER
-----------------------------------
Various advanced interface, driver and switch configurations are available for achieving a
highly available network configuration. Please refer to the "Linux Ethernet Bonding Driver
mini-howto" for more details.

http://www.kernel.org/pub/linux/kernel/people/marcelo/linux-2.4/Documentation/networking/bonding.txt


RELATED DOCUMENTS
-----------------
Linux Ethernet Bonding Driver mini-howto:
http://www.kernel.org/pub/linux/kernel/people/marcelo/linux-2.4/Documentation/networking/bonding.txt

Red Hat Enterprise Linux 3: Reference Guide -> Appendix A. General Parameters and Modules -> A.3 Ethernet Parameters:

Note 283684.1 - How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster
Note 291962.1 - Setting Up Bonding on SLES 9
Note 291958.1 - Setting Up Bonding in Suse SLES8

From the ITPUB blog. Link: http://blog.itpub.net/23590362/viewspace-1042741/. If reprinting, please credit the source; otherwise legal liability may be pursued.
