Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP

Posted by jacksonkingdom on 2010-12-07
Subject: How to Setup IPMP as Cluster Interconnect
Doc ID: 368464.1  Type: HOWTO
Last Revised: 14-JAN-2009  Status: MODERATED

In this Document
Goal
Solution
References

This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.

Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.7
Sun Solaris SPARC (64-bit)
Solaris Operating System (SPARC 64-bit)


Updated on 14-Jan-2009
Goal
The goal is to show how IPMP can be used as cluster interconnect in a RAC environment.
Solution

To start with the IPMP setup use Note 283107.1.
The IP addresses have been changed to show how it works:

- Physical IP : 192.168.0.99
- Test IP for ce0 : 192.168.0.65
- Test IP for ce1 : 192.168.0.66

Oifcfg requires a specific interface name when configuring the private interface. This cannot be done with IPMP, because there are always two interfaces and the physical IP is switched to whichever interface is currently active.

The recommended solution is not to configure any private interface.

The following steps need to be done to use IPMP for the cluster interconnect:
1. If the private interface has already been configured, delete it with 'oifcfg delif'.
2. Set the CLUSTER_INTERCONNECTS parameter in the spfile/init.ora to the physical IP address that IPMP swaps between interfaces.
3. Set CLUSTER_INTERCONNECTS for your ASM instances as well.
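A minimal sketch of these steps on one node, assuming this node's physical IP is 192.168.0.99; the interface name ce0 and the instance SID orcl1 are examples, not taken from the note:

```shell
# 1. Remove any previously configured private interface definition
#    (interface name ce0 is an example)
oifcfg delif -global ce0

# 2./3. Point the database instance (and likewise each ASM instance)
#       at the IPMP-managed physical IP of its node
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET cluster_interconnects='192.168.0.99' SCOPE=SPFILE SID='orcl1';
EOF
```

The instance must be restarted for the parameter to take effect.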

ATTENTION:

Oracle Clusterware must also use the same physical interface; otherwise an interface failure will only be recognized by the instances, and an instance is evicted after 10 minutes (this mechanism is called IMR). Oracle Clusterware uses the private hostname for communication, so the private hostname in /etc/hosts must resolve to the physical IP (192.168.0.99) that is switched from one interface to the other. The same private hostname must also be used in the Oracle Clusterware configuration during installation.
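For example, the /etc/hosts entry for the private hostname on this node would point at the IPMP-managed physical IP (the hostname rac1-priv is hypothetical):

```shell
# /etc/hosts (excerpt): the private hostname resolves to the physical IP,
# never to either of the -failover test addresses
192.168.0.99   rac1-priv
```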

Subject: Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP
Doc ID: 283107.1  Type: BULLETIN
Last Revised: 20-MAR-2009  Status: PUBLISHED


PURPOSE
-------
To prevent the public LAN from becoming a single point of failure, Oracle highly recommends
configuring a redundant set of public network interface cards (NICs) on each cluster node.
On Sun Solaris platforms, our recommendation is to take advantage of Solaris IP Multipathing
(IPMP) to achieve this redundancy, and to configure the Oracle 10g Virtual IP (VIP) on the
redundant set of NICs assigned to the same IPMP group.


What is the difference between VIP and IPMP ?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IPMP can failover an address to another interface, but not failover to the other node.
Oracle VIP can failover to another interface on the same node or to another host in the cluster.


This note goes over the basic configuration steps required to configure IPMP, plus the steps
to configure the Oracle 10g VIP over the redundant set of NICs.

Note: Sun Trunking, an interface teaming feature provided by Sun, may also be used to achieve
redundancy for the public interfaces.


SCOPE & APPLICATION
-------------------
This article is intended for experienced DBAs and Support Engineers.


0. PRE-CONFIGURATION NOTES
------------------------------------------------------------------------------
For those using Oracle 10g Release 1, you need to apply either patch set 10.1.0.4
or patch #3714210 (on top of 10.1.0.3) in order for the VIP to be able to take advantage of
IPMP on the public network. Please also note that this note does not cover redundancy on the
private interconnect network, which we also recommend making redundant using third-party
technology such as Solaris IPMP.


1. HARDWARE CONFIGURATION
------------------------------------------------------------------------------
In order to make the public network redundant, a minimum of two NIC's need to be installed and
cabled correctly on each cluster node. In a standard IPMP configuration, one of the NIC's will
be used as the primary link where all communications will go through. Upon failure of the
primary link, IPMP will automatically fail the physical & virtual (Oracle VIP) IP addresses to
the standby NIC.


+----------------+ +----------------+
| Server | | Server |
+--+----------+--+ +--+----------+--+
ce0 ce1 ce0 ce1
|(primary) |(standby) ==========> |(failed) |(primary)
| | | |
(vip) | | (vip)
| | | |


In the example above, a server has two public NIC's named ce0 and ce1, each configured and
cabled correctly.


2. SERVER FIRMWARE CONFIGURATION
------------------------------------------------------------------------------
In order to avoid MAC address conflicts between the primary and standby NIC's, a unique
ethernet MAC address must be assigned to each network interface (NIC) on the server. On
Solaris, this can be done by setting the "local-mac-address?" PROM variable to TRUE (the
default value is FALSE) on each cluster node.
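On SPARC hardware this can be done from the running OS with the eeprom command (a sketch; the change takes effect after the next reboot):

```shell
# Show the current setting (false means all NICs share one system-wide MAC)
eeprom local-mac-address?

# Give each NIC its own factory-assigned MAC address
eeprom 'local-mac-address?=true'
```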


3. NETWORK CONFIGURATION
------------------------------------------------------------------------------
In order to configure the VIP over Solaris IPMP, a minimum of four public IP addresses must be
prepared for each server within the cluster.

- One physical IP address bound to the primary interface (the static IP address of the server)
- One unused IP address, which will be configured by Oracle as the VIP for client access
- One test IP address bound to each interface (primary and standby), used by IPMP for failure detection

All four public IP addresses need to reside on the same network subnet. The following is the
list of IP addresses that will be used in the following example.

- Physical IP : 146.56.77.30
- Test IP for ce0 : 146.56.77.31
- Test IP for ce1 : 146.56.77.32
- Oracle VIP : 146.56.78.1


4. SOLARIS IP MULTIPATHING (IPMP) CONFIGURATION
------------------------------------------------------------------------------
All NIC's that are to be used by the VIP must be assigned to the same IPMP group. This is to
ensure that IPMP will automatically relocate the VIP whenever the primary group member (NIC)
experiences a failure. The following is an example configuration for two NICs (ce0 and ce1)
configured in the same IPMP group used for Oracle client connections. In this example both
NICs belong to the same IPMP group "orapub".


/etc/hostname.ce0 configuration (Primary NIC, where physical IP 146.56.77.30 is configured on)
146.56.77.30 netmask + broadcast + group orapub up addif 146.56.77.31 deprecated -failover netmask + broadcast + up

/etc/hostname.ce1 configuration (Standby NIC)
146.56.77.32 netmask + broadcast + deprecated group orapub -failover standby up
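The same addressing can also be applied to a live system without a reboot; the ifconfig commands below mirror the /etc/hostname.* files above (a sketch, assuming the interfaces are not yet plumbed):

```shell
# Primary NIC: physical IP, plus the non-failover test address as ce0:1
ifconfig ce0 plumb 146.56.77.30 netmask + broadcast + group orapub up
ifconfig ce0 addif 146.56.77.31 deprecated -failover netmask + broadcast + up

# Standby NIC: test address only, marked deprecated/standby
ifconfig ce1 plumb 146.56.77.32 netmask + broadcast + deprecated group orapub -failover standby up
```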


With the above configuration, the "ifconfig -a" output should look like the following (NOTE: the ether value is only visible when ifconfig is run as root):


root@jpsun1580[ / ]%>
lo0: flags=1000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ce0: flags=1000843 mtu 1500 index 3
inet 146.56.77.30 netmask fffffc00 broadcast 146.56.79.255
groupname orapub
ether 8:0:20:ee:c5:74
ce0:1: flags=9040843 mtu 1500 index 3
inet 146.56.77.31 netmask fffffc00 broadcast 146.56.79.255
ce1: flags=69040843 mtu 1500 index 4
inet 146.56.77.32 netmask fffffc00 broadcast 146.56.79.255
groupname orapub
ether 8:0:20:ee:c5:75
ce2: flags=1008843 mtu 1500 index 5
inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
ether 8:0:20:ee:c5:77
ce3: flags=1008843 mtu 1500 index 6
inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
ether 8:0:20:ee:c5:76
clprivnet0: flags=1009843 mtu 1500 index 7
inet 172.16.193.1 netmask ffffff00 broadcast 172.16.193.255
ether 0:0:0:0:0:1
root@jpsun1580[ / ]%>


Note that the physical IP address 146.56.77.30 is configured on the primary interface ce0,
while the test IP addresses for the two NIC's (marked as NOFAILOVER) are configured on ce0:1
and ce1 respectively. Also note that both ce0 and ce1 belong to the same IPMP group "orapub",
which means that the physical IP address 146.56.77.30 will automatically relocate to an
available NIC (ce1) whenever the current NIC (ce0) experiences a failure. The three entries
ce2, ce3 and clprivnet0 are private network paths used by Sun Cluster and RAC for internode
cluster communications.
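Before involving Oracle at all, the failover behavior can be exercised with Solaris's if_mpadm utility, which detaches and reattaches a member of an IPMP group (run as root):

```shell
# Simulate a failure of the primary NIC; 146.56.77.30 should move to ce1
if_mpadm -d ce0
ifconfig -a

# Reattach ce0; the physical IP fails back to it
if_mpadm -r ce0
ifconfig -a
```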

5. ORACLE VIRTUAL IP CONFIGURATION
------------------------------------------------------------------------------
Having configured IPMP correctly, the Oracle VIP can now take advantage of IPMP for public
network redundancy. The VIP should now be configured to use all NIC's assigned to the same
public IPMP group. By doing this Oracle will automatically choose the primary NIC within the
group to configure the VIP, and IPMP will be able to fail over the VIP within the IPMP group
upon a single NIC failure.


o New 10g RAC installation
^^^^^^^^^^^^^^^^^^^^^^^^^^
At the second screen of VIPCA (VIP Configuration Assistant, 1 of 2), select all NICs
within the IPMP group on which the VIP should run.


o Existing 10g RAC installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For existing 10g RAC installations, use srvctl to modify the VIP to use all the NICs
within the same IPMP group. The following example configures the VIP for jpsun1580
to use the two NICs specified on the command line.

# srvctl stop nodeapps -n jpsun1580
# srvctl modify nodeapps -n jpsun1580 -o /u01/app/oracle/product/10gdb -A "146.56.78.1/255.255.252.0/ce0|ce1"
# srvctl start nodeapps -n jpsun1580
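The resulting VIP configuration can be verified with srvctl (a sketch; the output format varies by release):

```shell
# Show the node applications, including the VIP address, netmask and interface list
srvctl config nodeapps -n jpsun1580 -a
```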



6. VIP + IPMP BASIC BEHAVIOR (SINGLE FAILURES AND TOTAL FAILURES)
------------------------------------------------------------------------------
Once started, the VIP should run on the primary member of the IPMP group.
In the following example, the VIP 146.56.78.1 is configured on top of ce0, as a logical interface named ce0:2.
The physical IP address 146.56.77.30 is also configured on ce0.


ce0: flags=1000843 mtu 1500 index 3
inet 146.56.77.30 netmask fffffc00 broadcast 146.56.79.255
groupname orapub
ether 8:0:20:ee:c5:74
ce0:1: flags=9040843 mtu 1500 index 3
inet 146.56.77.31 netmask fffffc00 broadcast 146.56.79.255
ce0:2: flags=9040843 mtu 1500 index 3
inet 146.56.78.1 netmask fffffc00 broadcast 146.56.79.255
ce1: flags=69040843 mtu 1500 index 4
inet 146.56.77.32 netmask fffffc00 broadcast 146.56.79.255
groupname orapub
ether 8:0:20:ee:c5:75


Upon failure of the primary interface (ce0), IPMP will automatically relocate the physical &
virtual IP addresses to the next available NIC within the same IPMP group. In the following
example, the physical IP and the VIP have both automatically relocated to ce1:1 and ce1:2.
Note that the test IP addresses on both NIC's do not relocate, as they are used exclusively by
IPMP for failure detection purposes.


ce0: flags=1000843 mtu 1500 index 3
inet 0.0.0.0 netmask 0
groupname orapub
ether 8:0:20:ee:c5:74
ce0:1: flags=9040843 mtu 1500 index 3
inet 146.56.77.31 netmask fffffc00 broadcast 146.56.79.255
ce1: flags=69040843 mtu 1500 index 4
inet 146.56.77.32 netmask fffffc00 broadcast 146.56.79.255
groupname orapub
ether 8:0:20:ee:c5:75
ce1:1: flags=29040843 mtu 1500 index 4
inet 146.56.77.30 netmask fffffc00 broadcast 146.56.79.255
ce1:2: flags=29040843 mtu 1500 index 4
inet 146.56.78.1 netmask fffffc00 broadcast 146.56.79.255


Once the failure on ce0 is repaired, IPMP will automatically fail the physical and Oracle
virtual IP addresses back to the original primary interface (ce0). All intra-node VIP
failovers/failbacks are handled by IPMP, not by Oracle.


ce0: flags=1000843 mtu 1500 index 3
inet 146.56.77.30 netmask fffffc00 broadcast 146.56.79.255
groupname orapub
ether 8:0:20:ee:c5:74
ce0:1: flags=9040843 mtu 1500 index 3
inet 146.56.77.31 netmask fffffc00 broadcast 146.56.79.255
ce0:2: flags=9040843 mtu 1500 index 3
inet 146.56.78.1 netmask fffffc00 broadcast 146.56.79.255
ce1: flags=69040843 mtu 1500 index 4
inet 146.56.77.32 netmask fffffc00 broadcast 146.56.79.255
groupname orapub
ether 8:0:20:ee:c5:75



Upon failure of all public NIC's (total failure), Oracle CRS will relocate the VIP to the next
available node within the cluster.



RELATED DOCUMENTS
-----------------
For details on Solaris IPMP, please refer to the following Solaris documentation available at Sun.com:
o Solaris IP Multipathing Data Sheet
o Solaris 9 9/04 System Administrator Collection >> System Administration Guide: IP Services
o Solaris 10 System Administrator Collection >> System Administration Guide: IP Services

For information on how to configure IPMP for the RAC Cluster Interconnect, please refer to Note 368464.1, "How to Setup IPMP as Cluster Interconnect".

From the ITPUB blog: http://blog.itpub.net/23590362/viewspace-1042742/. Please credit the source when reposting.
