Configure GC Agents to Monitor Virtual Hostname in HA environments_406014.1

Posted by rongshiyuan on 2014-08-18

How to Configure Grid Control Agents to Monitor Virtual Hostname in HA environments (Doc ID 406014.1)


In this Document

Abstract
History
Details
  Overview and Requirements
  Installation and Configuration
Summary
References


APPLIES TO:

Enterprise Manager Base Platform - Version 10.2.0.1 to 11.1.0.1 [Release 10.2 to 11.1]
Information in this document applies to any platform.
Checked for relevance on Feb-18-2013
Checked for relevance on 23-Jul-2014

ABSTRACT

Scope and Application:

Grid Control 10.2.0.1 +
All Unix platforms; for Windows use Note 464191.1
This document provides a general reference for Grid Control administrators on configuring 10g Grid Control agents in Cold Failover Cluster (CFC) environments

HISTORY

Last Updated 04-SEP-2010
Expire Date 03-SEP-2010

DETAILS

Overview and Requirements

In order for a Grid Control agent to fail over to a different host, the following conditions must be met:

1. Installations must be done using a Virtual Hostname associated with a unique IP address.
2. The virtual hostname used for the group must be used for every service that runs inside this virtual group; for example, listeners, HTTP servers, iAS, etc. must use the virtual hostname.
3. Install on a shared disk/volume that holds the binaries, configuration, and runtime data.**
4. Configuration data and metadata must also fail over to the surviving node.
5. The inventory location must also fail over to the surviving node.
6. The software owner and timezone parameters must be the same on all cluster member nodes that will host this agent.

 

** Note: Any reference to "shared" also applies to non-shared failover volumes, which can be mounted on the active host after failover.

 
An agent must be installed on each physical node in the cluster to monitor the local services. 

As an alternative for CFC deployments, Grid Control 10.2.0.4 offers a "relocate_target" feature via EMCLI, where a physical agent monitors all virtual services hosted by its physical cluster member.  See Note 577443.1 for details.

Installation and Configuration

 

A. Setup the Virtual Hostname/Virtual IP Address

The virtual hostname must be static and resolvable consistently on the network.  All nodes participating in the setup must resolve the virtual IP address to the same hostname.  Standard TCP/IP tools such as "nslookup" and "traceroute" can be used to verify the hostname.  Validate using the commands listed below:

nslookup <virtual hostname>
-> returns the virtual IP address and the fully qualified hostname

nslookup <virtual IP address>
-> returns the virtual IP address and the fully qualified hostname


Make sure to try these commands on every node of the cluster, and verify that the correct information is returned.
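The per-node check can be scripted. The sketch below (the node names and virtual hostname are illustrative) only prints the ssh/nslookup commands to run, so you can review them before executing; every node must return the same virtual IP address.

```shell
# Hypothetical virtual hostname and cluster node list -- substitute your own.
VHOST=lxdb.acme.com
NODES="node1 node2"

# Emit the per-node verification commands; run each by hand (or pipe to sh)
# and confirm every node resolves VHOST to the SAME virtual IP address.
resolution_checks() {
  for n in $NODES; do
    printf 'ssh %s nslookup %s\n' "$n" "$VHOST"
  done
}

resolution_checks
```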



B. Setup Shared Storage

This can be storage managed by the clusterware in use, or any shared file system (FS) volume, as long as it is not an unsupported type such as OCFS V1. The most common shared FS is NFS.

You can also use non-shared volumes that are mounted on the succeeding host upon failover.  (Such is the case in Windows environments.)



C. Setup the Environment

Before you launch the installer, certain environment variables need to be verified. Each of these variables must be set identically for the account installing the software on ALL machines participating in the cluster:

 

  • OS variable TZ (the timezone setting): all cluster member nodes must be time-synchronized, and it is recommended to unset this variable prior to installation
  • PERL variables: variables such as PERL5LIB should also be unset to prevent picking up the wrong set of PERL libraries
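A minimal pre-install sanity check, assuming a POSIX shell. Note that unset inside a script only affects that script's own environment, so run these lines in the installing user's login shell on every cluster node:

```shell
# Clear variables that commonly break a shared-home agent install.
unset TZ PERL5LIB ORACLE_HOME ORACLE_SID

# Confirm they are gone; the grep matches nothing when the environment
# is clean, so only the confirmation message is printed.
env | grep -E '^(TZ|PERL5LIB|ORACLE_HOME|ORACLE_SID)=' || echo "environment is clean"
```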


D. Synchronize OS User ID's

The user and group of the software owner should be defined identically on all nodes of the cluster. This can be verified using the "id" command:

$ id -a
uid=550(oracle) gid=50(oinstall) groups=501(dba)


If you are using a different user for each software home, the agent software owner must be a member of the target's primary group.
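One way to script the comparison is a small helper that checks whether the "id -a" output collected from two nodes is identical; the node outputs below are illustrative:

```shell
# Compare two "id -a" strings, e.g. collected from two cluster nodes
# with: ssh <node> id -a oracle   (the user name is illustrative).
# Succeeds only when uid, gid, and group memberships match exactly.
same_identity() {
  [ "$1" = "$2" ]
}

node1_id='uid=550(oracle) gid=50(oinstall) groups=501(dba)'
node2_id='uid=550(oracle) gid=50(oinstall) groups=501(dba)'

if same_identity "$node1_id" "$node2_id"; then
  echo "software owner matches on both nodes"
fi
```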



E. Setup Inventory

Each failover group (virtual hostname package) should have its own inventory.  Use the same inventory each time you install an agent inside a given virtual group.  This is accomplished by pointing the installer at the group's oraInst.loc file using the "-invPtrLoc" parameter.
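The step above can be sketched as follows. The shared mount point and the oinstall group are illustrative and should match your environment; a /tmp path is used here only so the fragment runs anywhere:

```shell
# SHARE stands in for the virtual group's shared mount point
# (illustrative path; each virtual group gets its own).
SHARE="${SHARE:-/tmp/vgroup1}"
mkdir -p "$SHARE/oraInventory"

# The group-private inventory pointer, later passed to the installer as:
#   runInstaller -invPtrLoc $SHARE/oraInst.loc ...
cat > "$SHARE/oraInst.loc" <<EOF
inventory_loc=$SHARE/oraInventory
inst_group=oinstall
EOF
```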

F. Install the Software:

1. Run the installer, pointing it at the inventory location file oraInst.loc and specifying the hostname of the virtual group. For example:

runInstaller -invPtrLoc /app/oracle/share1/oraInst.loc ORACLE_HOSTNAME=lxdb.acme.com -debug
(The -debug parameter is optional.)
(For 11g agents, you must use a response file for silent installations.)


2. Continue the rest of the installation normally.

 

Note:  Agents will be installed using the default port 3872.   By default, each agent is configured to listen on all NICs.  This may cause startup failures for each subsequent agent.  To avoid this problem, edit the local agent's emd.properties file and change:


AgentListenOnAllNICs=TRUE

to 

AgentListenOnAllNICs=FALSE



Then bounce the local agent.

You can set this value for each additional agent to avoid startup failures.
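The edit can be scripted with sed. In this sketch, EMD_PROPS stands in for the agent's sysman/config/emd.properties file; a scratch copy under /tmp is created so the fragment is self-contained:

```shell
# EMD_PROPS stands in for <AGENT_HOME>/sysman/config/emd.properties;
# a scratch copy is seeded here so the edit can be shown end to end.
EMD_PROPS="${EMD_PROPS:-/tmp/emd.properties}"
echo "AgentListenOnAllNICs=TRUE" > "$EMD_PROPS"

# Rewrite TRUE to FALSE; after editing the real file, bounce the agent:
#   emctl stop agent && emctl start agent
sed 's/^AgentListenOnAllNICs=TRUE$/AgentListenOnAllNICs=FALSE/' "$EMD_PROPS" > "$EMD_PROPS.new" &&
  mv "$EMD_PROPS.new" "$EMD_PROPS"
```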



G. Startup of Services:

Ensure that you start your services in the proper order:

Note: ORACLE_HOSTNAME must be set when running any emctl command to control the virtual agent.


- Establish the virtual IP address on the active node
- Start all services normally, except the agent
- Unset all environment variables such as ORACLE_HOME, *LIB*, ORACLE_SID, PERL5LIB, etc.
- cd to the agent home's bin directory
- Start the agent


In case of failover:

- Establish the virtual IP address on the failover node
- Start all services normally, except the agent
- Unset all environment variables such as ORACLE_HOME, *LIB*, ORACLE_SID, PERL5LIB, etc.
- cd to the agent home's bin directory
- Start the agent
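Tying the steps together, a failover action script might follow this order. The dry-run sketch below only prints the planned commands; the virtual hostname and agent home path are illustrative:

```shell
# Dry-run sketch: print, in order, the steps a failover action script
# would take to start the virtual agent after the virtual IP and the
# other services are up (hostname and agent home are illustrative).
start_agent_plan() {
  echo "unset ORACLE_HOME ORACLE_SID PERL5LIB"
  echo "export ORACLE_HOSTNAME=lxdb.acme.com"
  echo "cd /app/oracle/share1/agent10g/bin"
  echo "./emctl start agent"
}

start_agent_plan
```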

SUMMARY

If you have a large number of virtual groups to monitor on each cluster node, consider using only one agent per cluster node (physical host) and using EMCLI to relocate targets as they fail over to other cluster nodes.

Note 577443.1 How to Setup and Configure Target Relocate using EMCLI

REFERENCES

NOTE:330072.1 - How To Configure Enterprise Manager for High Availability

NOTE:405642.1 - How to Configure Grid Control OMS in Active/Passive CFC Environments failover / HA
NOTE:405979.1 - How to Configure Grid Control Repository in Active/Passive HA environments
NOTE:464191.1 - How to Configure Grid Control Agents in Windows HA - Failover Cluster Environments
NOTE:549270.1 - How to configure Grid Control 10.2.0.4 or 10.2.0.5 Management Servers behind a Server Load Balancer (SLB)
NOTE:577443.1 - Setup and Configure Target Relocate Using EMCLI 10.2 or 11.1
