Oracle Clusterware: Components installed. (Doc ID 556976.1)

Posted by rongshiyuan on 2014-05-07



In this Document
  Purpose
  Scope
  Oracle Clusterware: Components installed.
     What is the Oracle Clusterware?
     Components.
     Components creation.
     Conclusion.
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.3 to 11.1.0.6 - Release: 10.2 to 11.1
Information in this document applies to any platform.
***Checked for relevance on 03-Sep-2010***

Purpose

This document explains the different parts of Oracle Clusterware (sometimes also referred to as Cluster Ready Services or CRS) created during a typical installation on Unix. It can be considered an introduction to other available notes that deal with Oracle Clusterware.

Scope

This document is meant as a supplement to and not a replacement of the installation documentation for Oracle Clusterware.

Oracle Clusterware: Components installed.

What is the Oracle Clusterware?

Oracle Clusterware was introduced in 10.1 (initially called CRS) and is the product that underlies RAC. Oracle Clusterware provides several services to RAC, including:

- Group Services
- Node Monitor
- Locking services
- HA Resource management
- Event framework, etc

Oracle Clusterware is distinct from RAC (and the RDBMS), and its components are therefore different as well.

Components.

Several components are needed to have CRS running on a Unix machine. Here is a brief description of each:

1. Daemons and init.* scripts

Oracle Clusterware consists of several daemons, each of which has a specific function inside the stack. The daemons are located in $CRS_HOME/bin. Here is the list of daemons for 10.2.0.3 and later; note that depending on the platform and on whether third-party vendor clusterware is installed, some of the following processes may not be present:

- ocssd.bin 
- crsd.bin 
- evmd.bin 
- oclsvmon.bin 
- oclsomon.bin 
- oprocd 


When the daemons are running, we can say that CRS is fully started. The daemons are started via the init.* scripts (init.cssd, init.crsd and init.evmd). Note that there are fewer init.* scripts than daemons; this is because init.cssd starts more than one daemon:

- init.cssd starts ocssd.bin, oclsomon.bin, oclsvmon.bin and oprocd (the CSS family)
- init.crsd starts crsd.bin 
- init.evmd starts evmd.bin 


Note that some links to these scripts exist under rc.d; they are used to manipulate the control files (see point 3), which in turn help start and stop the whole stack.
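Since the exact set of daemons varies by platform and version, a quick way to see which of them are running is to scan the process list. The loop below is a minimal sketch using the daemon names from the list above; on your platform some of these may legitimately be absent:

```shell
# Report which Oracle Clusterware daemons are currently running.
# Names come from the 10.2.0.3+ list above; some are platform-specific.
crs_daemon_status() {
  for d in ocssd.bin crsd.bin evmd.bin oclsomon.bin oclsvmon.bin oprocd; do
    if ps -e -o comm= | grep -q "^${d}$"; then
      echo "$d: running"
    else
      echo "$d: not running"
    fi
  done
}
crs_daemon_status
```

On a healthy 10.2 cluster node all applicable daemons should report "running"; on a non-cluster host every line reports "not running".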

2.  Oracle Cluster Registry (OCR) and Voting Disk (VD)

The OCR contains the configuration information for the clusterware, like the network endpoints where the daemons (ocssd.bin, crsd.bin,etc) will be listening, cluster interconnect information for RAC, location for VD, etc.

The VD is a communication mechanism where every node reads and writes its heartbeat information. The VD is also used to kill the node(s) when the network communication is lost between one or several nodes in the cluster to prevent a split-brain and protect the database information.
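On a running cluster, the OCR and voting disk locations can be inspected with the standard tools ocrcheck and crsctl (10.2 syntax). The sketch below guards the calls so it degrades gracefully on a non-cluster host; the $ORA_CRS_HOME default is purely illustrative:

```shell
# Inspect OCR integrity/location and the configured voting disk(s).
# The default CRS home below is illustrative; adjust for your install.
CRS_BIN="${ORA_CRS_HOME:-/u01/crs}/bin"

show_ocr_and_vd() {
  if [ -x "$CRS_BIN/ocrcheck" ]; then
    "$CRS_BIN/ocrcheck"                    # OCR integrity and location
    "$CRS_BIN/crsctl" query css votedisk   # configured voting disk(s)
  else
    echo "clusterware tools not found under $CRS_BIN"
  fi
}
show_ocr_and_vd
```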

3. Control files (also known as SCLS_SRC files)

 These files are used to control some aspects of Oracle Clusterware like:

- enable/disable processes from the CSSD family (e.g. oprocd, oclsvmon)
- stop the daemons (ocssd.bin, crsd.bin, etc).
- prevent Oracle Clusterware from being started when the machine boots.
- etc.

In a Linux installation those files are located as follows:

[oracle@mbrac1 scls_scr]$ ls -lR 
.: 
total 4 
drwxr-xr-x 4 root root 4096 Oct 28 10:17 mbrac1 

./mbrac1: 
total 8 
drwxr-xr-x 2 oracle root 4096 Oct 28 10:17 oracle 
drwxr-xr-x 2 root root 4096 Oct 28 10:19 root 

./mbrac1/oracle: 
total 4 
-rw-r--r-- 1 oracle root 7 Oct 28 10:32 cssfatal 

./mbrac1/root: 
total 12 
-rw-r--r-- 1 root root 39 Oct 28 10:19 crsdboot 
-rw-r--r-- 1 root root 7 Oct 28 10:17 crsstart 
-rw-r--r-- 1 root root 39 Oct 28 10:17 cssrun 
-rw-r--r-- 1 root root 0 Oct 28 10:19 noclsmon 
-rw-r--r-- 1 root root 0 Oct 28 10:19 nooprocd 


The control files must not be edited manually. Some of them are changed via init.cssd or crsctl, and their location differs between Unix variants (Linux, Solaris, etc.).
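For example, the supported way to change the "start at boot" flag is crsctl (run as root), which rewrites the crsstart control file so you never have to touch it by hand. Reading the file is harmless; a sketch for the Linux location shown above:

```shell
# crsstart holds "enable" or "disable"; crsctl is the supported way
# to change it (run as root on a real cluster node):
#   crsctl disable crs   # do not start the stack at boot
#   crsctl enable crs    # start the stack at boot again
# The path below is the Linux location; it differs on other Unixes.
SCLS_DIR=/etc/oracle/scls_scr

show_crsstart() {
  f="$SCLS_DIR/$(hostname)/root/crsstart"
  if [ -r "$f" ]; then
    cat "$f"
  else
    echo "no crsstart file on this host"
  fi
}
show_crsstart
```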

4. inittab entries

In order to start the Oracle Clusterware daemons, the init.* scripts must be run first. These scripts are executed by the init process; to accomplish this, entries are created in the file /etc/inittab.

These are:  

h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1

 

Note: check your platform's inittab manual page for a detailed description of each field.
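The four ':'-separated fields of one of these entries (id, run levels, action, command) can be pulled apart as follows; the "respawn" action is what tells init to restart the script if it ever dies:

```shell
# Split one inittab entry into its four ':'-separated fields.
entry='h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1'
IFS=: read -r id runlevels action process <<EOF
$entry
EOF
echo "id=$id runlevels=$runlevels action=$action"
echo "process=$process"
```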


Some important clarifications:

a. Stopping Oracle Clusterware stops the daemons (ocssd.bin, crsd.bin, etc.), but the init.* scripts remain running.
b. The init.* scripts are needed to start Oracle Clusterware manually (this was introduced in 10.1.0.4).
c. If the init.* scripts are not running, the daemons will not be started.
d. Executing the init.* scripts manually is not supported.
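Because running the init.* scripts by hand is unsupported, manual control of the stack goes through crsctl as root. A guarded sketch (the $ORA_CRS_HOME default is illustrative):

```shell
# Manual control of the stack goes through crsctl, never the init.* scripts.
CRSCTL="${ORA_CRS_HOME:-/u01/crs}/bin/crsctl"

crs_ctl() {
  if [ -x "$CRSCTL" ]; then
    "$CRSCTL" "$@"
  else
    echo "crsctl not found; would run: crsctl $*"
  fi
}

crs_ctl check crs     # report health of the CRS daemons
# crs_ctl stop crs    # stops the daemons; the init.* scripts keep running
# crs_ctl start crs   # starts the daemons again via the init.* machinery
```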

 

5. Wrappers.

The wrappers are shell scripts created under $CRS_HOME/bin. They set the correct environment variables (CRS_HOME, LD_LIBRARY_PATH, etc.) and then run the real executables.

The daemons and other tools are executed this way. One example is ocssd:

 

#!/bin/sh

ORA_CRS_HOME=/u01/64bit/A203/crs
ORACLE_HOME=$ORA_CRS_HOME
export ORA_CRS_HOME ORACLE_HOME 

case `/bin/uname` in
Linux) LD_LIBRARY_PATH=$ORA_CRS_HOME/lib
       export LD_LIBRARY_PATH 
       ;;
# ... other platforms elided ...
esac

case $0 in
*.bin) exec $ORA_CRS_HOME/bin/`basename $0 .bin` "$@" ;;
*)     exec $0.bin "$@" ;;
esac

As can be seen, ocssd sets the environment and then calls ocssd.bin. During installation these scripts are parsed and the correct values for the variables are filled in.

Components creation.

When exactly during the installation process are these components created? The installation manual contains a complete list of requirements that must be fulfilled before invoking the installer (runInstaller); meeting them up front prevents many potential installation issues.

After fulfilling the pre-installation requirements,  the basic installation steps to follow are:

1. Invoke the Oracle Universal Installer (OUI)

2. Enter the different information for some components like:

- name of the cluster
- public and private node names
- location for OCR and Voting Disk
- network interfaces used for RAC instances
- etc.

3. After the Summary screen, OUI starts copying the libraries and executables into the $CRS_HOME (the $ORACLE_HOME for Oracle Clusterware) on the local node.

- at this point the daemons and the init.* scripts are created and configured properly.

- note that for CRS only some client libraries are relinked, not all the executables (as is done for the RDBMS).

4. Later the software is propagated to the rest of the nodes in the cluster and the oraInventory is updated.

5. The installer then asks you to execute root.sh on each node. Up to this step, the Oracle Clusterware software lives entirely inside the $CRS_HOME. Running root.sh creates several components outside the $CRS_HOME:

- OCR and VD will be formatted

- control files (or SCLS_SRC files ) will be created with the correct contents to start Oracle Clusterware.

- /etc/inittab will be updated and the init process is notified

- the init.* scripts (init.cssd, init.crsd, etc.) will start the daemons (ocssd.bin, crsd.bin, etc.). When all the daemons are running, the installation can be considered successful.

- On 10.2 and later, running root.sh on the last node in the cluster also will create the nodeapps (VIP, GSD and ONS). On 10.1, VIPCA is executed as part of the RAC installation.

Note that root.sh MUST be executed on one node at a time, and after the installation has finished successfully root.sh must not be executed again. If an error occurs and Oracle Clusterware is not fully started, it is valid to execute root.sh again.

6. After running root.sh on each node, continue with the OUI session. After you press the 'OK' button, OUI records the information for the public and cluster_interconnect interfaces, and CVU (Cluster Verification Utility) is executed.
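The same post-installation verification that CVU performs in this step can be re-run by hand at any time (10.2 syntax; -n all checks every node). A guarded sketch, with an illustrative $ORA_CRS_HOME default:

```shell
# Re-run the post-crsinst verification manually with cluvfy.
CLUVFY="${ORA_CRS_HOME:-/u01/crs}/bin/cluvfy"

run_cluvfy_postcheck() {
  if [ -x "$CLUVFY" ]; then
    "$CLUVFY" stage -post crsinst -n all -verbose
  else
    echo "cluvfy not found; would run: cluvfy stage -post crsinst -n all"
  fi
}
run_cluvfy_postcheck
```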

Conclusion.

Oracle Clusterware is different from the RDBMS, and as such it is important to keep in mind the different parts needed and created during a typical installation on a Unix OS. This document is not a replacement for the installation manual, which contains the complete information for installing the product. Anyone preparing to install Oracle Clusterware should follow the installation manual and ensure the requirements are in place before starting the OUI.


References

NOTE:557934.1 - Oracle Clusterware: Patch installation

From the ITPUB blog. Link: http://blog.itpub.net/17252115/viewspace-1155634/. Please credit the source when reprinting.
