
Using Oracle 12c Release 1 Real Application Clusters with Oracle E-Business Suite

Release 12 (Doc ID 1490850.1)


Oracle E-Business Suite 12 has numerous configuration options that can be chosen to suit particular business scenarios, uptime requirements, hardware capability, and availability requirements. This document describes how to migrate Oracle E-Business Suite 12 running on a single database instance to an Oracle Real Application Clusters (Oracle RAC) environment running on Oracle Database 12c Release 1. This document and the associated manuals are applicable to both Oracle Database 12c Release 1 patch sets, 12.1.0.1 and 12.1.0.2.

Note: This document applies to UNIX and Linux platforms only. If you are using Windows and want to migrate to Oracle RAC or ASM, you must follow the procedures described in the Oracle Real Application Clusters Administration and Deployment Guide 12c Release 1 (12.1) and the Oracle Database Administrator's Guide 12c Release 1 (12.1).

The most current version of this document can be obtained in My Oracle Support Knowledge  Document 1490850.1.

There is a change log at the end of this document.

Note: Most documentation links point to the generic Oracle Database 12c Release 1 documentation for Linux. Refer to the appropriate installation documentation for your platform.

A number of conventions are used in describing the Oracle Applications architecture:

Convention Meaning
Application tier Machines (nodes) running Forms, Web, and other services (servers). Sometimes called middle tier.
Database tier Machines (nodes) running the Oracle Applications database.
oracle User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is <SID>_<hostname>.
CONTEXT_FILE This specifies the full path to the Applications context file on the application tier and database tier. The default locations are as follows.
Application tier context file: <INST_TOP>/appl/admin/<CONTEXT_NAME>.xml
Database tier context file: <RDBMS ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml
APPSpwd Oracle Applications database user password.
Monospace Text Represents command line text. Type such a command exactly as shown.
< > Text enclosed in angle brackets represents a variable. Substitute a value for the variable text. Do not type the angle brackets.
\ On UNIX or Linux, the backslash character can be entered to indicate continuation of the command line on the next screen line.

This document is divided into the following sections:

Section 1: Overview
Section 2: Environment
Section 3: Database Installation and Oracle RAC Migration
Section 4: References
Appendix A: Sample Config XML File
Appendix B: Example Grid Installation
Appendix C: Database Conversion - Known Issues
Appendix D: Enabling/Disabling SCAN Listener Support in AutoConfig
Appendix E: Instance and Listener Interaction
Appendix F: Shared ORACLE_HOME and TNS_ADMIN
Appendix G: Known Issues
Change Log

Section 1: Overview

You should be familiar with Oracle Database 12c Release 1, and have a good knowledge of Oracle Real Application Clusters. Refer to Oracle Real Application Clusters Administration and Deployment Guide 12c Release 1 (12.1) when planning to set up Oracle Real Application Clusters and shared devices.

1.1 Cluster Terminology

You should understand the terminology used in a cluster environment. Key terms include the following.
  • Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.
  • Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, instances, services, and listeners.
  • Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an Oracle RAC environment, maximizing throughput and providing resilience to node failure.
  • Real Application Clusters (Oracle RAC) is an Oracle database technology that allows multiple machines to work on the same data in parallel, thereby significantly reducing processing time. An Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.

1.2 Configuration Prerequisites

The prerequisites for using Oracle RAC with Oracle E-Business Suite 12 are as follows:

  1. If you do not already have an existing single instance environment, install Oracle E-Business Suite using Rapid Install.
     
    Note: If you are not planning ASM as part of your Oracle RAC conversion ensure that all your data files, control files, and redo log files of the existing single instance database are located on a shared disk. If your data files, control files, and redo log files currently reside on a local disk, move them to a shared disk and recreate the control files. Refer to Oracle Database Administrator's Guide 12c Release 1 (12.1) for further information on recreating the control files.

  2. Set up the target cluster hardware and interconnect.

Before proceeding, check that you meet the following prerequisites and apply the relevant patches as necessary:

  • For Oracle E-Business Suite 12.0.x you must be on the Oracle E-Business Suite 12.0.2 Release Update Pack (RUP2 - Patch 5484000) or higher, such as the Oracle E-Business Suite Release 12.0.4 Release Update Pack (RUP4 - Patch 6435000).
     
    • Ensure that you have applied the latest AutoConfig patches, following the relevant instructions in Section 6 of My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12.
    • To use the named db listener feature of AutoConfig, you must have applied R12.TXK.A.delta.7 Patch 9386653 or higher.
Note: Apply Patch 6636108 on the application tier(s); it delivers the adbldxml utility that is used to generate the context file on the database tier.
  • For Oracle E-Business Suite 12.1, apply the Oracle E-Business Suite Release 12.1.1 Maintenance Pack (Patch 7303030); this is included in the Oracle E-Business Suite Release 12.1.1 Rapid Install.
     
    • To use the named db listener feature of AutoConfig, apply R12.TXK.B.delta.3 Patch 8919489 or higher.
    • To use the SCAN listener feature of AutoConfig, apply both R12.TXK.B.delta.3 Patch 8919489 and R12.ATG_PF.B.delta.3 Patch 8919491, or Patch 9239090 for Oracle E-Business Suite 12.1.3. In addition, apply Patch 9926448 to fix a known issue with FND_FS/SM alias generation with SCAN enabled.
    • Apply Patch 16982914 to correct a problem where AutoConfig creates the wrong listener name.

Section 2: Environment

2.1 Software and Hardware Configuration

Refer to the relevant platform installation guides for supported hardware configurations, for example Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux and Oracle Real Application Clusters Administration and Deployment Guide 12c Release 1 (12.1). The minimum software versions are as follows:

Component Version
Oracle E-Business Suite 12 12.0.4+
Oracle Database 12.1.0 or higher
Oracle Cluster Ready Services 12.1.0 or higher

You can obtain the latest Oracle Database 12c Release 1 software from: http://www.oracle.com/technology/software/products/database/index.html

Note: The Oracle Cluster Ready Services must be at a release level equal to, or greater than, the Oracle Database version.

2.2 ORACLE_HOME Nomenclature

This document refers to the various ORACLE_HOMEs as follows:

ORACLE_HOME Purpose
SOURCE_ORACLE_HOME Database ORACLE_HOME used by Oracle E-Business Suite 12; it can be any supported version.
12c_ORACLE_HOME Database ORACLE_HOME installed for Oracle RAC Database 12c Release 1.
12c_CRS ORACLE_HOME ORACLE_HOME installed for Oracle Database 12c Release 1 Cluster Ready Services (Infrastructure home).
OracleAS 10.1.2 ORACLE_HOME ORACLE_HOME installed on Application Tier for Oracle Forms and Oracle Reports
OracleAS 10.1.3 ORACLE_HOME  ORACLE_HOME installed on Application Tier for HTTP server

Section 3: Database Installation and Oracle RAC Migration

The configuration steps you need to perform are divided into a number of stages:

3.1 Install Oracle Clusterware 12c Release1
3.2 Install Oracle Database Software 12c Release 1 and Upgrade the Oracle E-Business Suite Database
3.3 Listener Configuration in Oracle Database 12c Release 1
3.4 Configure Shared Storage
3.5 Convert Oracle Database 12c Release 1 to Oracle RAC
3.6 Post Migration Steps
3.7 Enable AutoConfig on the Database Tier
3.8 Establish the Oracle E-Business Suite Environment for Oracle RAC
3.9 Configure Parallel Concurrent Processing

Note: Take full backups of your environment prior to executing these procedures, and after each stage of the migration. These procedures should be validated on a test environment prior to being carried out in production. Users must be logged off the system during these procedures.

3.1 Install Oracle Clusterware 12c Release 1

Note: The installation of Oracle Clusterware 12c Release 1 is now part of the Grid Infrastructure install. This task requires an understanding of the specific type of cluster and infrastructure that are to be deployed; the selection is outside the scope of this document. For convenience, the general steps are outlined below, but you should use the Infrastructure documentation set as the primary reference.

3.1.1 Check the Network Requirements

In Oracle Database 12c Release 1, the Infrastructure install can be configured to specify address management via node addresses, names (as used in older releases), or via Grid Naming Services.  Regardless of the choice here, nodes must satisfy the following requirements:
  • Each node must have at least two network adapters: one for the public network interface, and one for the private network interface (interconnect).
  • For the public network, each network adapter must support the TCP/IP protocol.
  • For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters, and switches that support TCP/IP (Gigabit Ethernet or better is highly recommended). For performance consider using Jumbo frames.
  • Backup public and private network adapters can be configured for each node in order to improve fault tolerance.
  • The interface names associated with the network adapter(s) for each network must be the same on all nodes.

If Grid Naming Service (GNS) is not used, the following addresses must also be configured:

  • An IP address and associated host name for each public network interface that are registered in the DNS.
  • One unused virtual IP address (VIP) and associated virtual host name (registered in the DNS and also resolvable in the hosts file), configured for the primary public network interface. The virtual IP address must be in the same subnet as the associated public interface. After installation, clients can be configured to use either the virtual host name or virtual IP address. If a node fails, its virtual IP address will fail over to another node.
  • A private IP address (and optionally a host name) for each private interface. Oracle recommends that you use private network IP addresses for these interfaces.
  • An additional virtual IP address (VIP) and associated virtual host name for the Scan Listener, which is also registered in the DNS.

For further information, refer to the pre-installation requirements and checklist in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux.
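
For illustration, a minimal hosts-file layout for a two-node cluster might look like the following sketch. All host names and addresses are hypothetical; in a real deployment the public, VIP, and SCAN names would normally be resolved through DNS, with the SCAN name mapping to three addresses.

    # Public interfaces (registered in DNS)
    192.0.2.11   rac1.example.com    rac1
    192.0.2.12   rac2.example.com    rac2
    # Virtual IPs (same subnet as the public interfaces)
    192.0.2.21   rac1-vip.example.com   rac1-vip
    192.0.2.22   rac2-vip.example.com   rac2-vip
    # Private interconnect
    10.0.0.11    rac1-priv
    10.0.0.12    rac2-priv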

Note: A common mistake is to not set up ntpd correctly. Refer to the Setting Network Time Protocol for Cluster Time Synchronization section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux.

3.1.2 Verify the Kernel Parameters

As part of the Infrastructure install, the pre-installation process checks the kernel parameters and, if necessary, creates a "fixup" script that corrects most of the common kernel parameter issues. Follow the installation instructions for running this script.

Detailed hardware and OS requirements are listed in the Oracle Grid Infrastructure Installation Server Hardware Checklist section of Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux.

3.1.3 Set up Shared Storage

The available shared storage options are either ASM or shared file system (clustered or NFS). Use of raw disk devices is only supported for upgrades.

These storage options are described in the Configuring Storage for Oracle Grid Infrastructure and Oracle RAC section of Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux.

3.1.4 Check the Account Setup

3.1.5 Configure Secure Shell on All Cluster Nodes

Secure Shell configuration is covered in detail in both the Oracle Real Application Clusters Installation Guide and the Oracle Grid Infrastructure Installation Guide. Unlike previous releases, where you had to set up Secure Shell manually, the Oracle Database 12c Release 1 installer provides an option to automatically set up passwordless SSH connectivity.

If you have system restrictions that require you to set up SSH manually, such as using DSA keys, or for further information on passwordless SSH, refer to the Configuring SSH Manually on All Cluster Nodes section of Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux.

3.1.6 Run Cluster Verification Utility (CVU)

The installer will automatically run the Cluster Verify tool and provide fix up scripts for OS issues.  However, you can also run the CVU prior to installation to check for potential issues:
  1. Install the cvuqdisk package as detailed in the Installing the cvuqdisk RPM for Linux section in Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux.
     
  2. Use the following command to determine which pre-installation steps have been completed, and which need to be performed:
    $ <12c Grid Software Stage>/runcluvfy.sh stage -pre crsinst -n <node_list>
    Substitute <node_list> with the names of the nodes in your cluster, separated by commas. To fix issues at this stage rather than during the install, consider adding the following options to the above command: -fixup -verbose

  3. Use the following command to check the networking setup with CVU:
    $ <12c Grid Software Stage>/runcluvfy.sh comp nodecon -n <node_list> [-verbose]
  4. Use the following command to check the operating system requirements with CVU:
    $ <12c Grid Software Stage>/runcluvfy.sh comp sys -n <node_list> -p {crs|database} -osdba <osdba_group> -orainv <orainv_group> -verbose
    Substitute <node_list> with a comma-separated list of the names of the nodes in your cluster.
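
As a concrete illustration (the staging directory and node names are hypothetical), the pre-installation and network checks for a two-node cluster could be run as:

    $ cd /stage/grid
    $ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
    $ ./runcluvfy.sh comp nodecon -n rac1,rac2 -verbose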

3.1.7 Install Oracle Clusterware 12c Release 1

  1. Use the same oraInventory location that was created during the installation of Oracle E-Business Suite 12; make a backup of oraInventory prior to starting the installation.

  2. Start runInstaller from the Oracle Clusterware 12c Release 1 staging area, and install as per your requirements. For further information refer to the Installing Oracle Grid Infrastructure for a Cluster section of Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux.
     

    Note: Customers who have an existing Grid Infrastructure install tailored to their requirements can skip this step. Those who do not, who require further information, or who are perhaps doing a test install, should refer to Appendix B for an example walkthrough.


  3. Confirm the Oracle Clusterware function:

    1. After installation, log in as root and use the following command to confirm that your Oracle Clusterware installation is running correctly:
      $ <GRID_HOME>/bin/crs_stat -t -v
    2. Successful Oracle Clusterware operation can also be verified using the following command:
      $ <GRID_HOME>/bin/crsctl check crs

      CRS-4638: Oracle High Availability Services is online
      CRS-4537: Cluster Ready Services is online
      CRS-4529: Cluster Synchronization Services is online
      CRS-4533: Event Manager is online

3.2 Install Oracle Database Software 12c Release 1 and Upgrade the Oracle E-Business Suite Database

Note:  Take a full backup of the oraInventory directory before starting this stage, during which you will run the Oracle Universal Installer (runInstaller) to carry out an Oracle Database Installation with Oracle RAC. In the Cluster Nodes Window, verify the cluster nodes shown for the installation. Select all nodes included in your Oracle RAC cluster.

To install Oracle Database 12c Release 1 software and upgrade an existing database to 12c Release 1, refer to the interoperability note Document 1524398.1 and follow all instructions and steps listed there except the following:
  • Start the new database listener (Conditional)
  • Implement and run AutoConfig
  • Restart Applications server processes (Conditional)

Note: Installing the Example CD results in a problem where OPatch does not detect all the nodes in the RAC cluster. To resolve this problem, once you have installed the Oracle Database 12c software on all RAC nodes, it is essential that you apply the latest OPatch (Patch 6880880) to the Oracle Database home on all of the RAC nodes.

3.3 Listener Configuration in Oracle Database 12c Release 1

The listener configuration can often be confusing when converting an Oracle E-Business Suite database to use Oracle RAC.

There are two types of listener in Oracle Database 12c Release 1 Clusterware: the Scan listener and general database listeners. The Scan listener provides a single named access point for clients, and replaces the use of Virtual IP addresses (VIP) in client connection requests (tnsnames.ora aliases). However, connection requests can still be routed via the VIP name, as both access methods are fully supported.

To start or stop a listener using srvctl, the following three configuration components are required:

  • An Oracle Home from which to run lsnrctl
  • The listener.ora file under the TNS_ADMIN network directory
  • The listener name (defined in listener.ora) to start and stop

The Oracle Home can either be the Infrastructure home or an Oracle Database home. The TNS_ADMIN directory can be any accessible directory. The listener name must be unique within the listener.ora file. For further information, refer to the Listener Configuration for an Oracle RAC Database section of Oracle Real Application Clusters Administration and Deployment Guide 12c Release 1 (12.1).

There are three Listener issues to be considered, which are as follows:

  • Listener configuration in Oracle Database 12c Release 1 Clusterware
  • Listener requirements for converting to Oracle RAC
  • Listener requirements for AutoConfig
Refer to Appendix E for a more detailed explanation of how instances interact with listeners.

3.3.1 Listener Configuration in Oracle Database 12c Release 1 Clusterware

3.3.1.1 General Database Listeners

In Oracle Database 12c Release 1, listeners are configured at the cluster level, and all nodes inherit the port and environment settings. This means that the TNS_ADMIN directory path will be the same on all nodes. So, to create a new listener, listener_ebs, on port <port>, running from the database ORACLE_HOME and with a user-defined TNS_ADMIN directory, you would execute commands based on the following:

$ srvctl add listener -l listener_ebs -o <12c Release 1 ORACLE_HOME> -p <port>
$ srvctl setenv listener -l listener_ebs -T TNS_ADMIN=$TNS_ADMIN

When the listener starts, it will run from the database ORACLE_HOME. srvctl manages the listener.ora file across all the nodes.
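
Once registered, the listener can be started and inspected with srvctl; a brief sketch, using the listener name from the example above:

    $ srvctl start listener -l listener_ebs
    $ srvctl status listener -l listener_ebs
    $ srvctl config listener -l listener_ebs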

3.3.1.2 Scan Listener

The scan listener runs from the infrastructure home and all the configuration files, such as the listener.ora, are handled by cluster services. You only need to register the listener as in the following example command:

$ srvctl add scan_listener -l listener_scan  -p 1521

To use the scan listener, additional host addresses need to be assigned and configured - refer to the IP Name and Address Requirements for Standard Cluster Manual Configuration section of Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux.

Note: The Configuration Prerequisites patches enable AutoConfig support for the scan listener. If you have not applied the prerequisite patches you will only be able to use the listener if you customize both the tnsnames.ora file and DBC connection strings.
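
The SCAN and SCAN listener configuration registered with the cluster can be reviewed and controlled with standard srvctl commands; a short sketch:

    $ srvctl config scan
    $ srvctl config scan_listener
    $ srvctl status scan_listener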

3.3.2 Listener Requirements for Converting to Oracle RAC

Tools such as rconfig, DBCA, and DBUA impose additional restrictions on the choice of listener: the listener must be the default listener, and it must run from the Grid Infrastructure home. So if the default listener is not set up for rconfig, the example in 3.3.1 above would need to be changed to:

$ srvctl modify listener -l LISTENER -p <port> [ if the default LISTENER already exists ]
  or
$ srvctl add listener -p <port>

After conversion, you can reconfigure the listener as required.

3.3.3 Listener Requirements for AutoConfig

This section describes the general database and scan listeners.

3.3.3.1 General Database Listener

Prior to named db listener support (detailed in the Configuration Prerequisites section), AutoConfig created the listener names in the form LISTENER_<hostname>, i.e. there are different listener names on each node in the cluster. If the named DB listener patch has not been applied, you will need to execute the manual steps listed in Section 3.7.4, Update SRVCTL for the new listener.ora, in order to use srvctl with the new listener.ora.

3.3.3.2 Scan Listener

Starting with Oracle E-Business Suite 12.1.3, AutoConfig supports the scan listener as detailed in Configuration Prerequisites.

3.4 Configure Shared Storage

This document does not discuss the setup of shared storage, as there are no Oracle E-Business Suite specific tasks in setting up ASM, NFS (NAS), or clustered storage. Further information is available from the storage and Grid Infrastructure documents listed in Section 4: References.

There are no specific tasks required when setting up ASM for Oracle E-Business Suite. However, the following optional section details the steps for those who want to use rconfig for the conversion to RAC while also moving the database to ASM storage.

3.4.1 Migrating your Oracle Database to ASM while converting to RAC (Optional)

ASM is integrated with the Oracle Grid infrastructure and can be installed and configured during the Grid Infrastructure installation. It can be configured later using asmca.

If you plan to use ASM for the Oracle database, perform the following steps prior to installing the Grid Infrastructure:

3.4.1.1 Create Users and Groups

Oracle recommends using a separate operating system user for the ASM instance rather than the oracle user. For example, to create a "grid" user, create the groups asmdba, asmadmin, and asmoper, and assign them to the "grid" user, as in the following example:

$ useradd -m -u <uid> -g oinstall -G asmadmin,asmdba,asmoper -d <grid home directory> grid
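
A fuller hedged sketch, run as root; the group IDs, user ID, and home directory are hypothetical values to adapt to your own standards:

    # groupadd -g 54327 asmdba
    # groupadd -g 54328 asmoper
    # groupadd -g 54329 asmadmin
    # useradd -m -u 54331 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid grid
    # passwd grid
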
3.4.1.2 Create ASM Disks

ASM requires unformatted (raw) disk partitions. Once selected, they are collated into ASM diskgroups.

Before starting, you need to install the appropriate asmlib rpms for your Linux version. For example, the following three rpms would be used on OEL 5:

  • oracleasm-support-2.1.3-1.el5
  • oracleasmlib-2.0.4-1.el5
  • oracleasm-2.6.18-164.0.0.0.1.el5xen-2.0.5-1.el5
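
The packages are installed as root with rpm; a sketch using the package versions listed above (exact file names depend on your kernel and architecture, so adjust accordingly):

    # rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
               oracleasmlib-2.0.4-1.el5.x86_64.rpm \
               oracleasm-2.6.18-164.0.0.0.1.el5xen-2.0.5-1.el5.x86_64.rpm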

Partition the disks and label them as follows:

  1. Partition the disks using fdisk.
  2. Use oracleasm to label ASM disks.

1. As the root user, determine the partitions available. The following command shows all the partitions known to the OS.

$ cat /proc/partitions

2. Consider a system with two raw, unformatted partitions, /dev/sda1 and /dev/sda2, created by running fdisk on the underlying disk(s).

3. To label the disks for use by ASM, perform the following steps:

  1. As the root user, use the following command to configure oracleasm:
    $ oracleasm configure -i
  2. Initialize the asmlib with the oracleasm init command, which loads the oracleasm module and mounts the oracleasm filesystem:
    $ oracleasm init
  3. Use oracleasm to create the ASM disk label for each disk:
    $ oracleasm createdisk DATA1 /dev/sda1
    $ oracleasm createdisk DATA2 /dev/sda2
  4. Check that the disks are visible using the following command:
    $ oracleasm listdisks
  5. Check that the disks are mounted in the oracleasm filesystem using the following command:
    $ ls -l /dev/oracleasm/disks
Note: Prior to using rconfig to convert the database to RAC on ASM, update the shared storage type in the rconfig XML file to ASM (see the SharedStorage element in the example in Appendix A).

3.5 Convert Oracle Database 12c Release 1 to Oracle RAC

Note: If you are planning to use the same storage location for the RAC conversion using rconfig, take a note of the temporary file names, locations, and sizes, or alternatively make a temporary copy of the v$tempfile contents.

There are three options for converting to Oracle RAC, which are detailed in the Converting Single-Instance Oracle Databases to Oracle RAC section of the Oracle Real Application Clusters Administration and Deployment Guide. These are as follows:

  • DBCA
  • rconfig
  • Enterprise Manager

All of these will convert an Oracle E-Business Suite database to Oracle RAC; which one to use is a matter of personal choice.

The conversion prerequisites are as follows:

  • A clustered Grid Infrastructure install with at least one scan listener address (Section 3.1.1).
  • The default listener running from the Grid Infrastructure home (Section 3.3.2).
    • The port can either be left as the default, or specified during the Grid Infrastructure install.
  • An Oracle Database 12c_ORACLE_HOME installed on all nodes in the cluster (Section 3.2).
  • Shared storage: the database files can already be on shared storage (CFS or ASM) or moved to ASM as part of the conversion (Section 3.4).
As an example, the steps involved for the Admin Managed rconfig conversion are as follows:
  1. As the oracle user, navigate to $12c_ORACLE_HOME/assistants/rconfig/sampleXMLs and open the sample file ConvertToRAC_AdminManaged.xml using an editor such as vi. This XML sample file contains comment lines that provide instructions on how to edit the file for your specific configuration.

  2. Make a copy of the sample ConvertToRAC.xml file and modify the parameters as necessary. Keep a note of the name of your modified copy.
     
    Note: Study the example file and associated notes in Appendix A before you edit your own file and run rconfig.

  3. Execute rconfig using the convert option: convert verify="ONLY" prior to performing the actual conversion. Although this is optional, it is highly recommended as the test validates the parameters and identifies any issues that need to be corrected before the conversion takes place.
     
    Note: Specify the 'SourceDBHome' variable in ConvertToRAC_AdminManaged.xml as the non-RAC Oracle Home (SOURCE_ORACLE_HOME). If you wish to specify the new 12c_ORACLE_HOME instead, start the database from the new Oracle Home first.

  4. Shut down the database instance.

  5. If you are not using an spfile for database startup, you must convert to spfile before running rconfig; use the following command:
    SQL>create spfile='<shared_location>/spfile<SID>.ora' from pfile;
  6. Move the $SOURCE_ORACLE_HOME/dbs/spfile<SID>.ora for this instance to the shared location.

  7. Take a backup of the existing $SOURCE_ORACLE_HOME/dbs/init<SID>.ora and create a new $SOURCE_ORACLE_HOME/dbs/init<SID>.ora with the following parameter:
    spfile='<shared_location>/spfile<SID>.ora'
  8. Start the database instance.

  9. Navigate to $12c_ORACLE_HOME/bin and run rconfig:
    $ ./rconfig <modified ConvertToRAC_AdminManaged.xml>
    This rconfig command will perform the following tasks:
    1. Migrate the database to ASM storage (if ASM is specified as the storage option in the configuration XML file)
    2. Create database instances on all nodes in the cluster
    3. Configure listener and NetService entries
    4. Configure and register CRS resources
    5. Start the instances on all nodes in the cluster
See Appendix C for known issues with database conversion.
 
Note: Query the v$tempfile view; if there are no temporary files listed, create them using the details that were recorded at the beginning of this section.
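
A hedged sketch of recording the temporary files before the conversion and recreating them afterwards if needed; the TEMP tablespace name, file location, and size are assumptions to adapt to your environment:

    SQL> select name, bytes/1024/1024 as size_mb from v$tempfile;
    -- after the conversion, if no rows are returned:
    SQL> alter tablespace TEMP add tempfile '<shared_location>/temp01.dbf' size 4096M;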

3.6 Post Migration Steps

The conversion tools may change some of the configuration options. Most notably, your database will now be in archivelog mode, regardless of whether it was or not prior to the conversion. If you do not want to use archivelog mode, perform the following steps:

  1. Mount but do not open the database, using the startup mount command
  2. Use the command alter database noarchivelog to disable archiving
  3. Shut down the database using the shutdown immediate command
  4. Start up the database with the startup command
For further details of how to control archiving, refer to Oracle Database Administrator's Guide 12c Release 1 (12.1).
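
In an Oracle RAC environment, these steps are typically performed with all instances stopped and the database mounted on a single instance; a hedged sketch using srvctl and SQL*Plus (the database name is a placeholder):

    $ srvctl stop database -d <database_name>
    $ sqlplus / as sysdba
    SQL> startup mount
    SQL> alter database noarchivelog;
    SQL> shutdown immediate
    SQL> exit
    $ srvctl start database -d <database_name>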

Adjust Listener Settings

The Oracle E-Business Suite applications tier connects to the Oracle RAC instances via the port specified in the applications tier context file. If a different listener port was chosen during the conversion, either directly or indirectly, then either change the database listener ports to match the applications tier context files, or alternatively ensure that s_dbport is updated in Section 3.8.1. The value of s_dbport depends on whether the SCAN listener is chosen in Section 3.7.1: if SCAN is chosen, set s_dbport to the SCAN port; otherwise use the local listener port.

3.7 Enable AutoConfig on the Database Tier

3.7.1 Steps to Perform On All Oracle RAC Nodes

  1. Ensure that you have applied the Oracle E-Business Suite patches listed in the prerequisites section.
     
  2. Execute $AD_TOP/bin/admkappsutil.pl on the Applications Tier to generate an appsutil.zip file for the database tier. 

    Note: Ensure that Patch 10427234 has been applied to Oracle E-Business Suite Release 12.1.3 before running admkappsutil.pl on the applications tier. This fix ensures that all the instance services are registered with all the SCAN listeners in the Oracle RAC environment.

  3. Copy the appsutil.zip file to the database tier in the 12c_ORACLE_HOME.
     
  4. Unzip the appsutil.zip file to create the appsutil directory in the 12c_ORACLE_HOME.
     
  5. Copy the jre directory from <SOURCE_ORACLE_HOME>/appsutil to <12c_ORACLE_HOME>/appsutil.
     
  6. Create a <CONTEXT_NAME> directory under <12c_ORACLE_HOME>/network/admin, using the new instance name when creating the context directory. Normally the database name and instance prefix are the same, but if you want the instance prefix to be different from the database name, create the directory as <instance_prefix>1_<hostname>. For example, if your database name is VISRAC and you want to use "vis" as the instance prefix, create the directory as vis1_<hostname>.
     
  7. Set the following environment variables:
    ORACLE_HOME=<12c_ORACLE_HOME>
    LD_LIBRARY_PATH=<12c_ORACLE_HOME>/lib:<12c_ORACLE_HOME>/ctx/lib
    ORACLE_SID=<instance name>
    PATH=$PATH:$ORACLE_HOME/bin
    TNS_ADMIN=$ORACLE_HOME/network/admin/<CONTEXT_NAME>
  8. Copy the tnsnames.ora file from $ORACLE_HOME/network/admin to the $TNS_ADMIN directory, and edit the aliases so that SID=<instance_name> and the local alias is <instance_name>_local.

  9. As the APPS user, run the following command on the primary node to deregister the current configuration:
    SQL>exec fnd_conc_clone.setup_clean;
  10. Set the local_listener parameter to <instance_name>_local and verify that the instances are registered with the EBS database listener.
    SQL>alter system set local_listener='<instance_name>_local' sid='<instance_name>';
  11. From the Oracle Database 12c Release 1 ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command:
    $ adbldxml.pl appsuser=<APPS user> appspass=<APPS password>
    Note:  If you have applied the AutoConfig SCAN listener patches as listed in Configuration Prerequisites, then you will be prompted for the scan listener name and port. If you have configured the scan listener, you will also need to supply the required name and port.  Refer to Appendix D for more information about switching to/from the scan listener.

  12. Set the value of s_virtual_hostname to point to the virtual hostname for the database host, by editing the database context file $ORACLE_HOME/appsutil/<SID>_<hostname>.xml.

  13. From the Oracle Database 12c Release 1 ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script.

  14. Check the AutoConfig log file located under <12c Release 1 ORACLE_HOME>/appsutil/log/<CONTEXT_NAME>/.
Note: To ensure that all AutoConfig TNS aliases are correctly configured to recognize all available nodes, re-run AutoConfig on all nodes. For more details on AutoConfig, refer to My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12.
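
A hedged shell sketch of the environment setup in step 7 and the utilities run in steps 11 and 13; every value in angle brackets is a placeholder, and the utilities will prompt for any arguments not supplied:

    $ export ORACLE_HOME=<12c_ORACLE_HOME>
    $ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/ctx/lib
    $ export ORACLE_SID=<instance_name>
    $ export PATH=$PATH:$ORACLE_HOME/bin
    $ export TNS_ADMIN=$ORACLE_HOME/network/admin/<CONTEXT_NAME>
    $ cd $ORACLE_HOME/appsutil/bin
    $ perl adbldxml.pl appsuser=<APPS user> appspass=<APPS password>
    $ perl adconfig.pl contextfile=$ORACLE_HOME/appsutil/<CONTEXT_NAME>.xml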

3.7.2 Shut Down the Listener and Database

Use the following commands to stop the listener and database:

$ srvctl stop listener -l <listener_name>
$ srvctl stop database -d <database_name>

3.7.3 Update Server Parameter File Settings

After the conversion to Oracle RAC, you will have a central server parameter file (spfile).

It is important to understand the Oracle RAC specific changes brought in by AutoConfig, and to ensure that the context file is in sync with the database initialization parameters. The Oracle Database 12c Release 1 changes will already be reflected in the initialization parameters (from Step 3.2).

The affected parameters are listed in the Oracle RAC template under <12c Release 1 ORACLE_HOME>/appsutil/template/afinit_db121RAC.ora. They are also listed below. Many will have been set by the conversion, and others may have previously been set by you for non-RAC related reasons.

  • service_names
    • Oracle E-Business Suite customers may well have a variety of services already set. You must ensure that service_names includes %s_dbService% (database name) across all instances.

  • local_listener
    • If you are using SRVCTL to manage your database, the installation guide recommends leaving this parameter unset, as it is set dynamically during instance startup. If you are using a non-default listener, this parameter must be set to <instance_name>_local.

  • remote_listener
    • If you are using AutoConfig to manage your connections, the remote_listener parameter must be set to the <database_name>_remote AutoConfig alias.

The following six parameters will all have been set as part of the conversion. The context variables should be updated to be in sync with the database.

  • cluster_database
  • cluster_database_instances
  • undo_tablespace
  • instance_name
  • instance_number
  • thread
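
A hedged SQL*Plus sketch for reviewing these settings after the conversion (run as a SYSDBA user; the parameter list can be extended as needed):

    SQL> show spparameter cluster_database
    SQL> show spparameter undo_tablespace
    SQL> show spparameter instance_number
    SQL> select inst_id, name, value from gv$parameter where name in ('service_names','local_listener','remote_listener','thread');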

3.7.4 Update SRVCTL for the new listener.ora

If you intend to use srvctl to manage your Oracle E-Business Suite database, you must perform the following additional steps:

Note: If you are using a shared Oracle Home, then TNS_ADMIN cannot be shared, as the directory path must be the same on all nodes. See Appendix F for an example of how to use SRVCTL to manage listeners in a shared Oracle Home.
  1. If you wish to use the port allocated to the default listener, stop and remove the default listener.

  2. Add the Oracle E-Business Suite listener:
    $ srvctl add listener -l listener_<database_name> -o <12c Release 1 ORACLE_HOME> -p <port>
    $ srvctl setenv listener -l listener_<database_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin

    Note: If registering the listener with Cluster Services fails with a CRS-0254 authorization failure error, refer to the Known Issues Section.

  3. Check that LISTENER_<hostname> has been updated to LISTENER_<database_name> (for example, LISTENER_EBS) in the listener.ora. This will have been done automatically if you applied the named db listener AutoConfig patch in the Configuration Prerequisites section.
     
  4. On each node, add the AutoConfig listener.ora as an ifile in the $ORACLE_HOME/network/admin/listener.ora.

  5. On each node, add the AutoConfig tnsnames.ora as an ifile in the $ORACLE_HOME/network/admin/tnsnames.ora.
     
  6. On each node, add the AutoConfig sqlnet.ora as an ifile in the $ORACLE_HOME/network/admin/sqlnet.ora.

  7. Add TNS_ADMIN to the database:
    $ srvctl setenv database -d <database_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin
  8. Start up the database instances and listeners on all nodes. The database can now be managed using srvctl.
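
For steps 4 to 6, the network files under $ORACLE_HOME/network/admin simply include the AutoConfig-generated files; a hedged sketch of the three ifile entries on one node, with the context directory name as a placeholder:

    # $ORACLE_HOME/network/admin/listener.ora
    IFILE=<12c_ORACLE_HOME>/network/admin/<CONTEXT_NAME>/listener.ora

    # $ORACLE_HOME/network/admin/tnsnames.ora
    IFILE=<12c_ORACLE_HOME>/network/admin/<CONTEXT_NAME>/tnsnames.ora

    # $ORACLE_HOME/network/admin/sqlnet.ora
    IFILE=<12c_ORACLE_HOME>/network/admin/<CONTEXT_NAME>/sqlnet.ora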

3.8 Establish the Oracle E-Business Suite Environment for Oracle RAC

3.8.1 Preparatory Steps

Perform the following steps on all Application Tier nodes:

  1. Source the Oracle Applications environment.

  2. Edit SID=<instance_name> and PORT=<port> in the $TNS_ADMIN/tnsnames.ora file, to set up a connection to one of the instances in the Oracle RAC environment.

  3. Confirm that you are able to connect to one of the instances in the Oracle RAC environment.
     
  4. Edit the context variable jdbc_url, adding the instance name to the connect_data parameter.

  5. Run AutoConfig using the command:
    $ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<CONTEXT_NAME>.xml
    For more information on AutoConfig, refer to My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12.
     
  6. Check the AutoConfig log file under $INST_TOP/admin/log/ for errors.

  7. Source the environment using the latest environment files.

  8. Verify the tnsnames.ora and listener.ora files located in $INST_TOP/ora/10.1.2/network/admin and $INST_TOP/ora/10.1.3/network/admin. Check the files to ensure that the correct TNS aliases have been generated for load balancing and failover, and that all the aliases are defined using the virtual hostnames.

  9. Verify the dbc file located at $FND_SECURE. Ensure that the parameter APPS_JDBC_URL is configured with all instances in the environment, and that load_balance is set to YES.
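
For reference, the AutoConfig-generated load-balancing alias used in the next section typically takes a form similar to the following sketch; the database name, virtual host names, and port are placeholders:

    <database_name>_BALANCE=
        (DESCRIPTION=
            (ADDRESS_LIST=
                (LOAD_BALANCE=YES)
                (FAILOVER=YES)
                (ADDRESS=(PROTOCOL=tcp)(HOST=<vip1>)(PORT=<port>))
                (ADDRESS=(PROTOCOL=tcp)(HOST=<vip2>)(PORT=<port>))
            )
            (CONNECT_DATA=
                (SERVICE_NAME=<database_name>)
            )
        )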

3.8.2 Set Up Load Balancing

Implement load balancing across the Oracle E-Business Suite database connections:

  1. Using the context editor (via the Oracle Applications Manager interface), modify the variables as follows:

    1. To load-balance the Oracle Forms database connections, set the value of "Tools OH TWO_TASK" (s_tools_twotask) to point to the <database_name>_balance alias in the tnsnames.ora file.

    2. To load-balance the Self-Service (HTML-based) database connections, set the value of "iAS OH TWO_TASK" (s_weboh_twotask) and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias) to point to the <database_name>_balance alias in the tnsnames.ora file.

  2. Execute AutoConfig by running the command:
    $ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<CONTEXT_NAME>.xml
  3. Restart the Oracle E-Business Suite processes, using the new scripts generated by AutoConfig.

  4. Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.
     
    Note: Repeat all of the steps above when setting up load balancing on a new Application Tier node.

3.9 Configure Parallel Concurrent Processing

3.9.1 Check prerequisites for setting up Parallel Concurrent Processing

To set up Parallel Concurrent Processing (PCP), you must have more than one Concurrent Processing node in your environment. If you need to add another node, follow the appropriate instructions in My Oracle Support Knowledge Document 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone.

Note:  If you are planning to implement a shared Application tier file system, refer to My Oracle Support Knowledge Document 384248.1, Sharing the Application Tier File System in Oracle E-Business Suite Release 12, for the necessary configuration steps. If you are adding a new Concurrent Processing node to the Application Tier, you will need to set up load balancing on the new Application Tier by repeating steps 1-6 in Section 3.8.2.

3.9.2 Set Up PCP

  1. Use Oracle Applications Manager to edit the applications context file and set the value of the variable APPLDCP to ON.

  2. Execute AutoConfig by running the following command on all concurrent processing nodes:
    $ $INST_TOP/admin/scripts/adautocfg.sh
  3. Source the Applications environment.

  4. Check the tnsnames.ora and listener.ora configuration files located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.

  5. Restart the Applications listener processes on each Applications tier.

  6. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator Responsibility. Navigate to Install > Nodes screen, and ensure that each node in the cluster is registered.

  7. Verify that the Internal Monitor for each node is defined with the correct primary node specification and work shift details. For example, Internal Monitor: Host1 must have primary node as host1. Also ensure that the Internal Monitor manager is activated: this can be done from Concurrent > Manager > Administrator.

  8. Set the $APPLCSF environment variable on all Concurrent Processing nodes to point to a log directory on a shared file system.

  9. Set the $APPLPTMP environment variable on all Concurrent Processing nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. (This value should be pointing to a directory on a shared file system.)

  10. Set profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required. If you set it to 'ON', a concurrent manager will fail over to a secondary Application tier node if the database instance to which it is connected becomes unavailable.
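
For step 4, a quick hedged check from each applications tier node (the path follows the document's conventions; adjust for your instance):

    $ grep -E "FNDSM|FNDFS" $INST_TOP/ora/10.1.2/network/admin/tnsnames.ora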

3.9.3 Set up the Transaction Managers

  1. Shut down the application services on all nodes

  2. Shut down all the database instances cleanly in the Oracle RAC environment, using the command:
    SQL>shutdown immediate;
  3. Add the following parameters to the $ORACLE_HOME/dbs/<SID>_ifile.ora:
    _lm_global_posts=TRUE
    _immediate_commit_propagation=TRUE
  4. Start each of the instances.

  5. Start up the application services on all nodes.

  6. Log on to Oracle E-Business Suite Release 12 as SYSADMIN and select the System Administrator responsibility. Navigate to Profile > System, and change the profile option 'Concurrent: TM Transport Type' to 'QUEUE'. Verify that the transaction manager works across the Oracle RAC instances.

  7. Navigate to Concurrent > Manager > Define screen, and set up the primary and secondary node names for the transaction managers.

  8. Restart the concurrent managers.

  9. Check the status the transaction managers (using Concurrent > Manager > Administrator) and activate them if necessary.

3.9.4 Set Up Load Balancing on the Concurrent Processing Nodes

  1. Edit the applications context file through the Oracle Applications Manager interface, and set the value of Concurrent Manager TWO_TASK (s_cp_twotask) to the load balancing alias (<database_name>_balance).
     
  2. Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on each of the concurrent nodes.

Section 4: References

This section lists the most commonly referenced documents.

  • My Oracle Support Knowledge Document 745759.1 Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap
  • My Oracle Support Knowledge Document 384248.1 Sharing The Application Tier file system in Oracle E-Business Suite Release 12
  • My Oracle Support Knowledge Document 387859.1 Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12
  • My Oracle Support Knowledge Document 406982.1 Cloning Oracle Applications Release 12 with Rapid Clone
  • My Oracle Support Knowledge Document 240575.1 RAC on Linux Best Practices
  • My Oracle Support Knowledge Document 265633.1 Automatic Storage Management Technical Best Practices
  • My Oracle Support Knowledge Document 1524398.1 Oracle Applications Release R12 with Oracle 12c Release 1

Appendix A: Sample Config XML file

This appendix shows example contents of an rconfig XML input file. Comments of the form <!-- comment --> appear in the code, and notes have been inserted between sections of code.

<?xml version="1.0" encoding="UTF-8"?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
    <n:Convert verify="YES">
Note: The Convert verify option in the ConvertToRAC.xml file can take one of three values YES/NO/ONLY:

1. YES: rconfig performs prerequisites check and then starts conversion.

2. NO: rconfig does not perform the prerequisites check prior to starting the conversion.

3. ONLY: rconfig only performs prerequisites check and does not start the conversion.

In order to validate and test the settings specified for converting to Oracle RAC with rconfig, it is advisable to execute rconfig using Convert verify="ONLY" prior to carrying out the actual conversion.

      <!-- Specify current OracleHome of non-RAC database for SourceDBHome -->
      <n:SourceDBHome>/oracle/product/12.1.0/db_1</n:SourceDBHome>

      <!-- Specify OracleHome where the Oracle RAC database should be configured. It can be same as SourceDBHome -->
      <n:TargetDBHome>/oracle/product/12.1.0/db_1</n:TargetDBHome>
      <!-- Specify SID of non-RAC database and credential. User with sysdba role is required to perform conversion -->
      <n:SourceDBInfo SID="sales">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>

      <!-- Specify the ASM instance on the local node and its credentials, if the database is being migrated to ASM storage -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>welcome</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:ASMInfo>

      <!-- Specify the list of nodes that should have Oracle RAC instances running. LocalNode should be the first node in this nodelist. -->
      <n:NodeList>
        <n:Node name="node1"/>
        <n:Node name="node2"/>
      </n:NodeList>

      <!-- Specify the prefix for Oracle RAC instances. It can be same as the instance name for non-RAC database or different. The instance number will be attached to this prefix. The InstancePrefix tag is optional starting with Oracle Database 11.2. If left empty, it is derived from db_unique_name. -->
      <n:InstancePrefix>sales</n:InstancePrefix>

      <!-- Listener details are no longer needed starting with Oracle Database 11.2. The database is registered with a default listener and SCAN listener running from the Oracle Grid Infrastructure home. -->

      <!-- Specify the type of storage to be used by the Oracle RAC database. Allowable values are CFS and ASM. The non-RAC database should have the same storage type. -->
      <n:SharedStorage type="ASM">

Note: rconfig can also migrate the single instance database to ASM storage. If you want to use this option, specify the ASM parameters for your environment in the XML file above.

The ASM instance name specified above is only the current node ASM instance. Ensure that ASM instances on all the nodes are running and the required diskgroups are mounted on each of them.

The ASM disk groups can be identified by issuing the following statement when connected to the ASM instance:
SQL>select name, state, total_mb, free_mb from v$asm_diskgroup;


        <!-- Specify Database Area Location to be configured for Oracle RAC database. If this field is left empty, current storage will be used for Oracle RAC database. For CFS, this field will have directory path. -->
        <n:TargetDatabaseArea>+ASMDG</n:TargetDatabaseArea>

Note: If you are using CFS for your current database files, specify "NULL" to use the same location, unless you want to switch to another CFS location. If you specify a path for TargetDatabaseArea, rconfig will convert the files to the Oracle Managed Files naming convention.


 
Note: The following comment is contained in the XML file. If the Flash Recovery Area field is left empty (as in the example below), the current recovery area of the non-RAC database will be configured for the Oracle RAC database. If the current database does not have a recovery area, the resulting Oracle RAC database will not have one either.

        <!-- Specify the Flash Recovery Area to be configured for the Oracle RAC database. If this field is left empty, the current recovery area of the non-RAC database will be configured for the Oracle RAC database. If current database is not using a Recovery Area, the resulting Oracle RAC database will not have a recovery area. -->
        <n:TargetFlashRecoveryArea>+ASMDG</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>

Appendix B: Example Grid Installation

The following instructions assume a fresh Grid install and are intended for those less experienced with Clusterware, or who may be doing a test install.
  1. Start the Installer.
  2. Choose "Install and Configure Grid Infrastructure for a Cluster". Click "Next".
  3. Choose "Advanced Configuration", This is needed when specifying a scan name that is different to the cluster name. Click "Next".
  4. Choose Languages. Click "Next".
  5. Uncheck "Configure GNS" - this is for experienced users only.
  6. Enter cluster name, scan name and scan port. Click "Next".
  7. Add Hostnames and Virtual IP names for nodes in the cluster.
  8. Click "SSH Connectivity". Click "Test". If SSH is not established, enter OS user and password and let the installer set up passwordless connectivity.  Click "Test" again, and if successful click "Next"
  9. Choose one interface as public, one as private. eth0 should be public; eth1 is usually set up as private. Click "Next".
  10. Uncheck "Grid Infrastructure manager" page "configuration repository".
  11. Choose Shared File System. Click "Next".
  12. Choose the required level of redundancy, and enter location for the OCR disk. This must be located on shared storage. Click "Next".
  13. Choose the required level of redundancy, and enter location for the voting disk. This must be located on shared storage. Click "Next".
  14. Choose the default of "Do not use" for IPMI. Click "Next".
  15. Select an operating system group for the operator and dba accounts. For the purposes of this example installation, choose the same group, such as "dba", for both. Click "Yes" in the popup window that asks you to confirm that the same group should be used for both, then click "Next".
  16. Enter Oracle Base and Oracle Home. The Oracle Home should not be located under Oracle Base. Click "Next"
  17. Enter Create Inventory location. Click "Next".
  18. In the "Root Script Execution" page either select or unselect "Automatically run configuration scripts" option (as you prefer).
  19. System checks are now performed.  Fix any errors by clicking on "Fix and Check Again", or check "Ignore All" and click "Next". If you are not familiar with the possible effects of ignoring errors, it is advisable to fix them.
  20. Save the response file for possible future use, then click "Finish" to start the install.
  21. You will be required to run various scripts as root during the install. Follow the relevant on-screen instructions.

Appendix C: Database Conversion - Known Issues

Database Upgrade Assistant (DBUA)

If DBUA is used to upgrade an existing AutoConfig-enabled Oracle RAC database, you may encounter an error about a pre-11gR2 listener existing in CRS. In such a case, copy the AutoConfig listener.ora to the <12c_ORACLE_HOME>/network/admin directory, and merge the contents in with the existing listener.ora file.

Cluster Issues

After adding a new node, verify that the oracle software owner and group have rwx permissions on the vip, ons, and gsd resources, using crs_getperm. (This needs to be performed if you are unable to add the listener resource due to permission errors.) For example:

$CRS_HOME/bin/crs_getperm ora.<node>.vip -u <user>

If the user does not have rwx privileges, set them using crs_setperm:

$CRS_HOME/bin/crs_setperm ora.<node>.vip -u user:<username>:rwx

Appendix D: Enabling/Disabling SCAN Listener Support in Autoconfig

Managing the scan listener is handled on the database server.  All that is required for the Applications Tier is for AutoConfig to be run again to pick up the updated connection strings.

  • Switching from SCAN to non-SCAN
    • Set s_scan_name=null, s_scan_port=null, and s_update_scan=TRUE
    • local_listener should be <instance_name>_local and remote_listener <database_name>_remote [to allow failover aliases]
    • Run AutoConfig on the database tier to create the non-SCAN aliases in the tnsnames.ora
    • Run AutoConfig on the applications tier to create the non-SCAN aliases in the tnsnames.ora
  • Re-enabling SCAN
    • Set s_scan_name=<scan_name>, s_scan_port=<scan_port>, and s_update_scan=TRUE
    • Modify the remote_listener to "<scan_name>:<scan_port>" using alter system set remote_listener='...' for all instances
    • Run AutoConfig on the database tier to create the SCAN aliases in the tnsnames.ora
    • Run AutoConfig on the applications tier to create the SCAN aliases in the tnsnames.ora
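
A hedged example of the remote_listener change when re-enabling SCAN; the SCAN name and port are hypothetical values for illustration:

    SQL> alter system set remote_listener='mycluster-scan.example.com:1521' scope=both sid='*';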

Appendix E: Instance and Listener Interaction

Understanding how instances and listeners interact is best done with a worked example.

Consider a 2-node Oracle RAC cluster, with nodes C1 and C2.

In this example, two local listeners are used, the default listener and an EBS listener.  There is nothing special about the EBS listener - it could equally have been called the ABC listener.

Listener Configuration

Listener Type    | Node            | SCAN Name | Host Name | VIP Name | Listener Host   | Listener Port | Listener Address
EBS listener     | C1              | N/A       | C1        | C1-VIP   | C1              | 1531          | C1 and C1-VIP
EBS listener     | C2              | N/A       | C2        | C2-VIP   | C2              | 1531          | C2 and C2-VIP
Default listener | C1              | N/A       | C1        | C1-VIP   | C1              | 1521          | C1 and C1-VIP
Default listener | C2              | N/A       | C2        | C2-VIP   | C2              | 1521          | C2 and C2-VIP
SCAN             | Either C1 or C2 | C-SCAN    | N/A       | N/A      | Either C1 or C2 | 1521          | C-SCAN

Note the following:
  • The SCAN and local listener can use the same port as they listen on different addresses.
  • The SCAN listener can run on either C1 or C2.
  • Listeners have no intrinsic relationship to specific instances.

SRVCTL configuration

Listener Type            | Listener Name                                   | Listener Port | Listener Host   | Listener Address
General [Local] listener | LISTENER                                        | 1521          | C1              | C1 and C1-VIP
                         |                                                 | 1521          | C2              | C2 and C2-VIP
EBS listener             | ebs_listener                                    | 1531          | C1              | C1 and C1-VIP
                         |                                                 | 1531          | C2              | C2 and C2-VIP
SCAN                     | SCAN [ name doesn't matter and can be default ] | 1521          | Either C1 or C2 | C-SCAN

Instance to Listener Assignment

The relationship between instances and listeners is established by the local_listener and remote_listener init.ora parameters (or spfile):

Local_Listener
  • The instance broadcasts to the address list, informing the listeners that the instance is now available. The local listener must be running on the same node as the instance, as the listener spawns the oracle processes. The default value comes from the cluster.
Remote_Listener
  • The instances broadcast to the address list informing the listeners that the instance is now available to accept requests, and that the requests are to be handled by the local_listener address. The remote hosts can be on any  machine. There is no default value for this parameter.

Database | Instance | Node | Local_Listener | Remote_Listener | Default Listener Status | EBS Listener Status | SCAN Listener Status
D1 | I1 | C1 | Set to C1 & C1-VIP on 1531 | C-SCAN/1521 | I1 is unavailable | I1 is available | I1 is available via redirect to EBS Listener for C1
D1 | I1 | C1 | Set to C1 & C1-VIP on 1531 | C1/C1-VIP on 1531, C2/C2-VIP on 1531 | I1 & I2 are unavailable | I1 is available. I2 is available via redirect to EBS Listener for C2. | I1 not available
D1 | I1 | C1 | Not set. Instance uses cluster default listener, i.e. C1 & C1-VIP on 1521 | C-SCAN/1521 | I1 is available | I1 is unavailable | I1 is available via redirect to Default Listener for C1
D1 | I2 | C2 | Set to C2 & C2-VIP on 1531 | C-SCAN/1521 | I2 is unavailable | I2 is available | I2 is available via redirect to EBS Listener for C2
D1 | I2 | C2 | Set to C2 & C2-VIP on 1531 | C1/C1-VIP on 1531, C2/C2-VIP on 1531 | I2 & I1 are unavailable | I2 is available. I1 is available via redirect to EBS Listener for C1. | I2 not available
D1 | I2 | C2 | Not set. Instance uses cluster default listener, i.e. C2 & C2-VIP on 1521 | C-SCAN/1521 | I2 is available | I2 is unavailable | I2 is available via redirect to Default Listener for C2
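
The local_listener and remote_listener values in effect for each instance can be confirmed with a query such as this hedged sketch:

    SQL> select inst_id, name, value from gv$parameter where name in ('local_listener','remote_listener') order by inst_id, name;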

 

Appendix F: Shared ORACLE_HOME and TNS_ADMIN

In Oracle Database 12c Release 1, listeners are configured at the cluster level, and all nodes inherit the port and environment settings. This means that the TNS_ADMIN directory path will be the same on all nodes. In a shared ORACLE_HOME configuration, the TNS_ADMIN directory must be a local, non-shared directory, in order to be able to use AutoConfig generated network files. These network files will be included as ifiles.

The following is an example of setting up TNS_ADMIN for a shared ORACLE_HOME in a two-node cluster, C1 and C2, with respective instances I1 and I2.

  1. Modify the s_db_listener context variable and set it to a common listener name - for example, LISTENER_EBS. Repeat this for all instance context files.
  2. If you have applied the AutoConfig db listener patch listed in the Configuration Prerequisites section, perform step 2a, otherwise perform step 2b.
    • 2a. Run AutoConfig on both nodes. This will create listener.ora and tnsnames.ora under the node-specific network directories - i.e. <ORACLE_HOME>/network/admin/<context_name1> and <ORACLE_HOME>/network/admin/<context_name2>.
    • 2b. Edit the AutoConfig listener.ora files and change LISTENER_<hostname> to the common listener name.
  3. Create a local (non-shared) TNS_ADMIN directory, e.g. /etc/local/network_admin.
  4. Create a listener.ora under the local TNS_ADMIN directory on each node:

    node C1:

    ifile=<ORACLE_HOME>/network/admin/<context_name1>/listener.ora

    node C2:

    ifile=<ORACLE_HOME>/network/admin/<context_name2>/listener.ora

  5. Create a tnsnames.ora under the local TNS_ADMIN directory on each node:

    node C1:

    ifile=<ORACLE_HOME>/network/admin/<context_name1>/tnsnames.ora

    node C2:

    ifile=<ORACLE_HOME>/network/admin/<context_name2>/tnsnames.ora

  6. Add the common listener name to the cluster and set TNS_ADMIN to the non-shared directory:

    srvctl add listener -l <common listener name> -o <ORACLE_HOME> -p <port>
    srvctl setenv listener -l <common listener name> -T TNS_ADMIN=<local TNS_ADMIN directory>

Appendix G: Known Issues

  1. After adding a new node, verify that the oracle software owner and group have rwx permissions on the vip, ons, and gsd resources, using crs_getperm. (This needs to be performed if you are unable to add the listener resource due to permission errors.)
    For example: $CRS_HOME/bin/crs_getperm ora.<node>.vip -u <user>
    If the user does not have rwx privileges, set them using crs_setperm as follows:
    $CRS_HOME/bin/crs_setperm ora.<node>.vip -u user:<username>:rwx
  2. Remove the ":" tns alias from tnsnames.ora files manually after running AutoConfig.
     
  3. For NFS servers that restrict the port range, use the insecure option to enable clients other than root to connect; for example, in /etc/exports:
    /sharedirectory *(rw,insecure)
    Alternatively, you can disable the Direct NFS Client as described in the "Disabling Direct NFS Client Oracle Disk Management Control of NFS" section of the Oracle Grid Infrastructure Installation Guide.

Change Log

Date Description
10-Oct-2014
  • Added Patch 6880880 to Section 3.2.
27-Sept-2014
  • Updated for Oracle Database 12.1.0.2
12-Feb-2014
  • Corrected Navigation
03-Oct-2013
  • Added ASM.
20-Sep-2013
  • Published Externally
17-Jul-2013
  • Restructure and flow. Updates throughout.
13-Sep-2012
  • Initial creation.

Knowledge Document 1490850.1  Oracle E-Business Suite Development
Copyright  © 2013, Oracle.

References

NOTE:384248.1 - Sharing The Application Tier File System in Oracle E-Business Suite Release 12
NOTE:745759.1 - Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap
NOTE:881506.1 - Oracle Applications Release 12 with Oracle 11g Release 2
NOTE:265633.1 - ASM Technical Best Practices For 10g and 11gR1 Release
BUG:10427234 - AUTOCONFIG TO OPTIONALLY GENERATE ADDITIONAL ALIASES FOR SCAN LISTENER

NOTE:387859.1 - Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12
 
