Step-By-Step Installation of 9i RAC on IBM AIX
These clusterware products, in fact, perform essentially the same basic function.
Note: This note was created for 9i RAC. The 10g Oracle documentation provides installation instructions for 10g RAC; those instructions can be found on OTN.
Purpose
This document will provide the reader with step-by-step instructions on how to install a cluster, install Oracle Real Application Clusters (RAC) and start a cluster database on IBM AIX HACMP/ES (CRM) 4.4.x. For additional explanation or information on any of these steps, please see the references listed at the end of this document. This note does not cover IBM SP2 platform.
Disclaimer: If there are any errors or issues prior to step 3.3, please contact IBM Support.
The information contained here is as accurate as possible at the time of writing.
1. Configuring the Cluster Hardware
1.1 Minimal Hardware list / System Requirements
For a two node cluster the following would be a minimum recommended hardware list.
Check the RAC/IBM AIX certification matrix for RAC updates on currently supported hardware/software.
1.1.1 Hardware
- IBM servers - two IBM servers capable of running AIX 4.3.3 or 5L 64bit
- For IBM or third-party storage products, Cluster interconnects, Public networks, Switch options, Memory, swap & CPU requirements consult with the operating system vendor or hardware vendor.
- Memory, swap & CPU requirements
- Each server must have a minimum of 512 MB of memory and at least 1 GB of swap space or twice the physical memory, whichever is greater (a combined check is sketched after this list).
- To determine system memory, use:
- $ /usr/sbin/lsattr -E -l sys0 -a realmem
- To determine swap space, use:
- $ /usr/sbin/lsps -a
- 64-bit processors are required.
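The two checks above can be combined into a short ksh sketch (illustrative only; it assumes the lsattr and lsps output formats shown in this note, where realmem is reported in KB and paging-space sizes carry an "MB" suffix):
#!/usr/bin/ksh
# Sketch: compare real memory and total paging space against the minimums above.
realmem_kb=$(/usr/sbin/lsattr -E -l sys0 -a realmem | awk '{print $2}')
realmem_mb=$(( realmem_kb / 1024 ))
swap_mb=$(/usr/sbin/lsps -a | awk 'NR>1 {sub("MB","",$4); total+=$4} END {print total}')
req_swap_mb=$(( realmem_mb * 2 ))
[ $req_swap_mb -lt 1024 ] && req_swap_mb=1024
echo "Memory: ${realmem_mb} MB  Swap: ${swap_mb} MB  Required swap: ${req_swap_mb} MB"
[ $realmem_mb -lt 512 ] && echo "WARNING: less than 512 MB of memory"
[ $swap_mb -lt $req_swap_mb ] && echo "WARNING: swap space below the recommended minimum"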
1.1.2 Software
- When using IBM AIX 4.3.3:
- HACMP/ES CRM 4.4.x
- Only RAW Logical Volumes (Raw Devices) for Database Files supported
- Oracle Server Enterprise Edition 9i Release 1 (9.0.1) or 9i Release 2 (9.2.0)
- When using IBM AIX 5.1 (5L):
- For Database Files residing on RAW Logical Volumes (Raw Devices):
- HACMP/ES CRM 4.4.x
- For Database files residing on Parallel Filesystem (GPFS):
- HACMP/ES 4.4.x (HACMP/CRM is not required)
- GPFS 1.5
- IBM Patch PTF12 and IBM patch IY34917, or IBM Patch PTF13
- Oracle Server Enterprise Edition 9i Release 2 (9.2.0)
- Oracle Server Enterprise Edition 9i for AIX 4.3.3 and 5L are shipped in separate CD packs and include Real Application Clusters (RAC)
1.1.3 Patches
The IBM Cluster nodes might require patches in the following areas:
- IBM AIX Operating Environment patches
- Storage firmware patches or microcode updates
Patching considerations:
- Make sure all cluster nodes have the same patch levels
- Do not install any firmware-related patches without qualified assistance
- Always obtain the most current patch information
- Read all patch README notes carefully.
- For a list of required operating system patches check the sources in Note 211537.1 and contact IBM corporation for additional patch requirements.
To see all currently installed patches use the following command:
% /usr/sbin/instfix -i
To verify installation of a specific patch use:
% /usr/sbin/instfix -ivk <APAR number>
e.g.: % /usr/sbin/instfix -ivk IY30927
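Where several APARs must be verified on every node, a loop along these lines can save time (a sketch only; substitute the APAR numbers required for your AIX level):
# Check a list of APARs on this node; run the same loop on every cluster node.
for apar in IY30927 IY34917
do
  echo "== $apar"
  /usr/sbin/instfix -ik $apar
done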
1.2 Installing Disk Arrays
Follow the procedures for an initial installation of the disk enclosures or arrays, prior to installing the IBM AIX operating system environment and HACMP software. Perform this procedure in conjunction with the procedures in the HACMP for AIX 4.X.1 Installation Guide and your server hardware manual.
1.3 Installing Cluster Interconnect and Public Network Hardware
The cluster interconnect and public network interfaces do not need to be configured prior to the HACMP installation but must be configured and available before the cluster can be configured.
- If not already installed, install host adapters in your cluster nodes. For the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware.
- Install the transport cables (and optionally, transport junctions), depending on how many nodes are in your cluster:
- A cluster with more than two nodes requires two cluster transport junctions. These transport junctions are Ethernet-based switches (customer-supplied).
You install the cluster software and configure the interconnect after you have installed all other hardware.
2.1 IBM HACMP/ES Software Installation
The HACMP/ES 4.X.X installation and configuration process is completed in several major steps. The general process is:
- install hardware
- install the IBM AIX operating system software
- install the latest IBM AIX maintenance level and required patches
- install HACMP/ES 4.X.X on each node
- install HACMP/ES required patches
- configure the cluster topology
- synchronize the cluster topology
- configure cluster resources
- synchronize cluster resources
Follow the instructions in the HACMP for AIX 4.X.X Installation Guide for detailed instructions on installing the required HACMP packages. The required/suggested packages include the following:
- cluster.adt.es.client.demos
- cluster.adt.es.client.include
- cluster.adt.es.server.demos
- cluster.clvm.rte HACMP for AIX Concurrent
- cluster.cspoc.cmds HACMP CSPOC commands
- cluster.cspoc.dsh HACMP CSPOC dsh and perl
- cluster.cspoc.rte HACMP CSPOC Runtime Commands
- cluster.es.client.lib ES Client Libraries
- cluster.es.client.rte ES Client Runtime
- cluster.es.client.utils ES Client Utilities
- cluster.es.clvm.rte ES for AIX Concurrent Access
- cluster.es.cspoc.cmds ES CSPOC Commands
- cluster.es.cspoc.dsh ES CSPOC dsh and perl
- cluster.es.cspoc.rte ES CSPOC Runtime Commands
- cluster.es.hc.rte ES HC Daemon
- cluster.es.server.diag ES Server Diags
- cluster.es.server.events ES Server Events
- cluster.es.server.rte ES Base Server Runtime
- cluster.es.server.utils ES Server Utilities
- cluster.hc.rte HACMP HC Daemon
- cluster.msg.En_US.cspoc HACMP CSPOC Messages - U.S.
- cluster.msg.en_US.cspoc HACMP CSPOC Messages - U.S.
- cluster.msg.en_US.es.client
- cluster.msg.en_US.es.server
- cluster.msg.en_US.haview HACMP HAView Messages - U.S.
- cluster.vsm.es ES VSM Configuration Utility
- cluster.man.en_US.client.data
- cluster.man.en_US.cspoc.data
- cluster.man.en_US.es.data ES Man Pages - U.S. English
- cluster.man.en_US.server.data
- rsct.basic.hacmp RS/6000 Cluster Technology
- rsct.basic.rte RS/6000 Cluster Technology
- rsct.basic.sp RS/6000 Cluster Technology
- rsct.clients.hacmp RS/6000 Cluster Technology
- rsct.clients.rte RS/6000 Cluster Technology
- rsct.clients.sp RS/6000 Cluster Technology
You can verify the installed HACMP software with the "clverify" command.
# /usr/sbin/cluster/diag/clverify
At the "clverify>" prompt enter "software" then at the "clverify.software>" prompt enter "lpp". You should see a message similar to:
Checking AIX files for HACMP for AIX-specific modifications...
*/etc/inittab not configured for HACMP for AIX.
If IP Address Takeover is configured, or the Cluster Manager is to be started on boot, then /etc/inittab must contain the proper HACMP for AIX entries.
Command completed.
--------- Hit Return To Continue ---------
Contact IBM support if there were any failure messages or problems executing the "clverify" command.
2.2 Configuring the Cluster Topology
Using the "smit hacmp" command:
# smit hacmp
Note: The following is a generic HACMP configuration to be used as an example only. See the HACMP installation and planning documentation for specific examples. All questions concerning the configuration of your cluster should be directed to IBM Support. This configuration does not include an example of an IP takeover network. "smit" fastpaths are used to navigate the "smit hacmp" configuration menus; each of these configuration screens is reachable from "smit hacmp". All configuration is done from one node and then synchronized to the other participating nodes.
Add the cluster definition:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Cluster -> Add a Cluster Definition
Fastpath:
# smit cm_config_cluster.add
[smit screen: Add a Cluster Definition]
The "Cluster ID " and "Cluster Name " are arbitrary. The "Cluster ID " must be a valid number between 0 and 99999 and the "Cluster Name " can be any alpha string up to 32 characters in length.
Configuring Nodes:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Nodes -> Add Cluster Nodes
FastPath:
# smit cm_config_nodes.add
[smit screen: Add Cluster Nodes]
"Node Names " should be the hostnames of the nodes. They must be alpha numeric and contain no more than 32 characters. All nodes participating in the cluster must be entered on this screen separated by a space.
Next to be configured are the network adapters. This example will utilize two ethernet adapters on each node as well as one RS232 serial port on each node for heartbeat.
Node Name | Address     | IP Label (/etc/hosts) | Type
node1     | 192.168.0.1 | node1srvc             | service
node1     | 192.168.1.1 | node1stby             | standby
node1     | /dev/tty0   |                       | serial
node2     | 192.168.0.2 | node2srvc             | service
node2     | 192.168.1.2 | node2stby             | standby
node2     | /dev/tty0   |                       | serial
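The IP labels in the table are assumed to resolve identically on every node, typically through "/etc/hosts" entries such as the following (addresses taken from the table above):
192.168.0.1   node1srvc
192.168.1.1   node1stby
192.168.0.2   node2srvc
192.168.1.2   node2stby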
The following screens are configuration settings needed to configure the above networks into the cluster configuration:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Nodes -> Add an Adapter
FastPath:
# smit cm_config_adapters.add
[smit screen: Add an Adapter]
It is important to note that the "Adapter IP Label " must match what is in the "/etc/hosts" file otherwise the adapter will not map to a valid IP address and the cluster will not synchronize. The "Network Name " is an arbitrary name for the network configuration. All the adapters in this ether configuration should have the same "Network Name ". This name is used to determine what adapters will be used in the event of an adapter failure.
[smit screens: Add an Adapter - repeated for each of the remaining ethernet adapters]
The following is the serial configuration:
[smit screens: Add an Adapter - one for the serial (tty) adapter on each node]
Since the serial line is not on the same network as the ethernet adapters, a different "Network Name " is used; both serial adapters share that same network name.
Use "smit mktty" to configure the RS232 adapters:
# smit mktty
[smit screen: Add a TTY]
Be sure that "Enable LOGIN " is set to the default of "disable". The "PORT number " is the value that is to be used in the /dev/tt# where "# " is the port number. So if you defined this as "0 " the device would be "/dev/tty0".
2.3 Synchronizing the Cluster Topology
After the topology is configured it needs to be synchronized. The synchronization performs topology sanity checks and pushes the configuration data to each of the nodes in the cluster configuration. For the synchronization to work, user equivalence must be configured for the root user. There are several ways to do this; one way is to create a ".rhosts" file in the "/" directory on each node.
Example of a ".rhosts" file:
node1 root
node2 root
Be sure the permissions on the "/.rhosts" file are 600.
# chmod 600 /.rhosts
Use a remote command such as "rcp" to test equivalence from each node:
From node1:
# rcp /etc/group node2:/tmp
From node2:
# rcp /etc/group node1:/tmp
View your IBM operating system documentation for more information, or contact IBM support if you have any questions or problems setting up user equivalence for the root user.
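To test equivalence to every node in one pass, a small loop such as the following can be run from each node in turn (a sketch; node names are those used in this example):
# Any password prompt or "Permission denied" indicates root equivalence is not set up.
for node in node1 node2
do
  rcp /etc/group ${node}:/tmp && echo "rcp to ${node} OK"
done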
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Synchronize Cluster Topology
FastPath:
# smit configchk.dialog
[smit screen: Synchronize Cluster Topology]
2.4 Configuring Cluster Resources
In a RAC configuration only one resource group is required. This resource group is a concurrent group for the shared volume group. The following are the steps to add a concurrent resource group for a shared volume group:
First there needs to be a volume group that is shared between the nodes.
SHARED LOGICAL VOLUME MANAGER , SHARED CONCURRENT DISKS ( NO VSD )
The two instances of the same cluster database have a concurrent access on the same external disks. This is real concurrent access and not a shared one like in the VSD environment. Because several instances access at the same time the same files and data, locks have to be managed. These locks, at the CLVM layer (including memory cache), are managed by HACMP.
1) Check if the target disks are physically linked to the two machines of the cluster, and seen by both.
Type the lspv command on both machines.
Note: the hdisk number can be different, depending on the other nodes' disk configurations. Use the second field (PVID) of the lspv output to be sure you are dealing with the same physical disk on both hosts. Although hdisk-number inconsistency may not be a problem, IBM suggests using ghost disks to ensure hdisk numbers match between the nodes. Contact IBM for further information on this topic.
2.4.1 Create volume groups to be shared concurrently on one node
# smit vg
Select "Add a Volume Group "
Type or select values in entry fields.
[smit screen: Add a Volume Group]
The "PHYSICAL VOLUME names " must be physical disks that are shared between the nodes. We do not want the volume group automatically activated at system startup because HACMP activates it. Also "Auto-varyon in Concurrent Mode? " should be set to "no " because HACMP varies it on in concurrent mode.
You must choose the major number explicitly, so that the volume group has the same major number on all the nodes (before choosing this number, be sure it is free on all the nodes).
To check all defined major numbers, type:
% ls -al /dev/*
crw-rw---- 1 root system 57, 0 Aug 02 13:39 /dev/oracle_vg
The major number for the oracle_vg volume group is 57. Ensure that 57 is available on all the other nodes and is not used by another device. If it is free, use the same number on all nodes.
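To list the major numbers already in use on a node (run this on every node before picking one), something like the following can be used; where installed, the AIX "lvlstmajor" command prints the free major numbers directly. Treat this as a sketch:
# Major numbers currently in use (taken from the "57," style field of ls -l):
ls -l /dev | awk '$5 ~ /,$/ { sub(",", "", $5); print $5 }' | sort -n | uniq
# Free major numbers, if lvlstmajor is available:
/usr/sbin/lvlstmajor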
On this volume group, create all the logical volumes and file systems you need for the cluster database.
2.4.2 Create Shared RAW Logical Volumes if not using GPFS. See section 2.4.6 for details about GPFS.
mklv -y'db_name_cntrl1_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_cntrl2_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_system_400m' -w'n' -s'n' -r'n' usupport_vg 13 hdisk5
mklv -y'db_name_users_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_drsys_90m' -w'n' -s'n' -r'n' usupport_vg 3 hdisk5
mklv -y'db_name_tools_12m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_temp_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_undotbs1_312m' -w'n' -s'n' -r'n' usupport_vg 10 hdisk5
mklv -y'db_name_undotbs2_312m' -w'n' -s'n' -r'n' usupport_vg 10 hdisk5
mklv -y'db_name_log11_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log12_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log21_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log22_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_indx_70m' -w'n' -s'n' -r'n' usupport_vg 3 hdisk5
mklv -y'db_name_cwmlite_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_example_160m' -w'n' -s'n' -r'n' usupport_vg 5 hdisk5
mklv -y'db_name_oemrepo_20m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_spfile_5m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_srvmconf_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
Substitute your database name for the "db_name" value. The volume group was created with a partition size of 32 megabytes. The seventh field is the number of partitions that make up the logical volume; for example, if "db_name_cntrl1_110m" needs to be 110 megabytes, 4 partitions are needed.
The raw partitions are created in the "/dev" directory, and it is the character devices that will be used. The command "mklv -y'db_name_cntrl1_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5" creates two files:
/dev/db_name_cntrl1_110m
/dev/rdb_name_cntrl1_110m
Change the permissions on the character devices so the software owner owns them:
# chown oracle:dba /dev/rdb_name*
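A quick check that the logical volumes were created with the expected number of partitions and that the character devices now belong to the software owner (sketch; volume group and naming follow the example above):
lsvg -l usupport_vg        # each LV should show the expected LP/PP count
ls -l /dev/rdb_name_*      # character devices should be owned by oracle:dba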
2.4.3 Import the Volume Group on to the Other Nodes
Use "importvg" to import the oracle_vg volume group on all of the other nodes
On the first machine, type:
% varyoffvg oracle_vg
On the other nodes, import the definition of the volume group using "smit vg " :
Select "Import a Volume Group "
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[smit screen: Import a Volume Group]
It is possible that the physical volume name (hdisk) could be different on each node. Check the PVID of the disk using "lspv ", and be sure to pick the hdisk that has the same PVID as the disk used to create the volume group on the first node. Also make sure the same major number is used; this number has to be unused (free) on all the nodes. The "Make default varyon of VG Concurrent? " option should be set to "no". The volume group was created concurrent capable, so the option "Make this VG Concurrent Capable? " can be left at "no". The equivalent command line for importing the volume group, after varying it off on the node where it was originally created, would be:
% importvg -V <major number> -y <vg name> <physical volume>
% chvg -an <vg name>
% varyoffvg <vg name>
After importing the volume group onto each node be sure to change the ownership of the character devices to the software owner:
# chown oracle:dba /dev/rdb_name*
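Put together, importing the example volume group on a second node might look like the following sketch (the major number 57, the volume group name and the hdisk number are taken from the earlier examples and are illustrative only):
lspv                                  # pick the hdisk whose PVID matches node1's disk
importvg -V 57 -y oracle_vg hdisk5    # same major number; the hdisk number may differ
chvg -an oracle_vg                    # do not activate the VG automatically at boot
varyoffvg oracle_vg                   # leave activation to HACMP (concurrent varyon)
chown oracle:dba /dev/rdb_name*       # software owner must own the character devices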
2.4.4 Add a Concurrent Cluster Resource Group
The shared resource in this example is "oracle_vg". To create the concurrent resource group that will manage "oracle_vg" do the following:
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Define Resource Groups -> Add a Resource Group
FastPath:
# smit cm_add_grp
[smit screen: Add a Resource Group]
The "Resource Group Name " is arbitrary and is used when selecting the resource group for configuration. Because we are configuring a shared resources the "Node Relationship " is "concurrent" meaning a group of nodes that will share the resource. "Participating Node Names " is a space separated list of the nodes that will be sharing the resource.
2.4.5 Configure the Concurrent Cluster Resource Group
Once the resource group is added it can then be configured with:
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Change/Show Resources for a Resource Group
FastPath:
# smit cm_cfg_res.select
[smit screen: Configure Resources for a Resource Group]
Note that the settings for "Resource Group Name ", "Node Relationship " and "Participating Node Names " come from the data entered in the previous menu. "Concurrent Volume groups " needs to be a pre-created volume group on shared storage. The "Raw Disk PVIDs " are the physical volume IDs for each of the disks that make up the "Concurrent Volume groups ". It is important to note that a resource group can manage multiple concurrent volume groups; in that case, separate each volume group name with a space, and the "Raw Disk PVIDs " will be a space-delimited list of all the physical volume IDs that make up the concurrent volume group list. Alternatively, each volume group can be configured in its own concurrent resource group.
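The volume group name and the PVIDs that this screen asks for can be collected with standard LVM commands (names as in the earlier examples):
lsvg -p oracle_vg     # hdisks that make up the concurrent volume group
lspv                  # the second column of each matching hdisk is its PVID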
2.4.6 Creating Parallel Filesystems (GPFS)
With AIX 5.1 (5L) you can also place your files on GPFS (RAW Logical Volumes are not a requirement when using GPFS). In this case, create a GPFS filesystem capable of holding all required database files, control files and log files.
2.5 Synchronizing the Cluster Resources
After configuring the resource group a resource synchronization is needed.
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Synchronize Cluster Resources
FastPath:
# smit clsyncnode.dialog
[smit screen: Synchronize Cluster Resources]
Just keep the defaults.
2.6 Joining Nodes Into the Cluster
After the cluster topology and resources are configured the nodes can join the cluster. It is important to start one node at a time unless using C-SPOC (Cluster-Single Point of Control). For more information on using C-SPOC consult IBM's HACMP-specific documentation. The use of C-SPOC will not be covered in this document.
Start cluster services by doing the following:
Smit HACMP -> Cluster Services -> Start Cluster Services
FastPath:
# smit clstart.dialog
[smit screen: Start Cluster Services]
Setting "Start now, on system restart or both " to "now " will start the HACMP daemons immediately. "restart " will update the "/etc/inittab" with an entry to start the daemons at reboot and "both " will do exactly that, update the "/etc/inittab" and start the daemons immediately. "BROADCAST message at startup? " can either be "true " or "false ". If set to "true " wall type message will be displayed when the node is joining the cluster. "Startup Cluster Lock Services? " should be set to "false " for a RAC configuration. Setting this parameter to "true " will prevent the cluster from working but the added daemon is not used. If "clstat" is going to be used to to monitor the cluster the "Startup Cluster Information Daemon?" will need to be set to "true ".
View the "/etc/hacmp.out" file for startup messages. When you see something similar to the following it is safe to start the cluster services on the other nodes:
May 23 09:31:43 EVENT COMPLETED: node_up_complete node1
When joining nodes into the cluster the other nodes will report a successful join in their "/tmp/hacmp.out" files:
May 23 09:34:11 EVENT COMPLETED: node_up_complete node1
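From the command line the same event can be watched or confirmed with, for example:
tail -f /tmp/hacmp.out                  # watch cluster events as they are processed
grep node_up_complete /tmp/hacmp.out    # confirm the join completed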
2.7 Basic Cluster Administration
The "/tmp/hacmp.out" is the best place to look for cluster information. "clstat" can also be used to verify cluster health. The "clstat" program can take a while to update with the latest cluster information and at times does not work at all. Also you must have the "Startup Cluster Information Daemon? " set to "true " when starting cluster services. Use the following command to start "clstat":
# /usr/es/sbin/cluster/clstat
[clstat display: HACMP for AIX Cluster Status Monitor]
One other way to check the cluster status is by querying the "snmpd" daemon with "snmpinfo":
# /usr/sbin/snmpinfo -m get -o /usr/es/sbin/cluster/hacmp.defs -v ClusterSubstate.0
This should return "32":
clusterSubState.0 = 32
If other values are returned from any node consult your IBM HACMP documentation or contact IBM support.
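Because root user equivalence is already in place (section 2.3), the same query can be issued against every node from one place; a sketch, using the example node names:
for node in node1 node2
do
  echo "== $node"
  rsh $node /usr/sbin/snmpinfo -m get -o /usr/es/sbin/cluster/hacmp.defs -v ClusterSubstate.0
done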
You can get a quick view of the HACMP specific daemons with:
Smit HACMP -> Cluster Services -> Show Cluster Services
[smit COMMAND STATUS screen listing the HACMP daemons and their status]
Starting & Stopping Cluster Nodes
To join and evict nodes from the cluster use:
Smit HACMP -> Cluster Services -> Start Cluster Services
See section 2.6 for more information on joining a node into the cluster.
Use the following to evict a node from the cluster:
Smit HACMP -> Cluster Services -> Stop Cluster Services
FastPath:
# smit clstop.dialog
[smit screen: Stop Cluster Services]
See section 2.6 "Joining Nodes Into the Cluster" for and explanation of "Stop now, on system restart or both " and "BROADCAST cluster shutdown? ". The "Shutdown mode" determines whether or not resources are going to move between nodes if a shutdown occurs. "forced " is new with 4.4.1 of HACMP and will leave applications running that are controlled by HACMP events when the shutdown occurs. "graceful " will bring everything down but cascading and rotating resources are not switched where as with "graceful with takeover " these resources will be switched at shutdown.
Log Files for HACMP/ES
All cluster reconfiguration information during cluster startup and shutdown goes into the "/tmp/hacmp.out".
3.0 Preparing for the installation of RAC
The Real Application Clusters installation process includes the following major tasks.
- Configure the shared disks and UNIX preinstallation tasks.
- Run the Oracle Universal Installer to install the Oracle9i Enterprise Edition and the Oracle9i Real Application Clusters software.
- Create and configure your database.
3.1 Configure the shared disks and UNIX preinstallation tasks
3.1.1 Configure the shared disks
Real Application Clusters requires that each instance be able to access a set of unformatted devices on a shared disk subsystem if GPFS is not being used. These shared disks are also referred to as raw devices. If your platform supports an Oracle-certified cluster file system, however, you can store the files that Real Application Clusters requires directly on the cluster file system.
Note: If you are using Parallel Filesystem (GPFS), you can store the files that Real Application Clusters requires directly on the cluster file system.
The Oracle instances in Real Application Clusters write data onto the raw devices to update the control file, server parameter file, each datafile, and each redo log file. All instances in the cluster share these files.
The Oracle instances in the RAC configuration write information to raw devices defined for:
- The control file
- The spfile.ora
- Each datafile
- Each ONLINE redo log file
- Server Manager (SRVM) configuration information
It is therefore necessary to define raw devices for each of these categories of file. The Oracle Database Configuration Assistant (DBCA) will create a seed database expecting the following configuration:
Raw Volume | File Size | Sample File Name |
SYSTEM tablespace | 400 Mb | db_name_raw_system_400m |
USERS tablespace | 120 Mb | db_name_raw_users_120m |
TEMP tablespace | 100 Mb | db_name_raw_temp_100m |
UNDOTBS tablespace per instance | 312 Mb | db_name_raw_undotbsx_312m |
CWMLITE tablespace | 100 Mb | db_name_raw_cwmlite_100m |
EXAMPLE | 160 Mb | db_name_raw_example_160m |
OEMREPO | 20 Mb | db_name_raw_oemrepo_20m |
INDX tablespace | 70 Mb | db_name_raw_indx_70m |
TOOLS tablespace | 12 Mb | db_name_raw_tools_12m |
DRSYS tablespace | 90 Mb | db_name_raw_drsys_90m |
First control file | 110 Mb | db_name_raw_controlfile1_110m |
Second control file | 110 Mb | db_name_raw_controlfile2_110m |
Two ONLINE redo log files per instance | 120 Mb x 2 | db_name_thread_lognumber_120m |
spfile.ora | 5 Mb | db_name_raw_spfile_5m |
srvmconfig | 100 Mb | db_name_raw_srvmconf_100m |
Note: Automatic Undo Management requires an undo tablespace per instance therefore you would require a minimum of 2 tablespaces as described above. By following the naming convention described in the table above, raw partitions are identified with the database and the raw volume type (the data contained in the raw volume). Raw volume size is also identified using this method.
Note: In the sample names listed in the table, the string db_name should be replaced with the actual database name, thread is the thread number of the instance, and lognumber is the log number within a thread.
On the node from which you run the Oracle Universal Installer, create an ASCII file identifying the raw volume objects as shown above. The DBCA requires that these objects exist during installation and database creation. When creating the ASCII file content for the objects, name them using the format:
database_object=raw_device_file_path
When you create the ASCII file, separate the database objects from the paths with equals (=) signs as shown in the example below:
system1=/dev/rdb_name_system_400m
spfile1=/dev/rdb_name_spfile_5m
users1=/dev/rdb_name_users_120m
temp1=/dev/rdb_name_temp_100m
undotbs1=/dev/rdb_name_undotbs1_312m
undotbs2=/dev/rdb_name_undotbs2_312m
example1=/dev/rdb_name_example_160m
cwmlite1=/dev/rdb_name_cwmlite_100m
indx1=/dev/rdb_name_indx_70m
tools1=/dev/rdb_name_tools_12m
drsys1=/dev/rdb_name_drsys_90m
control1=/dev/rdb_name_cntrl1_110m
control2=/dev/rdb_name_cntrl2_110m
redo1_1=/dev/rdb_name_log11_120m
redo1_2=/dev/rdb_name_log12_120m
redo2_1=/dev/rdb_name_log21_120m
redo2_2=/dev/rdb_name_log22_120m
You must specify that Oracle should use this file to determine the raw device volume names by setting the following environment variable, where filename is the name of the ASCII file that contains the entries shown in the example above:
csh:
setenv DBCA_RAW_CONFIG filename
ksh, bash or sh:
DBCA_RAW_CONFIG=filename; export DBCA_RAW_CONFIG
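For example, if the mapping file shown above were saved as /oracle/dbca_raw_config (a hypothetical path), the variable would be set as follows (ksh/sh syntax) before running DBCA:
DBCA_RAW_CONFIG=/oracle/dbca_raw_config
export DBCA_RAW_CONFIG
echo $DBCA_RAW_CONFIG        # verify the setting before starting dbca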