Metalink: How to Configure DM-Multipathing
In this Document:
Run the scsi_id(8) command against the Clusterware devices from one cluster node to obtain their unique device identifiers. When running the scsi_id(8) command with the -s argument, the device path and name passed should be relative to the sysfs directory /sys, i.e. /block/.
2. Configure LUNs for ASM.
5. To make the disk available, enter the required commands.
8. Ensure that the devices can be seen in /dev/mapper.
10. Set the ASMLIB configuration parameter ORACLEASM_SCANORDER in /etc/sysconfig/oracleasm to force ASMLIB to bind to the multipath devices.
Community Discussions: Storage Management MOS Community
ID: 1365511.1
Solution
Before udev can be configured to explicitly name devices, scsi_id(8) must first be configured to return their device identifiers. SCSI commands are sent directly to the device via the SG_IO ioctl interface. Modify the /etc/scsi_id.config file: add an 'options=-g' parameter/value pair, or replace an existing 'options=-b' pair with it, for example:
vendor="ATA",options=-p 0x80
options=-g
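The edit above can be sketched from the shell. This works on a temporary copy so it is safe to run anywhere; on a real host you would back up and edit /etc/scsi_id.config itself:

```shell
# Sketch: flip scsi_id(8) from blacklist (-b) to whitelist (-g) mode.
# A temporary file stands in for /etc/scsi_id.config here.
cfg=$(mktemp)
printf 'vendor="ATA",options=-p 0x80\noptions=-b\n' > "$cfg"  # sample original
sed -i 's/^options=-b$/options=-g/' "$cfg"
grep '^options=' "$cfg"   # prints: options=-g
rm -f "$cfg"
```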
1b. List all SCSI devices:
Running scsi_id for each /block/sd[a-h] entry (for example, for /dev/sda: scsi_id -g -s /block/sda) generates output such as:
SATA HUA721075KLA330
GTA260P8H8893E
360060e80045b2b0000005b2b000006c4
360060e80045b2b0000005b2b000006d8
360060e80045b2b0000005b2b00001007
360060e80045b2b0000005b2b00001679
360060e80045b2b0000005b2b0000163c
The first two SCSI ids represent the local disks (/dev/sda and /dev/sdb). The remaining five represent the SCSI ids of the fibre channel attached LUNs. The output string that scsi_id generates for the fibre LUNs matches their World Wide Identifier (WWID). A simple example would be a disk connected to two fibre channel ports: should one controller, port or switch fail, the operating system can route I/O through the remaining controller transparently, with no changes visible to the applications other than perhaps incremental latency.
1c. Obtain Clusterware device unique SCSI identifiers:
...
### sdh: 360060e80045b2b0000005b2b0000163c
### sdh1:
### sdi: 360060e80045b2b0000005b2b0000163c
### sdi1:
...
### sdk: 360060e80045b2b0000005b2b00001679
### sdk1:
...
### sdm: 360060e80045b2b0000005b2b000006c4
### sdm1:
### sdn: 360060e80045b2b0000005b2b000006d8
### sdn1:
### sdo: 360060e80045b2b0000005b2b00001007
### sdo1:
...
### sdz: 360060e80045b2b0000005b2b00001679
### sdz1:
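Devices reporting the same identifier (e.g. sdh and sdi above) are separate paths to one LUN. A minimal sketch of spotting such pairs, with sample identifiers embedded in a here-doc (on a live system you would feed it the scsi_id output instead):

```shell
# Count how many paths report each WWID; more than one means the LUN is
# reachable via multiple routes and is a multipathing candidate.
sort <<'EOF' | uniq -c | awk '$1 > 1 {print $2 ": " $1 " paths"}'
360060e80045b2b0000005b2b0000163c
360060e80045b2b0000005b2b0000163c
360060e80045b2b0000005b2b00001679
360060e80045b2b0000005b2b00001679
360060e80045b2b0000005b2b000006c4
EOF
# -> 360060e80045b2b0000005b2b0000163c: 2 paths
# -> 360060e80045b2b0000005b2b00001679: 2 paths
```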
The SCSI identifiers can also be listed from the persistent-name symlinks, for example (# ls -l /dev/disk/by-id):
lrwxrwxrwx 1 root root 9 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part7 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006b0 -> ../../sdaa
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006b0-part1 -> ../../sdl1
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006c4 -> ../../sdab
lrwxrwxrwx 1 root root 11 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006c4-part1 -> ../../sdab1
lrwxrwxrwx 1 root root 9 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006d8 -> ../../sdn
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006d8-part1 -> ../../sdn1
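The symlink-to-device resolution shown above can be reproduced with readlink(1); a sketch using a throwaway directory in place of /dev/disk/by-id, so it runs safely on any machine:

```shell
# Create a stand-in for /dev/disk/by-id with one persistent-name symlink,
# then resolve each scsi-* link back to its kernel device name.
d=$(mktemp -d)
ln -s ../../sda "$d/scsi-3600508e000000000158d6d2169801c0e"
for link in "$d"/scsi-*; do
    printf '%s -> %s\n' "${link##*/}" "$(readlink "$link")"
done
# -> scsi-3600508e000000000158d6d2169801c0e -> ../../sda
rm -rf "$d"
```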
1d. Run fdisk to create partitions for ASM disks:
(System Administrator's Task)
Disk /dev/sdi: 590.5 GB, 590565212160 bytes
255 heads, 63 sectors/track, 71798 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdi1 1 130543 1048586616 83 Linux
fdisk MUST be run on each respective device: fdisk /dev/?
Example:
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1011, default 1): [use default]
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011): [use default]
Using default value 1011
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
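The interactive session above can also be driven by piping the same answer sequence to fdisk. This is destructive and /dev/sdX below is only a placeholder, so treat it as a sketch:

```shell
# Feed fdisk the answers used above: n(ew), p(rimary), partition 1,
# default first and last cylinders, then w(rite).
# DESTRUCTIVE - substitute the real device only after triple-checking it.
printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdX
```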
1e. Run the fdisk(8) and/or 'cat /proc/partitions' commands to ensure the devices are visible (if Real Application Clusters (RAC), ensure the Clusterware devices are visible on each node). For example:
major minor #blocks name
8 0 142577664 sda
8 1 104391 sda1
8 2 52428127 sda2
8 3 33551752 sda3
8 4 1 sda4
8 5 26218048 sda5
8 6 10482381 sda6
8 7 8385898 sda7
8 16 263040 sdb
8 17 262305 sdb1
8 32 263040 sdc
8 33 262305 sdc1
8 48 263040 sdd
8 49 262305 sdd1
8 64 263040 sde
8 65 262305 sde1
8 80 263040 sdf
8 81 262305 sdf1
8 96 263040 sdg
8 97 262305 sdg1
8 112 576723840 sdh
8 113 576709402 sdh1
8 128 576723840 sdi
8 129 576709402 sdi1
8 144 576723840 sdj
8 145 576709402 sdj1
8 160 52429440 sdk
8 161 52428096 sdk1
8 176 524294400 sdl
8 177 524281275 sdl1
8 192 524294400 sdm
8 193 524281275 sdm1
...
65 208 262147200 sdad
65 209 262132605 sdad1
65 224 262147200 sdae
65 225 262132605 sdae1
253 0 524294400 dm-0
253 1 524294400 dm-1
253 2 524294400 dm-2
253 3 262147200 dm-3
253 4 262147200 dm-4
253 5 263040 dm-5
253 6 263040 dm-6
253 7 263040 dm-7
253 8 263040 dm-8
253 9 263040 dm-9
253 10 263040 dm-10
253 11 576723840 dm-11
253 12 576723840 dm-12
253 13 576723840 dm-13
253 14 52429440 dm-14
253 15 524281275 dm-15
253 16 262305 dm-16
253 17 524281275 dm-17
253 19 262132605 dm-19
253 20 524281275 dm-20
2. Configure LUNs for ASM:
(System Administrator's Task)
2a. Verify Multipath Devices:
Once multipathing has been configured and the multipathd service has been started, the multipathed devices should be available.
3. Automatic Storage Management Library (ASMLIB) setup:
For detailed multipathing commands, please refer to http://magazine.redhat.com/2008/07/17/tips-and-tricks-how-do-i-setup-device-mapper-multipathing-in-red-hat-enterprise-linux-4/
Update the kernel partition table with the new partition as follows (if Real Application Clusters (RAC), do this on each node), e.g. with partprobe(8). Then verify that all multipaths are active by executing multipath -ll:
360060e80045b2b0000005b2b000006c4 dm-1 HITACHI,OPEN-V*20
[size=500G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:17 sdab 65:176 [active][ready]
 \_ 1:0:0:17 sdm 8:192 [active][ready]
360060e80045b2b0000005b2b000006d8 dm-2 HITACHI,OPEN-V*20
[size=500G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:18 sdac 65:192 [active][ready]
 \_ 1:0:0:18 sdn 8:208 [active][ready]
360060e80045b2b0000005b2b00001679 dm-14 HITACHI,OPEN-V
[size=50G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:9 sdk 8:160 [active][ready]
 \_ 3:0:0:9 sdz 65:144 [active][ready]
360060e80045b2b0000005b2b0000312e dm-9 HITACHI,OPEN-V
[size=257M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:4 sdf 8:80 [active][ready]
 \_ 3:0:0:4 sdu 65:64 [active][ready]
360060e80045b2b0000005b2b00001007 dm-3 HITACHI,OPEN-V*5
[size=250G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:19 sdad 65:208 [active][ready]
 \_ 1:0:0:19 sdo 8:224 [active][ready]
360060e80045b2b0000005b2b0000163c dm-20 HITACHI,OPEN-V*11 ---> multipathed
[size=550G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:7 sdh 8:128 [active][ready] ---> Required to be [active][ready]
 \_ 3:0:0:7 sdi 8:112 [active][ready] ---> Required to be [active][ready]
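Saved 'multipath -ll' output can be checked mechanically for the required [active][ready] state. A sketch (the awk pattern and the embedded one-map sample are illustrative) that counts ready paths per map:

```shell
# Remember the current map name from its header line, then count every
# path line carrying [active][ready]; each LUN here should show 2 paths.
awk '/^36/ {map=$1}
     /\[active\]\[ready\]/ {n[map]++}
     END {for (m in n) print m, n[m], "active paths"}' <<'EOF'
360060e80045b2b0000005b2b0000163c dm-20 HITACHI,OPEN-V*11
[size=550G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:7 sdh 8:128 [active][ready]
 \_ 3:0:0:7 sdi 8:112 [active][ready]
EOF
# -> 360060e80045b2b0000005b2b0000163c 2 active paths
```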
In fact, various device names are created and used to refer to multipathed devices, for example:
360060e80045b2b0000005b2b000006b0 (253, 0)
360060e80045b2b0000005b2b000006b0p1 (253, 15)
360060e80045b2b0000005b2b000006c4 (253, 1)
360060e80045b2b0000005b2b000006c4p1 (253, 17)
360060e80045b2b0000005b2b000006d8 (253, 2)
360060e80045b2b0000005b2b000006d8p1 (253, 26)
360060e80045b2b0000005b2b0000163c (253, 11)
360060e80045b2b0000005b2b0000163cp1 (253, 20)
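In the (major, minor) pairs above, the minor number is what appears in the dm-N node name. A sketch of that mapping over two embedded sample lines:

```shell
# Strip the punctuation from "name (maj, min)" rows, then print each
# multipath name against its dm-<minor> node.
awk '{gsub(/[(),]/, ""); print $1, "-> dm-" $3}' <<'EOF'
360060e80045b2b0000005b2b000006b0 (253, 0)
360060e80045b2b0000005b2b0000163cp1 (253, 20)
EOF
# -> 360060e80045b2b0000005b2b000006b0 -> dm-0
# -> 360060e80045b2b0000005b2b0000163cp1 -> dm-20
```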
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006b0 -> ../dm-0
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006b0p1 -> ../dm-15
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006c4 -> ../dm-1
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006c4p1 -> ../dm-17
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006d8 -> ../dm-2
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006d8p1 -> ../dm-26
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b0000163c -> ../dm-11
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b0000163cp1 -> ../dm-20
brw-rw---- 1 root disk 253, 0 Jun 27 07:17 360060e80045b2b0000005b2b000006b0
brw-rw---- 1 root disk 253, 15 Jun 27 07:17 360060e80045b2b0000005b2b000006b0p1
brw-rw---- 1 root disk 253, 1 Jun 27 07:17 360060e80045b2b0000005b2b000006c4
brw-rw---- 1 root disk 253, 17 Jun 27 07:17 360060e80045b2b0000005b2b000006c4p1
brw-rw---- 1 root disk 253, 11 Jun 27 07:17 360060e80045b2b0000005b2b0000163c
brw-rw---- 1 root disk 253, 20 Jun 27 07:17 360060e80045b2b0000005b2b0000163cp1
/dev:
drwxr-xr-x 3 root root 60 Jun 27 07:17 bus
lrwxrwxrwx 1 root root 4 Jun 27 07:17 cdrom -> scd0
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrom-hda -> hda
lrwxrwxrwx 1 root root 4 Jun 27 07:17 cdrom-sr0 -> scd0
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrw -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrw-hda -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdwriter -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdwriter-hda -> hda
crw------- 1 root root 5, 1 Jun 27 07:18 console
lrwxrwxrwx 1 root root 11 Jun 27 07:17 core -> /proc/kcore
drwxr-xr-x 10 root root 200 Jun 27 07:17 cpu
drwxr-xr-x 6 root root 120 Jun 27 07:17 disk
brw-rw---- 1 root root 253, 0 Jun 27 07:17 dm-0
brw-rw---- 1 root root 253, 1 Jun 27 07:17 dm-1
brw-rw---- 1 root root 253, 10 Jun 27 07:17 dm-10
brw-rw---- 1 root root 253, 11 Jun 27 07:17 dm-11
brw-rw---- 1 root root 253, 12 Jun 27 07:17 dm-12
brw-rw---- 1 root root 253, 13 Jun 27 07:17 dm-13
brw-rw---- 1 root root 253, 14 Jun 27 07:17 dm-14
brw-rw---- 1 root root 253, 15 Jun 27 07:17 dm-15
brw-rw---- 1 root root 253, 16 Jun 27 07:17 dm-16
brw-rw---- 1 root root 253, 17 Jun 27 07:17 dm-17
brw-rw---- 1 root root 253, 18 Jun 27 07:17 dm-18
brw-rw---- 1 root root 253, 19 Jun 27 07:17 dm-19
brw-rw---- 1 root root 253, 2 Jun 27 07:17 dm-2
brw-rw---- 1 root root 253, 20 Jun 27 07:17 dm-20
brw-rw---- 1 root root 253, 21 Jun 27 07:17 dm-21
brw-rw---- 1 root root 253, 22 Jun 27 07:17 dm-22
brw-rw---- 1 root root 253, 23 Jun 27 07:17 dm-23
brw-rw---- 1 root root 253, 24 Jun 27 07:17 dm-24
brw-rw---- 1 root root 253, 25 Jun 27 07:17 dm-25
brw-rw---- 1 root root 253, 26 Jun 27 07:17 dm-26
...
/dev/disk/by-label:
lrwxrwxrwx 1 root root 10 Jun 27 07:17 1 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jun 27 07:17 boot1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 27 07:17 optapporacle1 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 27 07:17 SWAP-sda3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 27 07:17 tmp1 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jun 27 07:17 var1 -> ../../sda6
3a. Verify that ASMLIB has not already been installed before installing (if Real Application Clusters (RAC), run this command on each node).
For example (as root, e.g. # rpm -qa | grep oracleasm):
i. Output if installed:
oracleasm-2.6.18-164.el5PAE-2.0.5-1.el5 -----> optional
oracleasm-2.6.18-164.el5debug-2.0.5-1.el5 -----> optional
oracleasm-2.6.18-164.el5-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5
oracleasm-support-2.1.3-1.el5
oracleasm-2.6.18-164.el5xen-2.0.5-1.el5 -----> optional
ii. Output if not installed:
package not installed
a. Install (if not installed). The install MUST match the kernel version. (System Administrator's Task)
b. Verify the kernel version:
# uname -r
2.6.18-164.el5PAE
c. Install the correct packages for the kernel version:
# rpm -i oracleasm-support-2.1.3-1.el5.i386.rpm oracleasmlib-2.0.4-1.el5.i386.rpm oracleasm-2.6.18-164.el5-2.0.5-1.el5.i386.rpm
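The kernel-match requirement in step (a) can be checked mechanically. A sketch using this note's kernel string and package names as embedded samples (on a live host you would use 'uname -r' and 'rpm -qa' instead):

```shell
# The oracleasm kernel-module package must embed the exact `uname -r`
# string in its name; check the sample package list for it.
kver="2.6.18-164.el5PAE"
pkgs="oracleasm-2.6.18-164.el5PAE-2.0.5-1.el5
oracleasm-2.6.18-164.el5-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5"
case "$pkgs" in
  *"oracleasm-$kver-"*) echo "matching oracleasm kernel package found" ;;
  *) echo "no oracleasm package for kernel $kver" ;;
esac
# -> matching oracleasm kernel package found
```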
3b. Check the status (if Real Application Clusters (RAC), run this command on each node), e.g. with # /etc/init.d/oracleasm status.
Failed example:
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
Configure the Oracle ASM library driver (# /etc/init.d/oracleasm configure), for example:
Default user to own the driver interface []: grid -----> enter the Grid Infrastructure/ASM user name
Default group to own the driver interface []: asmadmin -----> enter the Grid Infrastructure/ASM group name
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
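The configuration dialog above can also be answered from a pipe; a sketch only, since the grid/asmadmin names are this note's examples and must match your own Grid installation:

```shell
# Answers, in dialog order: driver owner, driver group,
# start on boot (y), scan on boot (y).
printf 'grid\nasmadmin\ny\ny\n' | /etc/init.d/oracleasm configure
```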
3c. Check the status again:
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
4. Create ASM diskgroups:
4a. Check prior to the createdisk command:
# /etc/init.d/oracleasm querydisk DAT
Disk "DAT" does not exist or is not instantiated
# /etc/init.d/oracleasm querydisk /dev/mapper/360060e80045b2b0000005b2b0000163cp1
Device "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" is not marked as an ASM disk
4b. After the check, run the createdisk command:
# /etc/init.d/oracleasm createdisk DAT01 /dev/mapper/360060e80045b2b0000005b2b0000163cp1
Marking disk "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" as an ASM disk: [ OK ]
5. To make the disk available, enter the following commands:
5a. Scan the ASM disks:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
5b. List the ASM disks:
# /etc/init.d/oracleasm listdisks
DAT
6. Check the ASM diskgroups:
For example (as root) (if Real Application Clusters (RAC), do this on each node):
# /etc/init.d/oracleasm querydisk DAT
Disk "DAT" is a valid ASM disk on device [253, 20]
or
# /etc/init.d/oracleasm querydisk -d DAT
Disk "DAT" is a valid ASM disk on device [253, 20]
# /etc/init.d/oracleasm querydisk /dev/mapper/360060e80045b2b0000005b2b0000163cp1
Device "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" is marked as an ASM disk
# cat /proc/partitions
major minor #blocks name
.
.
.
253 20 524281275 dm-20
7. Ensure that the allocated devices can be seen in /dev/mpath:
For example (as root) (if Real Application Clusters (RAC), do this on each node):
# ls -l
total 0
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000163c -> ../dm-11
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000163cp1 -> ../dm-20
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000155a -> ../dm-14
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000155ap1 -> ../dm-22
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b00001584 -> ../dm-16
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b00001584p1 -> ../dm-23
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003130 -> ../dm-1
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003131 -> ../dm-2
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003132 -> ../dm-3
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003133 -> ../dm-4
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003134 -> ../dm-5
8. Ensure that the devices can be seen in /dev/mapper:
For example (as root) (if Real Application Clusters (RAC), do this on each node):
# ls -l
total 0
brw-rw---- 1 root disk 253, 0 Jun 27 07:17 360060e80045b2b0000005b2b000006b0
brw-rw---- 1 root disk 253, 15 Jun 27 07:17 360060e80045b2b0000005b2b000006b0p1
brw-rw---- 1 root disk 253, 1 Jun 27 07:17 360060e80045b2b0000005b2b000006c4
brw-rw---- 1 root disk 253, 17 Jun 27 07:17 360060e80045b2b0000005b2b000006c4p1
brw-rw---- 1 root disk 253, 2 Jun 27 07:17 360060e80045b2b0000005b2b000006d8
brw-rw---- 1 root disk 253, 26 Jun 27 07:17 360060e80045b2b0000005b2b000006d8p1
brw-rw---- 1 root disk 253, 11 Jun 27 07:17 360060e80045b2b0000005b2b0000163c
brw-rw---- 1 root disk 253, 20 Jun 27 07:17 360060e80045b2b0000005b2b0000163cp1
brw-rw---- 1 root disk 253, 12 Jun 27 07:17 360060e80045b2b0000005b2b0000155a
brw-rw---- 1 root disk 253, 29 Jun 27 07:17 360060e80045b2b0000005b2b0000155ap1
brw-rw---- 1 root disk 253, 14 Jun 27 07:17 360060e80045b2b0000005b2b00001679
brw-rw---- 1 root disk 253, 25 Jun 27 07:17 360060e80045b2b0000005b2b00001679p1
brw-rw---- 1 root disk 253, 13 Jun 27 07:17 360060e80045b2b0000005b2b00001584
brw-rw---- 1 root disk 253, 24 Jun 27 07:17 360060e80045b2b0000005b2b00001584p1
brw-rw---- 1 root disk 253, 3 Jun 27 07:17 360060e80045b2b0000005b2b00001007
brw-rw---- 1 root disk 253, 19 Jun 27 07:17 360060e80045b2b0000005b2b00001007p1
brw-rw---- 1 root disk 253, 4 Jun 27 07:17 360060e80045b2b0000005b2b0000189c
brw-rw---- 1 root disk 253, 18 Jun 27 07:17 360060e80045b2b0000005b2b0000189cp1
9. Check the device type:
For example (as root) (if Real Application Clusters (RAC), do this on each node), e.g. with blkid(8):
/dev/dm-20: LABEL="DAT" TYPE="oracleasm" ---> here multipathed
/dev/dm-22: LABEL="ARC" TYPE="oracleasm"
/dev/dm-23: LABEL="FRA" TYPE="oracleasm"
/dev/sdh1: LABEL="DAT" TYPE="oracleasm" ---> here physical
/dev/sdx1: LABEL="ARC" TYPE="oracleasm"
/dev/sdj1: LABEL="FRA" TYPE="oracleasm"
/dev/sdi1: LABEL="DAT" TYPE="oracleasm" ---> here physical
/dev/sdy1: LABEL="ARC" TYPE="oracleasm"
/dev/sdz1: LABEL="FRA" TYPE="oracleasm"
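Because the same ASMLib label is visible on the dm-* node and on every underlying sd path, ORACLEASM_SCANORDER must make ASMLIB scan the multipath device first. A sketch that counts devices per label from embedded sample blkid output (on a live system, pipe blkid in instead):

```shell
# Split on double quotes so field 2 is the LABEL value, then count how
# many device nodes carry each oracleasm label.
awk -F'"' '/oracleasm/ {n[$2]++}
           END {for (l in n) print l, n[l], "devices"}' <<'EOF'
/dev/dm-20: LABEL="DAT" TYPE="oracleasm"
/dev/sdh1: LABEL="DAT" TYPE="oracleasm"
/dev/sdi1: LABEL="DAT" TYPE="oracleasm"
EOF
# -> DAT 3 devices
```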
10. Set the ASMLIB configuration parameter ORACLEASM_SCANORDER in /etc/sysconfig/oracleasm to force ASMLIB to bind to the multipath devices:
10a. Check the file /etc/sysconfig/oracleasm:
lrwxrwxrwx 1 root root 24 Jun 13 09:58 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm
10b. Make a backup of the original file, /etc/sysconfig/oracleasm-_dev_oracleasm.
10c. Modify the ORACLEASM_SCANORDER and ORACLEASM_SCANEXCLUDE parameters in /etc/sysconfig/oracleasm:
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="mpath dm"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
10d. Save the file.
10e. Restart oracleasm:
# /etc/init.d/oracleasm restart
10f. Check the multipath device against /proc/partitions:
major minor #blocks name
.
.
.
253 20 524281275 dm-20
10g. Check the multipath device against /dev/oracleasm/disks:
brw-rw---- 1 grid asmadmin 253, 20 Oct 4 13:37 DAT
10h. Check the oracleasm disks again:
DAT
Additional Resources
Still have questions? Use the community above to search for similar discussions or start a new discussion on this subject.
From the ITPUB blog; link: http://blog.itpub.net/13750068/viewspace-734278/. Please cite the source when reproducing; otherwise legal liability may be pursued.