Configuring DS Devices for Use with Oracle ASM 11.2/12.1 on IBM: Linux on System z (Doc ID 1377392.1)
Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.3 and later
IBM: Linux on System z
Goal
This article is applicable to RHEL 6.2 and later. It is an addendum to Document 1351746.1, as the configuration used to set up alias names and permissions for fcp/scsi multipath devices has changed between Red Hat 5 and Red Hat 6.
ASMLib cannot be used in 11.2, 12.1, or later with fcp/scsi devices on Linux on System z due to a 512-byte block compatibility issue (Bug 12346221). In addition, ASMLib may be desupported in RHEL 6 releases going forward. This article provides platform-specific examples for manually configuring eckd/dasd devices, single-path scsi/fcp devices, and multipath scsi/fcp LUNs via udev rules and multipathing, which provide the same functionality, flexibility, and device persistence as oracleasm and ASMLib.
Note: If using an Oracle single-instance database without ASM on RHEL 6, then an ext4 filesystem (the default) is the recommended file system type.
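For instance, an ext4 filesystem for such a database could be created and mounted as follows (a minimal sketch; /dev/dasdb1 and the mount point are hypothetical and should be replaced with your own device and directory):
# mkfs -t ext4 /dev/dasdb1
# mkdir -p /u02/oradata
# mount -t ext4 /dev/dasdb1 /u02/oradata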
Solution
1) How to configure multi-pathing for Oracle 11.2/12.1 ASM for fcp/scsi Devices with Linux on System z on RHEL 6.
Attach to z/VM
In this example, at this point, your sysadmin has attached two shared LUNs with dual paths to each of the two nodes in the cluster. In the example, the nodes are node1 and node2. You can verify this as follows:-
[root@node1 ~]# modprobe vmcp
[root@node1 ~]# vmcp q fcp att node1
FCP 641C ATTACHED TO NODE1 641C CHPID 50
FCP 681D ATTACHED TO NODE1 681D CHPID 54
[root@node1 ~]# lsscsi
[0:0:0:1] disk IBM 2107900 .278 /dev/sda
[0:0:0:2] disk IBM 2107900 .278 /dev/sdb
[1:0:0:1] disk IBM 2107900 .278 /dev/sdc
[1:0:0:2] disk IBM 2107900 .278 /dev/sdd
[root@node2 ~]# modprobe vmcp
[root@node2 ~]# vmcp q fcp att node2
FCP 6700 ATTACHED TO NODE2 6700 CHPID 53
FCP 6B02 ATTACHED TO NODE2 6B02 CHPID 57
[root@node2 ~]# lsscsi
[0:0:0:1] disk IBM 2107900 .278 /dev/sda
[0:0:0:2] disk IBM 2107900 .278 /dev/sdb
[1:0:0:1] disk IBM 2107900 .278 /dev/sdc
[1:0:0:2] disk IBM 2107900 .278 /dev/sdd
You need to find the WWIDs (World Wide Identifiers) to use for multipathing. The "options=-g" line was added to the /etc/scsi_id.config file:
# more /etc/scsi_id.config
=============================================================================
#
# scsi_id configuration
#
# lower or upper case has no effect on the left side. Quotes (") are
# required for spaces in values. Model is the same as the SCSI
# INQUIRY product identification field. Per the SCSI INQUIRY, the vendor
# is limited to 8 bytes, model to 16 bytes.
#
# The first matching line found is used. Short matches match longer ones,
# if you do not want such a match space fill the extra bytes. If no model
# is specified, only the vendor string need match.
#
# options=
# vendor=string[,model=string],options=
# some libata drives require vpd page 0x80
vendor="ATA",options=-p 0x80
options=-g
=============================================================================
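If the "options=-g" line is not already present, one way to add it is shown below (a sketch assuming the stock RHEL 6 /etc/scsi_id.config; take a backup first):
# cp -p /etc/scsi_id.config /etc/scsi_id.config.bak
# echo 'options=-g' >> /etc/scsi_id.config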
[root@node1 ~]# scsi_id --whitespace --device=/dev/sda
36005076306ffc1150000000000001063
[root@node1 ~]# scsi_id --whitespace --device=/dev/sdb
36005076306ffc1150000000000001064
[root@node1 ~]# scsi_id --whitespace --device=/dev/sdc
36005076306ffc1150000000000001063
[root@node1 ~]# scsi_id --whitespace --device=/dev/sdd
36005076306ffc1150000000000001064
[root@node2 ~]# scsi_id --whitespace --device=/dev/sda
36005076306ffc1150000000000001063
[root@node2 ~]# scsi_id --whitespace --device=/dev/sdb
36005076306ffc1150000000000001064
[root@node2 ~]# scsi_id --whitespace --device=/dev/sdc
36005076306ffc1150000000000001063
[root@node2 ~]# scsi_id --whitespace --device=/dev/sdd
36005076306ffc1150000000000001064
The command "scsi_id" should return the same device identifier value for a given device, regardless of which node the command is run from.
From the output above you can identify which devices are on the same LUNs: for example, /dev/sda and /dev/sdc are on UUID/WWID 36005076306ffc1150000000000001063, and /dev/sdb and /dev/sdd are on UUID/WWID 36005076306ffc1150000000000001064.
Check that the device-mapper-multipath rpm is installed on each node.
The device mapper multipath allows multiple I/O paths to a single LUN. In Red Hat 6.2 and above, the multipath setup has changed slightly and is greatly improved.
In RHEL 6, scsi multipath support is provided by the device-mapper-multipath package.
node1:~ # rpm -qa | grep multi
device-mapper-multipath-0.4.9-41.el6.s390x
Note: the exact version of device-mapper-multipath may vary slightly.
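If the package is missing, it can be installed as follows (assuming a configured yum repository):
# yum install device-mapper-multipath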
You still require /etc/multipath.conf to set up the alias names, but the UID, GID, and MODE can no longer be set via this file.
Note: world-readable permissions must exist on /etc/multipath.conf, otherwise a PRVF-5150 error will be returned during prerequisite checking by the installer.
ls -la /etc/multipath.conf
-rw-r--r--. 1 root root 3084 Aug 5 13:24 /etc/multipath.conf
See example below:-
/etc/multipath.conf
## Use user friendly names, instead of using WWIDs as names.
defaults {
user_friendly_names yes
find_multipaths yes
path_grouping_policy failover
rr_min_io 1
dev_loss_tmo 90
fast_io_fail_tmo 5
}
multipaths {
multipath {
wwid 36005076306ffc1150000000000001063
alias lun01
path_grouping_policy failover
}
multipath {
wwid 36005076306ffc1150000000000001064
alias lun02
path_grouping_policy failover
}
}
===============================================================
In Red Hat 6.2 and above, the permissions are now set via udev rules.
A template file can be found in /usr/share/doc/device-mapper-1.02.62/12-dm-permissions.rules (the device-mapper version in the path may vary).
Take a copy of this file and place it in /etc/udev/rules.d/.
Edit the file /etc/udev/rules.d/12-dm-permissions.rules and set the permissions below the MULTIPATH DEVICES section, using the alias name(s) specified in multipath.conf.
You are going to change the line:
# ENV{DM_NAME}=="mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"
to reflect the alias names (lun0*) used in multipath.conf:
# MULTIPATH DEVICES
#
# Set permissions for all multipath devices
ENV{DM_NAME}=="lun0*",OWNER:="grid",GROUP:="oinstall",MODE:="660"
Make sure that /etc/udev/rules.d/12-dm-permissions.rules is consistent across each node of the cluster.
For the changes to come into effect you should reload the rules:
#/sbin/udevadm control --reload-rules
#start_udev
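To check that the rule took effect, you can query the udev database for one of the mapped devices (an illustrative check; the dm-N number may differ on your system):
# /sbin/udevadm info --query=all --name=/dev/dm-1 | grep DM_NAME
# ls -la /dev/dm-1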
The wildcard (lun0*) is optional, as a separate udev entry could be made for lun01 and lun02. If neither aliases nor user_friendly_names are specified in multipath.conf, then the WWID is used as the device name by default:
lrwxrwxrwx. 1 root root 7 Oct 25 07:22 36005076306ffc1150000000000001063 -> ../dm-1
lrwxrwxrwx. 1 root root 7 Oct 25 07:22 36005076306ffc1150000000000001064 -> ../dm-2
When using the explicit aliases (in this case lun01 and lun02) it is important that the identical multipath.conf file be used on all nodes. If using user_friendly_names without aliases then it's equally important to use the identical bindings_file on all nodes. Copy the multipath.conf files over to each node in the cluster.
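For example, from node1 (a sketch assuming root ssh equivalence between the nodes):
# scp -p /etc/multipath.conf node2:/etc/multipath.conf
# scp -p /etc/udev/rules.d/12-dm-permissions.rules node2:/etc/udev/rules.d/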
If only user_friendly_names is specified, then with the two multipath devices here the names generated are:
lrwxrwxrwx. 1 root root 7 Oct 25 07:23 mpatha -> ../dm-1
lrwxrwxrwx. 1 root root 7 Oct 25 07:23 mpathb -> ../dm-2
Commands to set up multipathing:
Note: Please ensure that if the multipathd service is running when you make these changes, you force a reload of the multipath device maps:
#multipath -F
#service multipathd reload
Note: Make sure that you carry out the above on each node in the cluster.
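If the multipathd service was not yet running, start it and enable it at boot instead of reloading (a hedged variant of the commands above):
# service multipathd start
# chkconfig multipathd on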
Now that the two LUNs are under the control of DM-Multipath, the new devices will show up in two places in the /dev directory:
/dev/mapper/lun01 and /dev/mapper/lun02
/dev/dm-1 and /dev/dm-2
As with Red Hat 5, we will use /dev/mapper/lun0* as the ASM disk string. Note that the permissions, owner, and group are set according to the information in the /etc/udev/rules.d/12-dm-permissions.rules file; this is different from Red Hat 5.
For example:
[root@node1 ~]# ls -la /dev/mapper/lun0[1-2]
lrwxrwxrwx. 1 root root 7 Nov 15 01:07 /dev/mapper/lun01 -> ../dm-1
lrwxrwxrwx. 1 root root 7 Nov 15 01:07 /dev/mapper/lun02 -> ../dm-2
[root@node2 ~]# ls -la /dev/mapper/lun0[1-2]
lrwxrwxrwx. 1 root root 7 Nov 15 01:07 /dev/mapper/lun01 -> ../dm-1
lrwxrwxrwx. 1 root root 7 Nov 15 01:07 /dev/mapper/lun02 -> ../dm-2
[root@node1 ~]#ls -altr /dev/dm*
brw-rw----. 1 root disk 253, 0 Oct 25 12:02 /dev/dm-0
brw-rw----. 1 grid oinstall 253, 1 Oct 25 17:29 /dev/dm-1
brw-rw----. 1 grid oinstall 253, 2 Oct 26 02:31 /dev/dm-2
[root@node2 ~]#ls -altr /dev/dm*
brw-rw----. 1 root disk 253, 0 Oct 25 12:02 /dev/dm-0
brw-rw----. 1 grid oinstall 253, 1 Oct 25 17:29 /dev/dm-1
brw-rw----. 1 grid oinstall 253, 2 Oct 26 02:31 /dev/dm-2
Make sure the /dev/mapper devices are consistent across each node in your cluster and verify that you can access each device from every node.
[oracle@node2 ~]$ dd if=/dev/zero of=/dev/mapper/lun01 bs=4096 count=1000
1000+0 records in
1000+0 records out
4096000 bytes (4.1 MB) copied, 0.051208 seconds, 80.0 MB/s
[oracle@node2 ~]$ dd if=/dev/zero of=/dev/mapper/lun02 bs=4096 count=1000
1000+0 records in
1000+0 records out
4096000 bytes (4.1 MB) copied, 0.050936 seconds, 80.4 MB/s
Output of the multipath -ll command:-
[root@node1 ~]# multipath -ll
lun02 (36005076306ffc1150000000000001064) dm-2 IBM,2107900
[size=5.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:2 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 1:0:0:2 sdd 8:48 [active][ready]
lun01 (36005076306ffc1150000000000001063) dm-1 IBM,2107900
[size=5.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:0:0:1 sda 8:0 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 1:0:0:1 sdc 8:32 [active][ready]
We would recommend that you reboot both systems to ensure that /dev/mapper/lun01 and /dev/mapper/lun02 are still available and that the permissions, group, and owner are correct.
node1:~ # shutdown -r now
Broadcast message from root (pts/0) (Mon Aug 22 11:06:12 2011):
The system is going down for reboot NOW!
node2:~ # shutdown -r now
Broadcast message from root (pts/0) (Mon Aug 22 11:06:33 2011):
The system is going down for reboot NOW!
/dev and /dev/mapper directory listings after the reboot
[root@node1 ~]# ls -la /dev/mapper/lun0[1-2]
lrwxrwxrwx. 1 root root 7 Nov 15 01:07 /dev/mapper/lun01 -> ../dm-1
lrwxrwxrwx. 1 root root 7 Nov 15 01:07 /dev/mapper/lun02 -> ../dm-2
[root@node2 ~]# ls -la /dev/mapper/lun0[1-2]
lrwxrwxrwx. 1 root root 7 Nov 15 01:07 /dev/mapper/lun01 -> ../dm-1
lrwxrwxrwx. 1 root root 7 Nov 15 01:07 /dev/mapper/lun02 -> ../dm-2
[root@node1 ~]#ls -altr /dev/dm*
brw-rw----. 1 root disk 253, 0 Oct 25 12:02 /dev/dm-0
brw-rw----. 1 grid oinstall 253, 1 Oct 25 17:29 /dev/dm-1
brw-rw----. 1 grid oinstall 253, 2 Oct 26 02:31 /dev/dm-2
[root@node2 ~]#ls -altr /dev/dm*
brw-rw----. 1 root disk 253, 0 Oct 25 12:02 /dev/dm-0
brw-rw----. 1 grid oinstall 253, 1 Oct 25 17:29 /dev/dm-1
brw-rw----. 1 grid oinstall 253, 2 Oct 26 02:31 /dev/dm-2
You can now use the /dev/mapper/lun0* disk string with Oracle ASM:-
SQL> show parameter asm_diskstring
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring string ORCL:A40*
SQL> alter system set asm_diskstring='ORCL:A40*','/dev/mapper/lun0*' scope =both;
System altered.
SQL> show parameter asm_diskstring
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring string ORCL:A40*, /dev/mapper/lun0*
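To confirm that ASM can now discover the LUNs through the new disk string, you can query v$asm_disk from the ASM instance (an illustrative query; unused disks will show a header_status of CANDIDATE or PROVISIONED):
SQL> select path, header_status from v$asm_disk where path like '/dev/mapper/%';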
Next you need to create a diskgroup using the two LUNs.
SQL> create diskgroup data2_luns external redundancy disk '/dev/mapper/lun01' name lun01, '/dev/mapper/lun02' name lun02;
Diskgroup created.
Check that the new diskgroup appears in v$asm_diskgroup on node1.
SQL> select name,total_mb from v$asm_diskgroup;
NAME TOTAL_MB
------------------------------ ----------
DATA 14084
DATA2_LUNS 10240
The LUNs were each 5 GB, so there is 10 GB in the new diskgroup.
Create Tablespace in DATA2_LUNS diskgroup on node2
Moving to the ASM instance on node2, the space in the new diskgroup shows 0; you need to mount the diskgroup on node2.
SQL> select name,total_mb from v$asm_diskgroup;
NAME TOTAL_MB
------------------------------ ----------
DATA 14084
DATA2_LUNS 0
SQL> alter diskgroup data2_luns mount;
Diskgroup altered.
SQL> select name,free_mb,total_mb from v$asm_diskgroup;
NAME FREE_MB TOTAL_MB
------------------------------ ---------- ----------
DATA 10321 14084
DATA2_LUNS 10145 10240
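If you want the new diskgroup mounted automatically when the ASM instance restarts, it can be added to the asm_diskgroups parameter (a hedged example; in a clustered Grid Infrastructure installation the diskgroup is normally registered with the clusterware automatically when it is first mounted):
SQL> alter system set asm_diskgroups='DATA','DATA2_LUNS' scope=spfile;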
Next you can move to the DB instance on node2 to create the tablespace.
SQL> create tablespace ts_lun datafile '+data2_luns';
Tablespace created.
SQL> select name,free_mb,total_mb from v$asm_diskgroup;
NAME FREE_MB TOTAL_MB
------------------------------ ---------- ----------
DATA 10321 14084
DATA2_LUNS 10042 10240
The default size for a datafile created in ASM without an explicit SIZE clause is 100 MB.
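If a larger initial datafile is needed, the size can be given explicitly (an illustrative variant of the command above; ts_lun2 is a hypothetical tablespace name):
SQL> create tablespace ts_lun2 datafile '+data2_luns' size 1g;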
References
NOTE:1351746.1 - How to Manually Configure Disk Storage devices for use with Oracle ASM 11.2 on IBM: Linux on System z under RedHat 5
BUG:12346221 - ORA-600 [KFDADD03] WHEN CREATING A DISKGROUP USING FCP/SCSI STORAGE