Oracle 19c RAC on Linux 7.6 Installation Guide

Posted by 你好我是李白 on 2020-03-09


Installation Guide

Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), as part of an Oracle Flex Cluster installation, Oracle ASM is configured within Oracle Grid Infrastructure to provide storage services.

 

Starting with Oracle Grid Infrastructure 19c (19.3), with Oracle Standalone Clusters, you can again place OCR and voting disk files directly on shared file systems.

 

Oracle Flex Clusters

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle Grid Infrastructure cluster configurations are Oracle Flex Cluster deployments.

 

Starting with 12.2, clusters are deployed in one of two modes: Standalone Cluster or Domain Services Cluster.

Standalone Cluster

- Supports up to 64 nodes
- Every node connects directly to the shared storage
- The shared storage on each node is mounted through that node's own ASM instance or through a shared file system
- Locally managed GIMR
- In 19c, a Standalone Cluster can choose whether to configure a GIMR
- VIP and SCAN addresses can be configured through GNS, or configured manually

Domain Services Cluster

- One or more nodes form the Domain Services Cluster (DSC)
- One or more nodes form a Database Member Cluster
- (Optional) one or more nodes form an Application Member Cluster
- A centralized Grid Infrastructure Management Repository (providing the MGMTDB for every cluster in the Oracle Cluster Domain)
- A Trace File Analyzer (TFA) service for targeted diagnostic data collection for Oracle Clusterware and Oracle Database
- Consolidated Oracle ASM storage management services
- An optional Rapid Home Provisioning (RHP) service for installing clusters and for provisioning, patching, and upgrading Oracle Grid Infrastructure and Oracle Database homes. When you configure an Oracle Domain Services Cluster, you can also optionally configure a Rapid Home Provisioning Server.

These centralized services can be consumed by the member clusters in the Cluster Domain (Database Member Clusters and Application Member Clusters).

Storage access in a Domain Services Cluster:

ASM in the DSC provides centralized storage management services. A Member Cluster can access the shared storage on the DSC in either of two ways:

- A direct physical connection to the shared storage
- Access over a network path through the ASM IO Service

All nodes within a single Member Cluster must access the shared storage in the same way, and one Domain Services Cluster can serve multiple Member Clusters, as shown in the architecture diagram:

[Architecture diagram]

Environment Checks

Item: RAM
Requirement: at least 8 GB
Check: # grep MemTotal /proc/meminfo

Item: Run level
Requirement: 3 or 5
Check: # runlevel

Item: Linux version
Requirement (one of):
  Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 4: 4.1.12-112.16.7.el7uek.x86_64 or later
  Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 5: 4.14.35-1818.1.6.el7uek.x86_64 or later
  Oracle Linux 7.4 with the Red Hat Compatible kernel: 3.10.0-693.5.2.0.1.el7.x86_64 or later
  Red Hat Enterprise Linux 7.4: 3.10.0-693.5.2.0.1.el7.x86_64 or later
  SUSE Linux Enterprise Server 12 SP3: 4.4.103-92.56-default or later
Check: # uname -mr ; # cat /etc/redhat-release

Item: /tmp
Requirement: at least 1 GB
Check: # du -h /tmp

Item: Swap
Requirement: RAM between 4 GB and 16 GB: swap equal to RAM; RAM above 16 GB: 16 GB of swap. If HugePages are enabled, subtract the memory allocated to HugePages from RAM when calculating swap.
Check: # grep SwapTotal /proc/meminfo

Item: /dev/shm
Requirement: check the mount type and permissions of /dev/shm
Check: # df -h /dev/shm

Item: Software disk space
Requirement: at least 12 GB for grid and at least 10 GB for Oracle; reserving 100 GB is recommended. Starting with 19c, the GIMR is optional in a standalone installation.
Check: # df -h /u01
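The swap rule in the table can be turned into a quick calculation. The sketch below is an illustration, not part of the original guide: it takes MemTotal and the HugePages allocation in KB (on a real host, both come from /proc/meminfo) and prints the recommended swap size.

```shell
# Sketch: recommended swap size per the table above.
# Inputs are KB values; on a real host read them from /proc/meminfo
# (MemTotal, and Hugepagesize * HugePages_Total for the HugePages share).
recommended_swap_kb() {
  local mem_kb=$1 huge_kb=$2
  # Memory given to HugePages is excluded before applying the rule.
  local eff_kb=$(( mem_kb - huge_kb ))
  if [ "$eff_kb" -le $(( 16 * 1024 * 1024 )) ]; then
    echo "$eff_kb"                 # 4-16 GB: swap equal to RAM
  else
    echo $(( 16 * 1024 * 1024 ))   # above 16 GB: cap at 16 GB
  fi
}

recommended_swap_kb $(( 8 * 1024 * 1024 )) 0    # 8 GB RAM, no HugePages -> swap = RAM
```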

 

Disable THP, Enable HugePages

If you are using Oracle Linux, the operating system can be configured through the Preinstallation RPM. If you are installing an Oracle Domain Services Cluster, the GIMR must be configured, and the 1 GB of huge pages used by the GIMR SGA has to be counted into the HugePages sizing; a standalone cluster can choose whether to configure the GIMR.

Check whether Transparent HugePages (THP) are enabled:

[root@db-oracle-node1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

Check whether THP defragmentation is enabled:

[root@db-oracle-node1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never

Append the "transparent_hugepage=never" kernel parameter to the GRUB_CMDLINE_LINUX option:

# vi /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap ... transparent_hugepage=never"

 

Back up /boot/grub2/grub.cfg, then rebuild it with grub2-mkconfig -o:

On BIOS-based machines: ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines: ~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Reboot the system:

# shutdown -r now

Verify that the setting took effect:

# cat /proc/cmdline

Note: if THP is still not disabled, see http://blog.itpub.net/31439444/viewspace-2674001/ for the remaining steps.

# vim /etc/sysctl.conf

vm.nr_hugepages = xxxx

 

# sysctl -p

# vim /etc/security/limits.conf

oracle soft memlock xxxxxxxxxxx

oracle hard memlock xxxxxxxxxxx
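The xxxx placeholders above have to be computed per host. As a sketch (the 2048 KB huge page size and the 5 GB total SGA are assumed example inputs, not values from the post), vm.nr_hugepages must cover the combined SGA of every instance on the node, and memlock must cover at least the HugePages allocation:

```shell
# Sketch: derive vm.nr_hugepages and memlock from the planned SGA total.
hugepagesize_kb=2048                 # typical on x86_64; check /proc/meminfo
sga_total_mb=$(( 4096 + 1024 ))      # example: 4 GB DB SGA + 1 GB GIMR SGA (DSC case)

# Round up so the pages fully cover the SGA.
nr_hugepages=$(( (sga_total_mb * 1024 + hugepagesize_kb - 1) / hugepagesize_kb ))
# memlock (in KB) must be at least the HugePages allocation.
memlock_kb=$(( nr_hugepages * hugepagesize_kb ))

echo "vm.nr_hugepages = $nr_hugepages"
echo "oracle soft memlock $memlock_kb"
echo "oracle hard memlock $memlock_kb"
```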

Install Required Packages

openssh

bc

binutils

compat-libcap1

compat-libstdc++

elfutils-libelf

elfutils-libelf-devel

fontconfig-devel

glibc

glibc-devel

ksh

libaio

libaio-devel

libX11

libXau

libXi

libXtst

libXrender

libXrender-devel

libgcc

librdmacm-devel

libstdc++

libstdc++-devel

libxcb

make

net-tools (for Oracle RAC and Oracle Clusterware)

nfs-utils (for Oracle ACFS)

python (for Oracle ACFS Remote)

python-configshell (for Oracle ACFS Remote)

python-rtslib (for Oracle ACFS Remote)

python-six (for Oracle ACFS Remote)

targetcli (for Oracle ACFS Remote)

smartmontools

sysstat

 

Optionally, you can install additional drivers and packages, and configure PAM, OCFS2, ODBC, and LDAP.
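The list above can be installed in one yum transaction. This is a sketch: a few names (for example compat-libstdc++ and python) map to distro-specific packages such as compat-libstdc++-33 or python2, so verify against your repositories before running the commented command as root.

```shell
# Sketch: one yum transaction for the package list above.
pkgs="openssh bc binutils compat-libcap1 compat-libstdc++ elfutils-libelf \
elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio \
libaio-devel libX11 libXau libXi libXtst libXrender libXrender-devel \
libgcc librdmacm-devel libstdc++ libstdc++-devel libxcb make net-tools \
nfs-utils python python-configshell python-rtslib python-six targetcli \
smartmontools sysstat"
# As root, on every node:
#   yum install -y $pkgs
echo "$pkgs" | wc -w    # 34 packages in the list above
```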

Kernel Parameters

On Oracle Linux or Red Hat Enterprise Linux, the preinstall RPM can be used to configure the OS:

# cd /etc/yum.repos.d/
# wget
# yum repolist
# yum install oracle-database-preinstall-19c

Alternatively, download the preinstall RPM package manually and install it.

 

The preinstall RPM does the following:

- Creates the oracle user, and creates the oraInventory (oinstall) and OSDBA (dba) groups.
- Sets sysctl.conf, adjusting the system startup and driver parameters to Oracle's recommended values.
- Sets hard and soft user resource limits.
- Sets other recommended parameters, depending on the kernel version.
- Sets numa=off.

 

If you do not use the preinstall RPM to configure the kernel parameters, you can also set them manually:

# vi /etc/sysctl.d/97-oracledatabase-sysctl.conf

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 4294967295

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

 

Apply the values to the running system:

# /sbin/sysctl --system
# /sbin/sysctl -a
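The shmmax/shmall values in the sample above are related to RAM and the page size. The relationship can be sketched as follows (common sizing guidance rather than text from the post: shmmax sized to half of physical RAM, shmall to RAM divided by the page size):

```shell
# Sketch: derive kernel.shmmax and kernel.shmall from RAM and page size.
mem_bytes=$(( 8 * 1024 * 1024 * 1024 ))   # example: 8 GB host
page_size=4096                            # getconf PAGE_SIZE on a real host

shmmax=$(( mem_bytes / 2 ))          # largest single shared memory segment
shmall=$(( mem_bytes / page_size ))  # total shared memory, in pages

echo "kernel.shmmax = $shmmax"   # ~ the 4294967295 in the sample file above
echo "kernel.shmall = $shmall"   # matches the 2097152 in the sample file above
```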

 

Set the network port range:

$ cat /proc/sys/net/ipv4/ip_local_port_range

# echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range

# /etc/rc.d/init.d/network restart

If you do not use the Oracle Preinstallation RPM, you can use the Cluster Verification Utility; install its cvuqdisk package as follows:

- Locate the cvuqdisk RPM package, which is located in the directory Grid_home/cv/rpm, where Grid_home is the Oracle Grid Infrastructure home directory.

- Copy the cvuqdisk package to each node on the cluster. You should ensure that each node is running the same version of Linux.

- Log in as root.

- Use the following command to find out whether you have an existing version of the cvuqdisk package:

# rpm -qi cvuqdisk

- If you have an existing version of cvuqdisk, enter the following command to deinstall it:

# rpm -e cvuqdisk

- Set the environment variable CVUQDISK_GRP to point to the group that owns cvuqdisk, typically oinstall. For example:

# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

- In the directory where you have saved the cvuqdisk RPM, use the command rpm -iv package to install the cvuqdisk package. For example:

# rpm -iv cvuqdisk-1.0.10-1.rpm

- Run the pre-installation verification:

$ ./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2,node3

 

Network Configuration

Notes on the network configuration:

1) Use either all IPv4 or all IPv6 addresses; GNS can generate IPv6 addresses.

2) VIP: starting with Oracle Grid Infrastructure 18c, using VIPs is optional for Oracle Clusterware deployments. You can specify VIPs for all or none of the cluster nodes; however, specifying VIPs for only a subset of the cluster nodes is not supported.

3) Private: during installation you can configure up to four interfaces with private IPs as HAIP (highly available IP) interfaces. If more than four interfaces are configured, the ones beyond four automatically become redundant. The private network does not need bonded NICs; the cluster makes it highly available automatically.

4) Public/VIP names: letters, digits, and the "-" hyphen are allowed; the "_" underscore is not.

5) The Public, VIP, and SCAN VIP addresses must all be in the same subnet.

6) The public IP must be statically configured on each node's NIC. The VIPs, private IPs, and SCAN can all be handed to GNS. Apart from the SCAN, which needs three fixed IPs, each of the others needs one fixed IP; the address does not have to be fixed on a NIC, but its name resolution must be fixed.

If the SCAN is resolved through DNS only, then the Public/Private/VIP addresses are all manually configured as fixed IPs and specified by hand during installation.

 

To enable GNS, a DHCP + DNS configuration is required. The DNS forward and reverse zones do not need to resolve the VIPs or the SCAN; the VIP and SCAN names only need to fall within the subdomain delegated to GNS.

/etc/hosts

#public ip
192.168.204.11       pub19-node1.rac.libai

192.168.204.12       pub19-node2.rac.libai

 

#private ip

40.40.40.41          priv19-node1.rac.libai

40.40.40.42          priv19-node2.rac.libai

 

#vip

192.168.204.21       vip19-node1.rac.libai

192.168.204.22       vip19-node2.rac.libai

 

#scan-vip

#192.168.204.33       scan19-vip.rac.libai

#192.168.204.34       scan19-vip.rac.libai

#192.168.204.35       scan19-vip.rac.libai

 

#gns-vip

192.168.204.10       gns19-vip.rac.libai

DNS configuration:

[root@19c-node2 limits.d]# yum install -y bind bind-chroot

[root@19c-node2 limits.d]# vi /etc/named.conf

options {

directory "/var/named";

dump-file "/var/named/data/cache_dump.db";

statistics-file "/var/named/data/named_stats.txt";

memstatistics-file "/var/named/data/named_mem_stats.txt";

allow-query { any; };        # "any" may be replaced with a specific subnet that is allowed to query this DNS server.

recursion yes;

allow-transfer { none; };     

};

zone "." IN {

type hint;

file "named.ca";

};

zone "rac.libai" IN {       # forward zone rac.libai

type master;

file "named.rac.libai";

};

zone "204.168.192.in-addr.arpa" IN {           # reverse zone 204.168.192.in-addr.arpa

type master;

file "named.192.168.204";

};

 

zone "40.40.40.in-addr.arpa" IN {           # reverse zone 40.40.40.in-addr.arpa

type master;

file "named.40.40.40";

};

 

 

/* Edit the forward zone for the public and VIP names

[root@pub19-node2 ~]# vi /var/named/named.rac.libai

$TTL 600

@ IN SOA rac.libai. admin.rac.libai. (

                        0               ; serial number

                        1D              ; refresh

                        1H              ; retry

                        1W              ; expire

                        3H )            ; minimum

@ IN NS master

master  IN A 192.168.204.12

priv19-node1.rac.libai.    IN A 40.40.40.41

priv19-node2.rac.libai.    IN A 40.40.40.42

pub19-node1.rac.libai.     IN A 192.168.204.11

pub19-node2.rac.libai.     IN A 192.168.204.12

vip.rac.libai.              IN NS gns.rac.libai.

gns.rac.libai.              IN A 192.168.204.10

 

# The last two lines mean: the subdomain vip.rac.libai is served by the name server gns.rac.libai, and gns.rac.libai resolves to 192.168.204.10. This is the key to configuring GNS.

# On the gridSetup.sh SCAN configuration page, the SCAN name scan19.vip.rac.libai must fall within the subdomain delegated to GNS, i.e. scan19.vip.rac.libai must contain vip.rac.libai.

# On the gridSetup.sh GNS configuration page, the GNS VIP address is 192.168.204.10 and the subdomain is vip.rac.libai.

# Combined with DHCP, the VIPs, private IPs, and SCAN can all have their addresses assigned through GNS.

Source: http://blog.sina.com.cn/s/blog_701a48e70102w6gv.html

# There is no need for DNS to resolve the SCAN or the VIPs; hand them to GNS, but DHCP must be enabled.
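The containment rule above (the SCAN name must sit inside the subdomain delegated to GNS) amounts to a suffix check, sketched here with the example names from this post:

```shell
# Sketch: check that a name falls inside the GNS-managed subdomain.
in_subdomain() {
  local fqdn=$1 subdomain=$2
  case $fqdn in
    *.$subdomain) return 0 ;;   # fqdn must end in ".subdomain"
    *)            return 1 ;;
  esac
}

in_subdomain scan19.vip.rac.libai vip.rac.libai && echo "GNS can serve this SCAN"
in_subdomain scan19.rac.libai vip.rac.libai || echo "outside the GNS subdomain"
```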

 

[root@19c-node2 named]# vi named.192.168.204

$TTL 600

@ IN SOA rac.libai. admin.rac.libai. (

                                10   ; serial

                                3H   ; refresh

                                15M  ; retry

                                1W   ; expire

                                1D ) ; minimum

@ IN NS master.rac.libai.

12 IN PTR master.rac.libai.

11 IN PTR pub19-node1.rac.libai.

12 IN PTR pub19-node2.rac.libai.

10 IN PTR gns.rac.libai.

 

[root@19c-node2 named]# vi named.40.40.40

$TTL 600

@ IN SOA rac.libai. admin.rac.libai. (

                                10   ; serial

                                3H   ; refresh

                                15M  ; retry

                                1W   ; expire

                                1D ) ; minimum

@ IN NS master.rac.libai.

41 IN PTR priv19-node1.rac.libai.
42 IN PTR priv19-node2.rac.libai.

 

[root@19c-node2 named]# systemctl restart named

 

[root@19c-node1 software]# yum install -y dhcp

[root@19c-node1 software]# vi /etc/dhcp/dhcpd.conf

#   see /usr/share/doc/dhcp*/dhcpd.conf.example

#   see dhcpd.conf(5) man page

#

 

ddns-update-style interim;
ignore client-updates;

 

subnet 192.168.204.0 netmask 255.255.255.0 {

option routers 192.168.204.1;

option subnet-mask 255.255.255.0;

option nis-domain "rac.libai";

option domain-name "rac.libai";

option domain-name-servers 192.168.204.12;

option time-offset -18000; # Eastern Standard Time

range dynamic-bootp 192.168.204.21 192.168.204.26;

default-lease-time 21600;

max-lease-time 43200;

}

[root@19c-node2 ~]# systemctl enable dhcpd

[root@19c-node2 ~]# systemctl restart dhcpd

[root@19c-node2 ~]# systemctl status dhcpd

 

/* View the lease file

/var/lib/dhcp/dhcpd.leases

 

/* Have enp0s10 re-acquire a DHCP address

# dhclient -d enp0s10

 

/* Release the lease

# dhclient -r enp0s10

 

Other Configuration

1. Miscellaneous operating system configuration

1) Cluster name:

Case-insensitive; must be alphanumeric; may contain the "-" hyphen but not the "_" underscore; at most 15 characters. After installation, the cluster name can only be changed by reinstalling GI.
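The naming rules above can be enforced with a small helper before running gridSetup.sh; this is a sketch of the stated rules (alphanumeric plus hyphen, no underscore, at most 15 characters):

```shell
# Sketch: validate a candidate cluster name against the rules above.
valid_cluster_name() {
  local name=$1
  # length 1..15
  [ ${#name} -ge 1 ] && [ ${#name} -le 15 ] || return 1
  # only letters, digits, and "-" are allowed (so "_" is rejected)
  case $name in
    *[!a-zA-Z0-9-]*) return 1 ;;
  esac
  return 0
}

valid_cluster_name rac19c-cluster && echo "accepted"
valid_cluster_name rac19c_cluster || echo "rejected: underscore"
```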

2) /etc/hosts

#public Ip

192.168.204.11       pub19-node1.rac.libai

192.168.204.12       pub19-node2.rac.libai

 

#private ip

40.40.40.41          priv19-node1.rac.libai

40.40.40.42          priv19-node2.rac.libai

 

#vip

192.168.204.21       vip19-node1.rac.libai

192.168.204.22       vip19-node2.rac.libai

 

#scan-vip

#192.168.204.33       scan19.vip.rac.libai

#192.168.204.34       scan19.vip.rac.libai

#192.168.204.35       scan19.vip.rac.libai

 

#gns-vip

192.168.204.10       gns.rac.libai

 

3) Operating system hostnames

hostnamectl set-hostname pub19-node1.rac.libai --static

hostnamectl set-hostname pub19-node2.rac.libai --static

 

Make sure all nodes synchronize time using NTP or CTSS.

Before installation, make sure the clocks on all nodes agree. If you use CTSS, you can disable the NTP service that ships with Linux 7 as follows:

By default, the NTP service available on Oracle Linux 7 and Red Hat Linux 7 is chronyd.

Deactivating the chronyd Service

To deactivate the chronyd service, you must stop the existing chronyd service and disable it from the initialization sequences. Complete this step on Oracle Linux 7 and Red Hat Linux 7:

1. Run the following commands as the root user:

# systemctl stop chronyd
# systemctl disable chronyd

Confirming Oracle Cluster Time Synchronization Service After Installation

To confirm that ctssd is active after installation, enter the following command as the Grid installation owner:

$ crsctl check ctss

If you use NAS, then for Oracle Clusterware to better tolerate failures of the NAS device and of the network the NAS is mounted over, it is recommended to enable the Name Service Cache Daemon (nscd):

# chkconfig --list nscd
# chkconfig --level 35 nscd on
# service nscd start
# service nscd restart
# systemctl --all | grep nscd

 

For best performance for Oracle ASM, Oracle recommends that you use the Deadline I/O Scheduler.

# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq

If the default disk I/O scheduler is not Deadline, then set it using a rules file:

1. Using a text editor, create a UDEV rules file for the Oracle ASM devices:

# vi /etc/udev/rules.d/60-oracle-schedulers.rules

2. Add the following line to the rules file and save it:

ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

3. On clustered systems, copy the rules file to all other nodes on the cluster. For example:

$ scp 60-oracle-schedulers.rules root@node2:/etc/udev/rules.d/

4. Load the rules file and restart the UDEV service. For example, on Oracle Linux and Red Hat Enterprise Linux:

# udevadm control --reload-rules

5. Verify that the disk I/O scheduler is set as Deadline:

# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
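The active scheduler is the bracketed word in that sysfs file. A sketch for extracting it, so the check can be looped over every ASM disk (the disk names in the commented loop are examples):

```shell
# Sketch: extract the active I/O scheduler (the bracketed entry) from the
# contents of /sys/block/<disk>/queue/scheduler.
active_scheduler() {
  expr "$1" : '.*\[\(.*\)\]'
}

active_scheduler "noop [deadline] cfq"
# On a real host, e.g.:
#   for d in sdb sdc sdd; do
#     echo "$d: $(active_scheduler "$(cat /sys/block/$d/queue/scheduler)")"
#   done
```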

To prevent ssh from timing out and failing in some situations, set the login timeout to unlimited in /etc/ssh/sshd_config on all cluster nodes:

# vi /etc/ssh/sshd_config
LoginGraceTime 0

Check whether an inventory and the inventory group already exist:

# more /etc/oraInst.loc
$ grep oinstall /etc/group

Create the inventory directory. Do not place it under the Oracle base directory, to prevent permission changes during installation from causing errors.

The user and group IDs must be identical on all nodes.

# groupadd -g 54421 oinstall

# groupadd -g 54322 dba

# groupadd -g 54323 oper

# groupadd -g 54324 backupdba

# groupadd -g 54325 dgdba

# groupadd -g 54326 kmdba

# groupadd -g 54327 asmdba

# groupadd -g 54328 asmoper

# groupadd -g 54329 asmadmin

# groupadd -g 54330 racdba

 

# /usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,oper,racdba oracle

# useradd -u 54322 -g oinstall -G asmadmin,asmdba,racdba grid

 

# id oracle

# id grid

# passwd oracle

# passwd grid

 

Use of the OFA directory structure is recommended; make sure the Oracle home path contains only ASCII characters.

Only in a GRID standalone installation may grid be installed under the ORACLE_BASE of the Oracle Database software; in all other cases it may not.

# mkdir -p /u01/app/19.0.0/grid

# mkdir -p /u01/app/grid

# mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1/

# chown -R grid:oinstall /u01

# chown oracle:oinstall /u01/app/oracle

# chmod -R 775 /u01/

 

grid user's .bash_profile:

# su - grid

$ vi ~/.bash_profile

umask 022

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/19.0.0/grid

export PATH=$PATH:$ORACLE_HOME/bin

export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'

export NLS_LANG=AMERICAN.AMERICA_AL32UTF8

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib

$ . ./.bash_profile

 

oracle user's .bash_profile:

# su - oracle

$ vi ~/.bash_profile

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1

export PATH=$PATH:$ORACLE_HOME/bin

export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'

export NLS_LANG=AMERICAN.AMERICA_AL32UTF8

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib

$ . ./.bash_profile

 

$ xhost +hostname

$ export DISPLAY=local_host:0.0

 

The preinstall RPM configures only the oracle user. When installing GI, copy the oracle user's settings for the grid user:

Check the following for both the oracle and grid users:

file descriptor

$ ulimit -Sn

$ ulimit -Hn

number of processes

$ ulimit -Su

$ ulimit -Hu

stack

$ ulimit -Ss

$ ulimit -Hs
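These checks can be scripted. The sketch below compares a reported limit against a required minimum; the minimums in the commented usage lines are the commonly documented ones (treat them as assumptions and confirm against the release notes for your version):

```shell
# Sketch: compare a ulimit value against a required minimum.
# "unlimited" always satisfies the requirement.
limit_ok() {
  local actual=$1 min=$2
  [ "$actual" = "unlimited" ] && return 0
  [ "$actual" -ge "$min" ]
}

# Usage on a real host, for both the oracle and grid users:
#   limit_ok "$(ulimit -Hn)" 65536 || echo "hard nofile too low"
#   limit_ok "$(ulimit -Hu)" 16384 || echo "hard nproc too low"
#   limit_ok "$(ulimit -Ss)" 10240 || echo "soft stack too low"
limit_ok 65536 65536 && echo "ok"
```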

To make sure the installation does not fail because of X11 forwarding, create a .ssh/config in the oracle and grid users' home directories:

$ vi ~/.ssh/config

Host *

ForwardX11 no

If you use dNFS, configure it by following the documentation.

If you are going to create an Oracle Member Cluster, you must first create a Member Cluster Manifest File on the Oracle Domain Services Cluster; see the following chapter of the Oracle Grid Infrastructure Installation and Upgrade Guide:

Creating Member Cluster Manifest File for Oracle Member Clusters

/* Obtain the disk UUIDs

# /usr/lib/udev/scsi_id -g -u /dev/sdb

 

/* Write the UDEV rules file

# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB9c33adf6-29245311",RUN+="/bin/sh -c 'mknod /dev/asmocr1 b $major $minor;chown grid:asmadmin /dev/asmocr1;chmod 0660 /dev/asmocr1'"

 

KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBb008c422-c636d509",RUN+="/bin/sh -c 'mknod /dev/asmdata1 b $major $minor;chown grid:asmadmin /dev/asmdata1;chmod 0660 /dev/asmdata1'"

 

KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB7d37c0f6-8f45f264",RUN+="/bin/sh -c 'mknod /dev/asmfra1 b $major $minor;chown grid:asmadmin /dev/asmfra1;chmod 0660 /dev/asmfra1'"

 

/* Copy the UDEV rules file to the other cluster nodes

# scp 99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/99-oracle-asmdevices.rules

 

/* Reload the udev configuration and test

/sbin/udevadm trigger --type=devices --action=change

/sbin/udevadm control --reload

/sbin/udevadm test /sys/block/sdb

$ su root

# export ORACLE_HOME=/u01/app/19.0.0/grid

 

Use the Oracle ASM command line tool (ASMCMD) to provision the disk devices for use with Oracle ASM Filter Driver.

[root@19c-node1 grid]# asmcmd afd_label DATA1 /dev/sdb --init

[root@19c-node1 grid]# asmcmd afd_label DATA2 /dev/sdc --init

[root@19c-node1 grid]# asmcmd afd_label DATA3 /dev/sdd --init

[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdb

[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdc

[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdd

$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.0.0/grid/

$ /u01/app/19.0.0/grid/gridSetup.sh 

 

Problem encountered:

When the GUI reached the step of creating the OCR ASM disk group, it could not discover any ASM disks. The UDEV configuration was checked and found to be correct; the cfgtoollogs log contained the following error:

[root@19c-node1 ~]# su - grid

[grid@19c-node1 ~]$ cd $ORACLE_HOME/cfgtoollogs/out/GridSetupActions2020-03-09_01-02-16PM

[grid@19c-node1 ~]$ vi gridSetupActions2020-03-09_01-02-16PM.log

INFO:  [Mar 9, 2020 1:15:03 PM] Executing [/u01/app/19.0.0/grid/bin/kfod.bin, nohdr=true, verbose=true, disks=all, op=disks, shallow=true, asm_diskstring='/dev/asm*']

INFO:  [Mar 9, 2020 1:15:03 PM] Starting Output Reader Threads for process /u01/app/19.0.0/grid/bin/kfod.bin

INFO:  [Mar 9, 2020 1:15:03 PM] Parsing Error 49802 initializing ADR

INFO:  [Mar 9, 2020 1:15:03 PM] Parsing ERROR!!! could not initialize the diag context

In the grid user's ORACLE_HOME/cfgtoollogs/out/GridSetupActions2020-03-09_01-02-16PM log, the ASM disk-path error appears:

INFO:  [Mar 9, 2020 1:15:03 PM] Executing [/u01/app/19.0.0/grid/bin/kfod.bin, nohdr=true, verbose=true, disks=all, status=true, op=disks, asm_diskstring='/dev/asm*']

INFO:  [Mar 9, 2020 1:15:03 PM] Starting Output Reader Threads for process /u01/app/19.0.0/grid/bin/kfod.bin

INFO:  [Mar 9, 2020 1:15:03 PM] Parsing Error 49802 initializing ADR

INFO:  [Mar 9, 2020 1:15:03 PM] Parsing ERROR!!! could not initialize the diag context

Resolution:

Run the failing command from the log on its own:

/u01/app/19.0.0/grid/bin/kfod.bin nohdr=true verbose=true disks=all status=true op=disks asm_diskstring='/dev/asm*'

It reported an NLS DATA error, which clearly pointed to the NLS variables set in the .bash_profile environment file. After commenting out the NLS_LANG variable and re-sourcing the profile, the command ran normally again.

 

[root@pub19-node1 ~]# /u01/app/oraInventory/orainstRoot.sh

[root@pub19-node2 ~]# /u01/app/oraInventory/orainstRoot.sh

 

[root@pub19-node1 ~]# /u01/app/19.0.0/grid/root.sh

[root@pub19-node2 ~]# /u01/app/19.0.0/grid/root.sh

 

 

 

 

 

 

 

 

 

 

 

[oracle@pub19-node1 dbhome_1]$ unzip LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19.0.0/dbhome_1/

[oracle@pub19-node1 dbhome_1]$ ./runInstaller

[oracle@pub19-node1 dbhome_1]$ dbca

 

Problem encountered:

CRS-5017: The resource action "ora.czhl.db start" encountered the following error:

ORA-12547: TNS:lost contact

. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/pub19-node2/crs/trace/crsd_oraagent_oracle.trc".

Resolution:

Two directory levels of the ORACLE_HOME on node 2 had incorrect permissions. After fixing the permissions, the database started normally by hand:

 

[root@pub19-node2 oracle]# chown oracle:oinstall product/

[root@pub19-node2 product]# chown oracle:oinstall 19.0.0

[root@pub19-node2 19.0.0]# chown oracle:oinstall dbhome_1/

[grid@pub19-node2 ~]$ srvctl start instance -node pub19-node2.rac.libai

starting database instances on nodes "pub19-node2.rac.libai" ...

started resources "ora.czhl.db" on node "pub19-node2"

 

 

 

As the grid user (upgrade both nodes):

# su - grid
$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.0.0/grid/
$ unzip p30464035_190000_Linux-x86-64.zip

 

As the oracle user (upgrade both nodes):

# su - oracle

$ unzip -o p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.0.0/dbhome_1/

 

As root:

/* Check the patch level. At this point only node 1's GI has been patched; continue with node 2's GI, then node 1's DB, then node 2's DB. Be careful: patching GI with opatchauto requires the opatchauto from the GI ORACLE_HOME, and patching the DB requires the opatchauto from the DB ORACLE_HOME.

Node 1:

# /u01/app/19.0.0/grid/OPatch/opatchauto apply -oh /u01/app/19.0.0/grid /software/30464035/

 

Node 2:

# /u01/app/19.0.0/grid/OPatch/opatchauto apply -oh /u01/app/19.0.0/grid /software/30464035/

 

Node 1:

# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/19.0.0/grid,/u01/app/oracle/product/19.0.0/dbhome_1

 

Node 2:

# ls -l /u01/app/oraInventory/ContentsXML/oui-patch.xml    # Check this file's permissions now; otherwise the error below occurs, the patch ends up corrupt, and it can neither be rolled back nor applied again. Fix the permissions before patching; if an error still occurs, run opatchauto resume to continue applying the patch.

# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1

 

Caution

[Mar 11, 2020 8:56:05 PM] [WARNING] OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)'

Resolution:

/* Set the permissions as indicated by the log output

# chmod 664 /u01/app/oraInventory/ContentsXML/oui-patch.xml

# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto resume /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1

 

If recovery as suggested by the log does not work, the patching problem can be handled with the following steps:

/* restore.sh was executed but ultimately failed, so the only option was to copy the software back manually and roll back
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto rollback /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
/* Following the failure messages, copy each file reported as missing from the unzipped patch directory into the indicated ORACLE_HOME location, and keep rolling back until the rollback succeeds.

 

# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/19.0.0/grid,/u01/app/oracle/product/19.0.0/dbhome_1

 

Verify the patches:

$ /u01/app/19.0.0/grid/OPatch/opatch lsinv

$ /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch lsinv

# su - grid

$ kfod op=patches

$ kfod op=patchlvl



From the ITPUB blog, link: http://blog.itpub.net/31439444/viewspace-2679289/. If reproducing, please credit the source; otherwise legal liability will be pursued.