Installing Oracle RAC 19c
1. Install the multipath software (on both machines)

[root@bms-10 ~]# unzip OceanStor_UltraPath_31.0.2_CentOS.zip
[root@bms-10 ~]# cd CentOS/
[root@bms-10 CentOS]# ls
doc  install.sh  packages  Tools  unattend_install.conf
[root@bms-10 CentOS]# bash install.sh

Reboot after the installation completes.
2. Check the LUNs through Huawei's multipath software (both machines)

[root@bms-10 CentOS]# upadmin
UltraPath CLI #0 >show vlun
------------------------------------------------------------------------------------------------------------
 Vlun ID  Disk  Name                   Lun WWN                           Status  Capacity  Ctrl(Own/Work)  Array Name  Dev Lun ID  No. of Paths(Available/Total)
 0        sdfv  RAC_BMS10_BMS9_OCR02   644227c10028c030171ef7e60000001c  Normal  20.00GB   --/--           HW-Stor-1   28          24/24
 1        sdfw  RAC_BMS10_BMS9_DATA06  644227c10028c030171faaec00000024  Normal  1.50TB    --/--           HW-Stor-1   36          24/24
 2        sdfx  RAC_BMS10_BMS9_OCR03   644227c10028c030171f02360000001d  Normal  20.00GB   --/--           HW-Stor-1   29          24/24
 3        sdfy  RAC_BMS10_BMS9_FRA01   644227c10028c030171fb9ab00000025  Normal  1.00TB    --/--           HW-Stor-1   37          24/24
 4        sdfz  RAC_BMS10_BMS9_MGMT01  644227c10028c030171f17e20000001e  Normal  50.00GB   --/--           HW-Stor-1   30          24/24
 5        sdga  RAC_BMS10_BMS9_DATA01  644227c10028c030171f3fa10000001f  Normal  1.50TB    --/--           HW-Stor-1   31          24/24
 6        sdgb  RAC_BMS10_BMS9_DATA02  644227c10028c030171f503b00000020  Normal  1.50TB    --/--           HW-Stor-1   32          24/24
 7        sdgc  RAC_BMS10_BMS9_DATA03  644227c10028c030171f5f4100000021  Normal  1.50TB    --/--           HW-Stor-1   33          24/24
 8        sdgd  RAC_BMS10_BMS9_DATA04  644227c10028c030171f7ffe00000022  Normal  1.50TB    --/--           HW-Stor-1   34          24/24
 9        sdge  RAC_BMS10_BMS9_DATA05  644227c10028c030171f9dd400000023  Normal  1.50TB    --/--           HW-Stor-1   35          24/24
 10       sdgf  RAC_BMS10_BMS9_OCR01   644227c10028c030171eeb4b0000001b  Normal  20.00GB   --/--           HW-Stor-1   27          24/24
------------------------------------------------------------------------------------------------------------
UltraPath CLI #1 >
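All eleven LUNs above report 24/24 available paths. A small sketch of my own (not part of the original steps, and `check_vlun_paths` is a name I made up) that scans `upadmin show vlun` output and flags any LUN whose available path count is below its total:

```shell
check_vlun_paths() {
    # Reads `upadmin show vlun` output on stdin; the last field of each data
    # row is "available/total", so print any row where available < total.
    awk '{ n = split($NF, p, "/"); if (n == 2 && p[1] + 0 < p[2] + 0) print "degraded:", $2, $NF }'
}

# Usage on a live node: upadmin show vlun | check_vlun_paths
```

Header and separator lines fall through harmlessly because their last field does not split into two numbers.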
3. IP address plan

Hostname     | Host alias   | Type       | IP             | Interface
bms-10       | bms-10       | Public IP  | 10.0.0.10      | boundmg
bms-10-vip   | bms-10-vip   | Virtual IP | 10.0.0.12      | boundmg:1
bms-10       | bms-10-priv  | Private IP | 192.168.149.10 | bondheart
bms-9        | bms-9        | Public IP  | 10.0.0.11      | boundmg
bms-9-vip    | bms-9-vip    | Virtual IP | 10.0.0.13      | boundmg:1
bms-9        | bms-9-priv   | Private IP | 192.168.149.11 | bondheart
cluster-scan | cluster-scan | SCAN IP    | 10.0.0.14      | boundmg
4. Edit the /etc/hosts file on each node

#public ip
10.0.0.10 bms-10
10.0.0.11 bms-9
#priv ip
192.168.149.10 bms-10-priv
192.168.149.11 bms-9-priv
#vip ip
10.0.0.12 bms-10-vip
10.0.0.13 bms-9-vip
#scan ip
10.0.0.14 cluster-scan
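Before moving on, it is worth confirming that every entry from the plan actually made it into the file on both nodes. A sketch of my own (the `hosts_entries` helper is not part of the original steps) that lists the hostname column so each name can be pinged:

```shell
hosts_entries() {
    # Print the hostname column of each non-comment, non-blank line
    # of a hosts-format file passed as $1.
    awk '!/^[[:space:]]*#/ && NF >= 2 { print $2 }' "$1"
}

# On a live node: for h in $(hosts_entries /etc/hosts); do ping -c1 -W1 "$h"; done
```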
5. Shared-storage ASM disk group plan

No. | Disk group | Redundancy | Capacity
1   | OCR        | NORMAL     | 20G x 3
2   | FRA        | EXTERNAL   | 1T x 1
3   | MGMT       | EXTERNAL   | 50G x 1
4   | DATA       | EXTERNAL   | 1.5T x 6
6. Check whether a multipath software package is installed

Huawei's multipath software has already been installed here.
7、 目錄規劃
Grid infrastructure |
Oracle database |
HOME=/home/oracle |
HOME=/home/oracle |
ORACLE_BASE=/u01/app/grid |
ORACLE_BASE=/u01/app/oracle |
ORACLE_HOME=/u01/app/19c/grid |
ORACLE_HOME=/u01/app/oracle/product/19c/db_1 |
ORACLE SID
db_name |
prod |
Node 1 instance sid |
prod1 |
Node 2 instance sid |
prod2 |
8. Configure a local yum repository and install the required packages

yum install -y bc binutils* compat-libcap* compat-libstdc* elfutils-libelf* \
    gcc* glibc* glibc-kernheaders ksh* libaio* libgcc* libstdc* libXp* \
    make* net-tools-* pam* sysstat* unixODBC unixODBC-devel
9. Initialization script: create users, groups, and directories

# vi dbinit.sh
/usr/sbin/groupadd -g 54321 oinstall
/usr/sbin/groupadd -g 54322 dba
/usr/sbin/groupadd -g 54323 oper
/usr/sbin/groupadd -g 54324 backupdba
/usr/sbin/groupadd -g 54325 dgdba
/usr/sbin/groupadd -g 54326 kmdba
/usr/sbin/groupadd -g 54327 asmdba
/usr/sbin/groupadd -g 54328 asmoper
/usr/sbin/groupadd -g 54329 asmadmin
/usr/sbin/groupadd -g 54330 racdba
/usr/sbin/useradd -u 54321 -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,racdba oracle
/usr/sbin/useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
echo oracle | passwd --stdin oracle
echo oracle | passwd --stdin grid
mkdir -p /u01/app/19c/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oracle/product/19c/db_1
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

# chmod +x dbinit.sh
# ./dbinit.sh
Changing password for user oracle.
passwd: all authentication tokens updated successfully.
Changing password for user grid.
passwd: all authentication tokens updated successfully.
Verify the installation and patch files

[root@bms-10 ~]# sha256sum LINUX.X64_193000_db_home.zip
ba8329c757133da313ed3b6d7f86c5ac42cd9970a28bf2e6233f3235233aa8d8  LINUX.X64_193000_db_home.zip
[root@bms-10 ~]# sha256sum LINUX.X64_193000_grid_home.zip
d668002664d9399cf61eb03c0d1e3687121fc890b1ddd50b35dcbe13c5307d2e  LINUX.X64_193000_grid_home.zip
[root@bms-10 ~]#
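Rather than eyeballing the hex strings, the comparison can be scripted. A hedged sketch (the `verify_sha256` helper is my own, not part of the original steps):

```shell
verify_sha256() {
    # usage: verify_sha256 <file> <expected-sha256>; exit status 0 on match
    local got
    got=$(sha256sum "$1" | awk '{print $1}')
    [ "$got" = "$2" ]
}

# verify_sha256 LINUX.X64_193000_grid_home.zip \
#     d668002664d9399cf61eb03c0d1e3687121fc890b1ddd50b35dcbe13c5307d2e && echo OK
```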
10. Unzip the GI installation package

On one of the server nodes, unzip the GI package into the grid user's $ORACLE_HOME directory:

[grid@bms-9 grid]$ pwd
/u01/app/19c/grid
[grid@bms-9 grid]$ ll LINUX.X64_193000_grid_home.zip
-rw-r--r--. 1 grid oinstall 2889184573 Dec 31 09:23 LINUX.X64_193000_grid_home.zip
[grid@bms-9 grid]$ unzip LINUX.X64_193000_grid_home.zip
The cvuqdisk RPM ships under Grid_home/cv/rpm in the grid software tree; install it as root on each RAC node.

Set the environment variable:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
Then run the install from the directory containing the RPM:
# rpm -iv cvuqdisk-1.0.10-1.rpm

[root@bms-9 ~]# cd /u01/app/19c/grid/cv/rpm
[root@bms-9 rpm]# pwd
/u01/app/19c/grid/cv/rpm
[root@bms-9 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@bms-9 rpm]# ls
cvuqdisk-1.0.10-1.rpm
[root@bms-9 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
[root@bms-9 rpm]#
11. Bind the multipathed disks for ASM with udev, including ownership and permissions

First check with the Huawei multipath software:

[root@bms-10 ~]# upadmin
UltraPath CLI #0 >show vlun
------------------------------------------------------------------------------------------------------------
 Vlun ID  Disk  Name                   Lun WWN                           Status  Capacity  Ctrl(Own/Work)  Array Name  Dev Lun ID  No. of Paths(Available/Total)
 0        sdb   RAC_BMS10_BMS9_OCR01   644227c10028c030171eeb4b0000001b  Normal  20.00GB   --/--           HW-Stor-1   27          24/24
 1        sdc   RAC_BMS10_BMS9_OCR02   644227c10028c030171ef7e60000001c  Normal  20.00GB   --/--           HW-Stor-1   28          24/24
 2        sdd   RAC_BMS10_BMS9_OCR03   644227c10028c030171f02360000001d  Normal  20.00GB   --/--           HW-Stor-1   29          24/24
 3        sde   RAC_BMS10_BMS9_MGMT01  644227c10028c030171f17e20000001e  Normal  50.00GB   --/--           HW-Stor-1   30          24/24
 4        sdf   RAC_BMS10_BMS9_DATA01  644227c10028c030171f3fa10000001f  Normal  1.50TB    --/--           HW-Stor-1   31          24/24
 5        sdg   RAC_BMS10_BMS9_DATA02  644227c10028c030171f503b00000020  Normal  1.50TB    --/--           HW-Stor-1   32          24/24
 6        sdh   RAC_BMS10_BMS9_DATA03  644227c10028c030171f5f4100000021  Normal  1.50TB    --/--           HW-Stor-1   33          24/24
 7        sdi   RAC_BMS10_BMS9_DATA04  644227c10028c030171f7ffe00000022  Normal  1.50TB    --/--           HW-Stor-1   34          24/24
 8        sdj   RAC_BMS10_BMS9_DATA05  644227c10028c030171f9dd400000023  Normal  1.50TB    --/--           HW-Stor-1   35          24/24
 9        sdk   RAC_BMS10_BMS9_DATA06  644227c10028c030171faaec00000024  Normal  1.50TB    --/--           HW-Stor-1   36          24/24
 10       sdl   RAC_BMS10_BMS9_FRA01   644227c10028c030171fb9ab00000025  Normal  1.00TB    --/--           HW-Stor-1   37          24/24
------------------------------------------------------------------------------------------------------------

Then use upLinux to set the aliases:

[root@bms-10 ~]# upLinux setDiskAlias src_name=sdb dest_alias=asm-dgocr01 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdc dest_alias=asm-dgocr02 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdd dest_alias=asm-dgocr03 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sde dest_alias=asm-dgmgmt01 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdf dest_alias=asm-dgdata01 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdg dest_alias=asm-dgdata02 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdh dest_alias=asm-dgdata03 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdi dest_alias=asm-dgdata04 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdj dest_alias=asm-dgdata05 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdk dest_alias=asm-dgdata06 owner=grid group=asmadmin method=SYMLINK
upLinux setDiskAlias src_name=sdl dest_alias=asm-dgfra01 owner=grid group=asmadmin method=SYMLINK

Check the alias settings:

[root@bms-10 ~]# upLinux showDiskAlias
----------------------------------------------------------------------------------------------------------
 ID  Alias         Lun WWN                           Disk  Type
 0   asm-dgdata01  644227c10028c030171f3fa10000001f  sdf   SYMLINK
 1   asm-dgdata02  644227c10028c030171f503b00000020  sdg   SYMLINK
 2   asm-dgdata03  644227c10028c030171f5f4100000021  sdh   SYMLINK
 3   asm-dgdata04  644227c10028c030171f7ffe00000022  sdi   SYMLINK
 4   asm-dgdata05  644227c10028c030171f9dd400000023  sdj   SYMLINK
 5   asm-dgdata06  644227c10028c030171faaec00000024  sdk   SYMLINK
 6   asm-dgfra01   644227c10028c030171fb9ab00000025  sdl   SYMLINK
 7   asm-dgmgmt01  644227c10028c030171f17e20000001e  sde   SYMLINK
 8   asm-dgocr01   644227c10028c030171eeb4b0000001b  sdb   SYMLINK
 9   asm-dgocr02   644227c10028c030171ef7e60000001c  sdc   SYMLINK
 10  asm-dgocr03   644227c10028c030171f02360000001d  sdd   SYMLINK
----------------------------------------------------------------------------------------------------------
[root@bms-10 ~]#
Note: although a command such as

upLinux setDiskAlias src_name=sdb dest_alias=asm-dgocr01 owner=grid group=asmadmin method=SYMLINK

sets the alias by device letter, the alias is actually bound to the LUN WWN behind that device letter. Device letters can change after a reboot, but the alias-to-LUN-WWN mapping stays the same.
With udev configured as above, trigger it:

/sbin/udevadm trigger --type=devices --action=change

The files 99-ultrapath-alias.rules and 99-ultrapath.rules are generated automatically, already carrying the grid ownership. Very convenient.

[root@bms-10 rules.d]# cat 99-ultrapath-alias.rules
# Do not modify this rule configuration file; otherwise, the usage of UltraPath may be affected.
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171eeb4b0000001b", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgocr01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171ef7e60000001c", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgocr02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171f02360000001d", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgocr03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171f17e20000001e", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgmgmt01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171f3fa10000001f", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgdata01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171f503b00000020", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgdata02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171f5f4100000021", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgdata03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171f7ffe00000022", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgdata04", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171f9dd400000023", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgdata05", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171faaec00000024", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgdata06", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3644227c10028c030171fb9ab00000025", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/asm-dgfra01", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@bms-10 rules.d]#
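The generated rules key on ENV{ID_SERIAL}, which for these LUNs is the LUN WWN with a leading "3" (the SCSI NAA designator prefix), so the symlink follows the LUN even when the sdX letters move. A sketch of that pattern, assuming the rule layout shown above (`make_ultrapath_rule` is my own helper, not an UltraPath tool):

```shell
make_ultrapath_rule() {
    # usage: make_ultrapath_rule <lun-wwn> <alias>
    # Emits one udev rule line in the same shape as 99-ultrapath-alias.rules.
    printf 'KERNEL=="sd*[a-z]", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3%s", ENV{DEVTYPE}=="disk", SYMLINK+="ultrapath/%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$1" "$2"
}

# make_ultrapath_rule 644227c10028c030171eeb4b0000001b asm-dgocr01
```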
12. Enlarge the shared-memory filesystem on each node

1. /dev/shm must be larger than SGA + PGA combined.
2. A common recommendation is half of the server's physical memory.
3. This server has 512 GB of memory, so /dev/shm is set to 250 GB and SGA + PGA is also capped at 250 GB:

echo "shmfs /dev/shm tmpfs size=250g 0 0" >> /etc/fstab
mount -a

[root@bms-10 rules.d]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 252G     0  252G   0% /dev
shmfs                    250G     0  250G   0% /dev/shm
tmpfs                    252G   22M  252G   1% /run
tmpfs                    252G     0  252G   0% /sys/fs/cgroup
/dev/mapper/centos-root  442G   26G  416G   6% /
/dev/sda2               1014M  171M  844M  17% /boot
/dev/sda1                200M   12M  189M   6% /boot/efi
tmpfs                     51G   12K   51G   1% /run/user/42
tmpfs                     51G     0   51G   0% /run/user/0
[root@bms-10 rules.d]#
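The sizing rule above fits in one line of arithmetic. A sketch of my own (`shm_fits` is a made-up helper) that checks whether a planned SGA + PGA fits inside the shm mount:

```shell
shm_fits() {
    # usage: shm_fits <shm_gb> <sga_gb> <pga_gb>; exit 0 if SGA+PGA fits
    [ $(( $2 + $3 )) -le "$1" ]
}

# shm_fits 250 183 50 && echo "fits in /dev/shm"
```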
13. Check the THP status, then disable transparent hugepages

Check:

# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

[always] in brackets means THP is enabled.

Edit the rc.local file:

# vi /etc/rc.d/rc.local

Add the following:

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

Save and exit, then make rc.local executable:

# chmod +x /etc/rc.d/rc.local

Reboot the system and check THP again. Note that the bracketed word has changed; [never] in brackets means THP is now disabled:

# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
14. Tune the kernel parameters

# vi /etc/sysctl.d/97-oracle-database-sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 134217728
## kernel.shmall must be greater than or equal to kernel.shmmax expressed in pages.
## The page size here is 4096 bytes:
##   # getconf PAGESIZE
##   4096
## so kernel.shmall = 512*1024*1024*1024/4096 = 134217728
kernel.shmmax = 268435456000
## kernel.shmmax is sized at roughly half of physical memory; with 512 GB here,
## 250 GB is used: 250*1024*1024*1024 = 268435456000
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Reboot the host, or run the following so the values in /etc/sysctl.d/97-oracle-database-sysctl.conf take effect in the running kernel:

[root@bms-9 ~]# /sbin/sysctl --system

Configure the UDP and TCP kernel parameters in /etc/sysctl.conf. Add or modify:

# vi /etc/sysctl.conf
net.ipv4.ip_local_port_range = 9000 65500

Restart the network service:

# /etc/rc.d/init.d/network restart
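The two derivations in the comments above can be redone as shell arithmetic for this host (512 GiB of RAM, 4 KiB pages, a 250 GiB shared-memory cap):

```shell
# kernel.shmall: total RAM in pages
echo $(( 512 * 1024 * 1024 * 1024 / 4096 ))   # prints 134217728

# kernel.shmmax: the 250 GiB cap in bytes
echo $(( 250 * 1024 * 1024 * 1024 ))          # prints 268435456000
```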
15. Configure system resource limits for the oracle and grid users on each node

cat >> /etc/security/limits.conf <<EOF
oracle soft nofile 65536
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
grid soft nofile 65536
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
* soft memlock 483183821
* hard memlock 483183821
EOF

Edit /etc/pam.d/login:

vi /etc/pam.d/login

Add the following:

#oracle setting add
session required /lib/security/pam_limits.so
session required pam_limits.so
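Where the memlock figure above plausibly comes from (an assumption on my part, consistent with Oracle's usual guidance of roughly 90% of RAM): 90% of 512 GiB, expressed in KiB, rounds to exactly that value.

```shell
# 512 GiB in KiB is 512*1024*1024; take 90% and round to the nearest integer.
awk 'BEGIN { printf "%.0f\n", 512 * 1024 * 1024 * 0.9 }'   # prints 483183821
```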
16. Configure the grid and oracle user environment variables on each node

Note: ORACLE_UNQNAME is the database name, while ORACLE_SID is the instance name.

# su - grid
$ vi ~/.bash_profile

grid user on node 1:

export TMPDIR=/tmp
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19c/grid
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
umask 022

grid user on node 2: identical, except ORACLE_SID=+ASM2.

# su - oracle
$ vi ~/.bash_profile

oracle user on node 1:

export TMPDIR=/tmp
export ORACLE_SID=prod1
export ORACLE_UNQNAME=prod
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19c/db_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export TNS_ADMIN=$ORACLE_HOME/network/admin
umask 022

oracle user on node 2: identical, except ORACLE_SID=prod2.
17. Synchronize and verify the system time on each node, then disable chronyd/NTP

systemctl stop chronyd
systemctl disable chronyd
mv /etc/chrony.conf /etc/chrony.conf.bak
18. Disable the firewall and SELinux on each node

systemctl stop firewalld
systemctl disable firewalld

vi /etc/selinux/config
SELINUX=disabled
19. Set the SSH login grace time to unlimited

# vi /etc/ssh/sshd_config
LoginGraceTime 0
20. Disable avahi

systemctl stop avahi-daemon.socket
systemctl stop avahi-daemon.service
systemctl stop avahi-daemon
systemctl disable avahi-daemon.socket
systemctl disable avahi-daemon.service
systemctl disable avahi-daemon

Configure NOZEROCONF:

echo "NOZEROCONF=yes" >> /etc/sysconfig/network
21. Control the resources allocated to users

vi /etc/pam.d/login

Add the following:

#oracle setting add
session required /lib/security/pam_limits.so
session required pam_limits.so
22. Install VNC, then reboot

[root@bms-9 ~]# yum install tigervnc*
23. Install the GI software

Go to the GI software directory /u01/app/19c/grid.

Run the cluster verification utility manually to validate the GI pre-installation requirements. From the grid software directory, run the runcluvfy.sh command:

[root@iam-db01 ~]# su - grid
[grid@iam-db01 ~]$ cd /u01/app/19.0.0/grid/
[grid@iam-db01 grid]$ ./runcluvfy.sh stage -pre crsinst -n bms-10,bms-9 -fixup -verbose
24. Install the clusterware as the grid user
25. Add the ASM disk groups with asmca
26. Software-only installation with dbca
27. Create the database with dbca

These four steps follow http://blog.itpub.net/70004783/viewspace-2791938/
28. Configure HugePages support after the database installation is complete

Before configuring:

$ uname -r
$ ipcs -m      # shows the sizes of the shared-memory segments
$ grep Huge /proc/meminfo

[root@bms-10 ~]# grep Huge /proc/meminfo
AnonHugePages:      4096 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
[root@bms-10 ~]#
[root@bms-9 ~]# grep Huge /proc/meminfo
AnonHugePages:      2048 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
[root@bms-9 ~]#
Contents of hugepages_settings.sh:

#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
# on Oracle Linux
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.
#
# This script is provided by Doc ID 401749.1 from My Oracle Support
#

# Welcome text
echo "
This script is provided by Doc ID 401749.1 from My Oracle Support
() where it is intended to compute values for
the recommended HugePages/HugeTLB configuration for the current shared
memory segments on Oracle Linux. Before proceeding with the execution
please note following:
 * For ASM instance, it needs to configure ASMM instead of AMM.
 * The 'pga_aggregate_target' is outside the SGA and
   you should accommodate this while calculating the overall size.
 * In case you changes the DB SGA size, as the new SGA will not fit in
   the previous HugePages configuration, it had better disable the whole
   HugePages, start the DB with new SGA size and run the script again.
And make sure that:
 * Oracle Database instance(s) are up and running
 * Oracle Database 11g Automatic Memory Management (AMM) is not setup
   (See Doc ID 749851.1)
 * The shared memory segments can be listed by command:
     # ipcs -m

Press Enter to proceed..."

read

# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`

# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
if [ -z "$HPG_SZ" ];then
    echo "The hugepages may not be supported in the system where the script is being executed."
    exit 1
fi

# Initialize the counter
NUM_PG=0

# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | cut -c44-300 | awk '{print $1}' | grep "[0-9][0-9]*"`
do
    MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
    if [ $MIN_PG -gt 0 ]; then
        NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
    fi
done

RES_BYTES=`echo "$NUM_PG * $HPG_SZ * 1024" | bc -q`

# An SGA less than 100MB does not make sense
# Bail out if that is the case
if [ $RES_BYTES -lt 100000000 ]; then
    echo "***********"
    echo "** ERROR **"
    echo "***********"
    echo "Sorry! There are not enough total of shared memory segments allocated for
HugePages configuration. HugePages can only be used for shared memory segments
that you can list by command:

    # ipcs -m

of a size that can match an Oracle Database SGA. Please make sure that:
 * Oracle Database instance is up and running
 * Oracle Database 11g Automatic Memory Management (AMM) is not configured"
    exit 1
fi

# Finish with results
case $KERN in
    '2.4')
        HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`
        echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
    '2.6'|'3.8'|'3.10'|'4.1'|'4.14'|'5.4')
        echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    *)
        echo "Kernel version $KERN is not supported by this script (yet). Exiting." ;;
esac

# End
Run the script as the oracle user:

[oracle@bms-10 ~]$ ./hugepages_setting.sh

This script is provided by Doc ID 401749.1 from My Oracle Support
...
Press Enter to proceed...

Recommended setting: vm.nr_hugepages = 93973

Then as root, apply the setting with sysctl -p:

[root@bms-10 ~]# sysctl -w vm.nr_hugepages=93973
vm.nr_hugepages = 93973
[root@bms-10 ~]# echo 'vm.nr_hugepages=93973' >> /etc/sysctl.conf
[root@bms-10 ~]# sysctl -p
net.ipv4.ip_local_port_range = 9000 65500
vm.nr_hugepages = 93973
[root@bms-10 ~]#
[root@bms-9 ~]# sysctl -w vm.nr_hugepages=93973
vm.nr_hugepages = 93973
[root@bms-9 ~]# echo 'vm.nr_hugepages=93973' >> /etc/sysctl.conf
[root@bms-9 ~]# sysctl -p
net.ipv4.ip_local_port_range = 9000 65500
vm.nr_hugepages = 93973
[root@bms-9 ~]#
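As a cross-check of the recommendation: 93973 pages of 2048 kB each is the memory the HugePages pool will pin, which should roughly match the combined SGAs on the host.

```shell
# 93973 pages x 2048 kB, converted to GiB (integer-truncated)
echo $(( 93973 * 2048 / 1024 / 1024 ))   # prints 183
```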
Query again:

[root@bms-9 ~]# grep Huge /proc/meminfo
AnonHugePages:      2048 kB
HugePages_Total:   93973
HugePages_Free:    93973
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
[root@bms-9 ~]#
Stop the database:

[root@bms-9 ~]# srvctl stop database -db prod

Start the database again:

[oracle@bms-10 ~]$ srvctl start database -db prod

Check HugePages again:

[root@bms-9 ~]# grep Huge /proc/meminfo
AnonHugePages:      2048 kB
HugePages_Total:   93973
HugePages_Free:      778
HugePages_Rsvd:      246
HugePages_Surp:        0
Hugepagesize:       2048 kB
[root@bms-9 ~]#
[root@bms-10 ~]# grep Huge /proc/meminfo
AnonHugePages:      4096 kB
HugePages_Total:   93973
HugePages_Free:      778
HugePages_Rsvd:      246
HugePages_Surp:        0
Hugepagesize:       2048 kB
Adding a disk to ASM

On the storage side:

1. Create the LUN on the array, following the naming convention.
2. Add the new LUN to the RAC cluster's existing LUN group.

On the host side:

1. Check whether the system can see the new disk:

[root@bms-10 ~]# lsblk | grep sd*
sda               8:0    0 446.6G  0 disk
├─sda1            8:1    0   200M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 445.4G  0 part
  ├─centos-root 253:0    0 441.4G  0 lvm  /
  └─centos-swap 253:1    0     4G  0 lvm  [SWAP]
sddk             71:32   0    20G  0 disk
sddl             71:48   0    20G  0 disk
sddm             71:64   0    50G  0 disk
sddn             71:80   0   1.5T  0 disk
sddo             71:96   0   1.5T  0 disk
sddp             71:112  0   1.5T  0 disk
sddq             71:128  0   1.5T  0 disk
sddr             71:144  0   1.5T  0 disk
sdds             71:160  0   1.5T  0 disk
sddt             71:176  0     1T  0 disk
sddu             71:192  0    20G  0 disk

Only six 1.5T data disks are still visible, so the SCSI hosts need to be rescanned:

[root@bms-10 ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@bms-10 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@bms-10 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@bms-9 ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@bms-9 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@bms-9 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
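The hard-coded host0..host2 rescans can be generalized into a loop over every SCSI host. A sketch of my own (`rescan_scsi_hosts` is a made-up helper; the base directory is a parameter so the loop can be exercised against a scratch tree):

```shell
rescan_scsi_hosts() {
    # Write "- - -" to every host*/scan file under the given sysfs base
    # (default /sys/class/scsi_host), triggering a full channel/target/LUN scan.
    local base=${1:-/sys/class/scsi_host}
    local scan
    for scan in "$base"/host*/scan; do
        [ -e "$scan" ] && echo "- - -" > "$scan"
    done
    return 0
}

# On a live node (as root): rescan_scsi_hosts
```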
Both nodes now see the new 1.5T disk as sdb:

[root@bms-9 ~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 446.6G  0 disk
├─sda1            8:1    0   200M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 445.4G  0 part
  ├─centos-root 253:0    0 441.4G  0 lvm  /
  └─centos-swap 253:1    0     4G  0 lvm  [SWAP]
sdb               8:16   0   1.5T  0 disk
sdea            128:32   0   1.5T  0 disk
sdeb            128:48   0   1.5T  0 disk
sdec            128:64   0     1T  0 disk
sded            128:80   0    20G  0 disk
sdee            128:96   0    20G  0 disk
sdef            128:112  0    20G  0 disk
sdeg            128:128  0    50G  0 disk
sdeh            128:144  0   1.5T  0 disk
sddx             71:240  0   1.5T  0 disk
sddy            128:0    0   1.5T  0 disk
sddz            128:16   0   1.5T  0 disk
2. Check with the Huawei multipath software (both nodes); DATA07 has been recognized:

[root@bms-10 ~]# upadmin
UltraPath CLI #0 >show vlun
------------------------------------------------------------------------------------------------------------
 Vlun ID  Disk  Name                   Lun WWN                           Status  Capacity  Ctrl(Own/Work)  Array Name  Dev Lun ID  No. of Paths(Available/Total)
 0        sddk  RAC_BMS10_BMS9_OCR02   644227c10028c030171ef7e60000001c  Normal  20.00GB   --/--           HW-Stor-1   28          24/24
 1        sddl  RAC_BMS10_BMS9_OCR03   644227c10028c030171f02360000001d  Normal  20.00GB   --/--           HW-Stor-1   29          24/24
 2        sddm  RAC_BMS10_BMS9_MGMT01  644227c10028c030171f17e20000001e  Normal  50.00GB   --/--           HW-Stor-1   30          24/24
 3        sddn  RAC_BMS10_BMS9_DATA01  644227c10028c030171f3fa10000001f  Normal  1.50TB    --/--           HW-Stor-1   31          24/24
 4        sddo  RAC_BMS10_BMS9_DATA02  644227c10028c030171f503b00000020  Normal  1.50TB    --/--           HW-Stor-1   32          24/24
 5        sddp  RAC_BMS10_BMS9_DATA03  644227c10028c030171f5f4100000021  Normal  1.50TB    --/--           HW-Stor-1   33          24/24
 6        sddq  RAC_BMS10_BMS9_DATA04  644227c10028c030171f7ffe00000022  Normal  1.50TB    --/--           HW-Stor-1   34          24/24
 7        sddr  RAC_BMS10_BMS9_DATA05  644227c10028c030171f9dd400000023  Normal  1.50TB    --/--           HW-Stor-1   35          24/24
 8        sdds  RAC_BMS10_BMS9_DATA06  644227c10028c030171faaec00000024  Normal  1.50TB    --/--           HW-Stor-1   36          24/24
 9        sddt  RAC_BMS10_BMS9_FRA01   644227c10028c030171fb9ab00000025  Normal  1.00TB    --/--           HW-Stor-1   37          24/24
 10       sddu  RAC_BMS10_BMS9_OCR01   644227c10028c030171eeb4b0000001b  Normal  20.00GB   --/--           HW-Stor-1   27          24/24
 11       sdb   RAC_BMS10_BMS9_DATA07  644227c10028c03024fa050c00000026  Normal  1.50TB    --/--           HW-Stor-1   38          12/12
------------------------------------------------------------------------------------------------------------
UltraPath CLI #1 >
Set the alias with the Huawei multipath software on both nodes. Note the new disk's device letter at this point; here it is sdb on both nodes:

[root@bms-10 ~]# upLinux setDiskAlias src_name=sdb dest_alias=asm-dgdata07 owner=grid group=asmadmin method=SYMLINK
Succeeded in executing the command.
[root@bms-10 ~]#
[root@bms-9 ~]# upLinux setDiskAlias src_name=sdb dest_alias=asm-dgdata07 owner=grid group=asmadmin method=SYMLINK
Succeeded in executing the command.
[root@bms-9 ~]#

Check:

[root@bms-9 ~]# upLinux showDiskAlias
----------------------------------------------------------------------------------------------------------
 ID  Alias         Lun WWN                           Disk  Type
 0   asm-dgdata01  644227c10028c030171f3fa10000001f  sdeh  SYMLINK
 1   asm-dgdata02  644227c10028c030171f503b00000020  sddx  SYMLINK
 2   asm-dgdata03  644227c10028c030171f5f4100000021  sddy  SYMLINK
 3   asm-dgdata04  644227c10028c030171f7ffe00000022  sddz  SYMLINK
 4   asm-dgdata05  644227c10028c030171f9dd400000023  sdea  SYMLINK
 5   asm-dgdata06  644227c10028c030171faaec00000024  sdeb  SYMLINK
 6   asm-dgdata07  644227c10028c03024fa050c00000026  sdb   SYMLINK
 7   asm-dgfra01   644227c10028c030171fb9ab00000025  sdec  SYMLINK
 8   asm-dgmgmt01  644227c10028c030171f17e20000001e  sdeg  SYMLINK
 9   asm-dgocr01   644227c10028c030171eeb4b0000001b  sded  SYMLINK
 10  asm-dgocr02   644227c10028c030171ef7e60000001c  sdee  SYMLINK
 11  asm-dgocr03   644227c10028c030171f02360000001d  sdef  SYMLINK
----------------------------------------------------------------------------------------------------------
Check the permissions:

[root@bms-9 ~]# ll /dev/sd*
brw-rw---- 1 root disk       8,   0 Jan 5 17:17 /dev/sda
brw-rw---- 1 root disk       8,   1 Jan 5 17:17 /dev/sda1
brw-rw---- 1 root disk       8,   2 Jan 5 17:17 /dev/sda2
brw-rw---- 1 root disk       8,   3 Jan 5 17:17 /dev/sda3
brw-rw---- 1 grid asmadmin   8,  16 Jan 7 11:15 /dev/sdb
brw-rw---- 1 grid asmadmin  71, 240 Jan 7 11:16 /dev/sddx
brw-rw---- 1 grid asmadmin 128,   0 Jan 7 11:15 /dev/sddy
brw-rw---- 1 grid asmadmin 128,  16 Jan 7 11:16 /dev/sddz
brw-rw---- 1 grid asmadmin 128,  32 Jan 7 11:16 /dev/sdea
brw-rw---- 1 grid asmadmin 128,  48 Jan 7 11:16 /dev/sdeb
brw-rw---- 1 grid asmadmin 128,  64 Jan 7 11:16 /dev/sdec
brw-rw---- 1 grid asmadmin 128,  80 Jan 7 11:16 /dev/sded
brw-rw---- 1 grid asmadmin 128,  96 Jan 7 11:16 /dev/sdee
brw-rw---- 1 grid asmadmin 128, 112 Jan 7 11:16 /dev/sdef
brw-rw---- 1 grid asmadmin 128, 128 Jan 7 11:16 /dev/sdeg
brw-rw---- 1 grid asmadmin 128, 144 Jan 7 11:16 /dev/sdeh
[root@bms-9 ~]#
[root@bms-9 ~]# ll /dev/ultrapath/asm*
lrwxrwxrwx 1 root root 7 Jan 7 11:11 /dev/ultrapath/asm-dgdata01 -> ../sdeh
lrwxrwxrwx 1 root root 7 Jan 7 11:11 /dev/ultrapath/asm-dgdata02 -> ../sddx
lrwxrwxrwx 1 root root 7 Jan 7 10:18 /dev/ultrapath/asm-dgdata03 -> ../sddy
lrwxrwxrwx 1 root root 7 Jan 7 11:11 /dev/ultrapath/asm-dgdata04 -> ../sddz
lrwxrwxrwx 1 root root 7 Jan 7 11:01 /dev/ultrapath/asm-dgdata05 -> ../sdea
lrwxrwxrwx 1 root root 7 Jan 7 11:17 /dev/ultrapath/asm-dgdata06 -> ../sdeb
lrwxrwxrwx 1 root root 6 Jan 7 11:15 /dev/ultrapath/asm-dgdata07 -> ../sdb
lrwxrwxrwx 1 root root 7 Jan 7 11:17 /dev/ultrapath/asm-dgfra01 -> ../sdec
lrwxrwxrwx 1 root root 7 Jan 7 11:17 /dev/ultrapath/asm-dgmgmt01 -> ../sdeg
lrwxrwxrwx 1 root root 7 Jan 7 11:17 /dev/ultrapath/asm-dgocr01 -> ../sded
lrwxrwxrwx 1 root root 7 Jan 7 10:18 /dev/ultrapath/asm-dgocr02 -> ../sdee
lrwxrwxrwx 1 root root 7 Jan 7 10:18 /dev/ultrapath/asm-dgocr03 -> ../sdef
[root@bms-9 ~]#
Before adding the disk in Oracle:

[grid@bms-10 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304   9437184  9407876                0         9407876              0             N  DGDATA/
MOUNTED  EXTERN  N         512             512   4096  4194304   1048576  1008152                0         1008152              0             N  DGFRA/
MOUNTED  EXTERN  N         512             512   4096  4194304     51200    27516                0           27516              0             N  DGMGMT/
MOUNTED  NORMAL  N         512             512   4096  4194304     61440    60524            20480           20022              0             Y  DGOCR/
ASMCMD>
Add the disk to the DGDATA disk group:

[grid@bms-10 ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 7 11:24:31 2022
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter diskgroup DGDATA add disk '/dev/ultrapath/asm-dgdata07';

Diskgroup altered.

SQL> alter diskgroup DGDATA rebalance power 10;

Diskgroup altered.

SQL>

Be sure to use the '/dev/ultrapath/asm-dgdata07' path; that way a reboot causes no problems.
After adding the disk:

ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB   Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304  11010048  10980464                0        10980464              0             N  DGDATA/
MOUNTED  EXTERN  N         512             512   4096  4194304   1048576   1005496                0         1005496              0             N  DGFRA/
MOUNTED  EXTERN  N         512             512   4096  4194304     51200     27516                0           27516              0             N  DGMGMT/
MOUNTED  NORMAL  N         512             512   4096  4194304     61440     60524            20480           20022              0             Y  DGOCR/
ASMCMD>

The new totals confirm the disk was added successfully. Very convenient.
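The change in the lsdg output is easy to verify arithmetically: one new 1.5 TB LUN is 1572864 MB, which raises DGDATA's Total_MB from the pre-add figure to the post-add one.

```shell
# 1.5 TB = 1.5 * 1024 * 1024 MB = 1572864 MB added to the old Total_MB
echo $(( 9437184 + 1572864 ))   # prints 11010048, the new DGDATA Total_MB
```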
From the "ITPUB blog". Link: http://blog.itpub.net/70004783/viewspace-2850188/. Please credit the source when reposting; legal action may otherwise be pursued.