Step by Step: Oracle 11gR2 RAC + DG on RHEL 6.5 + VMware Workstation 10, Part 5: Database Installation
-
The main pitfall in this step is that the installer may fail to find the ASM disk groups. Don't panic: that can be resolved step by step. This part covers installing and configuring the database.
-
Install the database
Installer log: tail -f /u01/app/oraInventory/logs/installActions2014-06-05_01-30-25AM.log
Unpack the installation files:
[oracle@localhost ~]$ ll linux*
-rw-r--r-- 1 oracle oinstall 1239269270 Apr 18 20:44 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Apr 18 20:47 linux.x64_11gR2_database_2of2.zip
[oracle@localhost ~]$ unzip linux.x64_11gR2_database_1of2.zip && unzip linux.x64_11gR2_database_2of2.zip
Install on rac1 as the oracle user:
[oracle@rac1 database]$ export DISPLAY=192.168.1.100:0.0
[oracle@rac1 database]$ xhost +
access control disabled, clients can connect from any host
[oracle@rac1 database]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 10840 MB Passed
Checking swap space: must be greater than 150 MB. Actual 1599 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-06-05_01-30-25AM. Please wait ...
Install the Oracle Database software. The installer walks through a series of GUI screens (screenshots not reproduced here).
You may hit error INS-35354 here: "The system on which you are attempting to install Oracle RAC is not part of a valid cluster."
Fix:
Edit the inventory file: vi /u01/app/oraInventory/ContentsXML/inventory.xml
Change:
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" >
to:
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
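The same edit can be scripted with sed if you prefer (a sketch, assuming the default inventory location used in this series; back the file up first and repeat on every node):
[root@rac1 ~]# cp /u01/app/oraInventory/ContentsXML/inventory.xml /u01/app/oraInventory/ContentsXML/inventory.xml.bak
[root@rac1 ~]# sed -i 's|\(LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1"\) *>|\1 CRS="true">|' /u01/app/oraInventory/ContentsXML/inventory.xml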
The installer is also slow around 56%, at the RMAN tool step; just wait.
-
It is slow around 94% as well.
At this point the software is being copied to rac2; you can watch the size of the remote Oracle home to confirm the installer is still making progress rather than hanging:
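For example (run it a few times from rac1; if the reported size keeps growing, the copy is still in progress):
[oracle@rac1 ~]$ ssh rac2 du -sh /u01/app/oracle/product/11.2.0/dbhome_1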
-
Run the root scripts:
On each of the two nodes, run the script shown by the installer as root, then click OK.
[root@rac1 app]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@rac1 app]#
On node two:
[root@rac2 11.2.0]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 11.2.0]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@rac2 11.2.0]#
The database software installation is complete; click Close to exit.
With the software installed, you can do a quick test on both nodes. Because no database has been created yet, SQL*Plus reports "Connected to an idle instance", which is expected:
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Wed Oct 1 22:42:56 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to an idle instance.
SQL>
-
Create the database with DBCA
Run DBCA on node one as the oracle user.
Note that both nodes must be selected in this step.
Enterprise Manager (EM) can be left unconfigured here; otherwise it consumes too many resources.
For the Fast Recovery Area, select the FRADG disk group (a silent-mode alternative is sketched below).
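For reference, roughly the same database could be created without the GUI by running DBCA in silent mode. This is only a sketch: the database name, node names, and disk groups follow this series, the passwords are placeholders, and the exact flag set should be checked against dbca -help for your 11.2 release:
[oracle@node1 ~]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName zhongwc -sid zhongwc \
  -nodelist node1,node2 \
  -storageType ASM -diskGroupName DATADG -recoveryGroupName FRADG \
  -characterSet AL32UTF8 \
  -sysPassword change_me -systemPassword change_me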
-
Log location
You can follow the DBCA database-creation log.
Path: /u01/app/oracle/cfgtoollogs/dbca/racdb
tail -f /u01/app/oracle/cfgtoollogs/dbca/racdb/trace.log
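Once the instance comes up, its alert log is also worth watching (the path below assumes the standard 11.2 ADR layout and the zhongwc database/instance names used in the verification section):
[oracle@node1 ~]$ tail -f /u01/app/oracle/diag/rdbms/zhongwc/zhongwc1/trace/alert_zhongwc1.log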
-
Verification
Verify that the clustered database is open:
[grid@node1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.DATADG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.FRADG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.LISTENER.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.asm
ONLINE ONLINE node1 Started
ONLINE ONLINE node2 Started
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE node1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE node1
ora.cvu
1 ONLINE ONLINE node1
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.oc4j
1 ONLINE ONLINE node1
ora.scan1.vip
1 ONLINE ONLINE node2
ora.scan2.vip
1 ONLINE ONLINE node1
ora.scan3.vip
1 ONLINE ONLINE node1
ora.zhongwc.db
1 ONLINE ONLINE node1 Open
2 ONLINE ONLINE node2 Open
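To look at just the database resource instead of the full list, you can query it by name:
[grid@node1 ~]$ crsctl stat res ora.zhongwc.db -t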
Check the health of the cluster:
[grid@node1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@node1 ~]$
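Note that crsctl check cluster reports on the local node by default; to check every node in one pass, add the -all flag:
[grid@node1 ~]$ crsctl check cluster -all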
All Oracle instances:
[grid@node1 ~]$ srvctl status database -d zhongwc
Instance zhongwc1 is running on node node1
Instance zhongwc2 is running on node node2
A single Oracle instance:
[grid@node1 ~]$ srvctl status instance -d zhongwc -i zhongwc1
Instance zhongwc1 is running on node node1
Node application status:
[grid@node1 ~]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
Node application configuration:
[grid@node1 ~]$ srvctl config nodeapps
Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static
VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1
VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
Database configuration:
[grid@node1 ~]$ srvctl config database -d zhongwc -a
Database unique name: zhongwc
Database name: zhongwc
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATADG/zhongwc/spfilezhongwc.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: zhongwc
Database instances: zhongwc1,zhongwc2
Disk Groups: DATADG,FRADG
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
ASM status:
[grid@node1 ~]$ srvctl status asm
ASM is running on node2,node1
ASM configuration:
[grid@node1 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
TNS listener status:
[grid@node1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): node2,node1
TNS listener configuration:
[grid@node1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) node2,node1
End points: TCP:1521
Node application configuration (VIP, GSD, ONS, and listener):
[grid@node1 ~]$ srvctl config nodeapps -a -g -s -l
Warning:-l option has been deprecated and will be ignored.
Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static
VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1
VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) node2,node1
End points: TCP:1521
SCAN status:
[grid@node1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
[grid@node1 ~]$
SCAN configuration:
[grid@node1 ~]$ srvctl config scan
SCAN name: cluster-scan.localdomain, Network: 1/192.168.0.0/255.255.0.0/eth0
SCAN VIP name: scan1, IP: /cluster-scan.localdomain/192.168.1.57
SCAN VIP name: scan2, IP: /cluster-scan.localdomain/192.168.1.58
SCAN VIP name: scan3, IP: /cluster-scan.localdomain/192.168.1.59
[grid@node1 ~]$
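With the SCAN in place, you can verify end-to-end connectivity by connecting through the SCAN name with EZConnect (a quick check; substitute a user and password valid in your database):
[oracle@node1 ~]$ sqlplus system@//cluster-scan.localdomain:1521/zhongwc
The SCAN listeners load-balance each connection and hand it off to one of the instances.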
-
Verify clock synchronization across all cluster nodes:
[grid@node1 ~]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status
------------------------------------ ------------------------
node1 passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name State
------------------------------------ ------------------------
node1 Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
Node Name Time Offset Status
------------ ------------------------ ------------------------
node1 0.0 passed
Time offset is within the specified limits on the following set of nodes:
"[node1]"
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.
-
Log in and check the instances:
[oracle@node1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 29 14:30:08 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> col host_name format a20
SQL> set linesize 200
SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;
INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS ACTIVE_ST INSTANCE_ROLE DATABASE_STATUS
---------------- -------------------- ----------------- ----------------------- ------------ --------- ------------------ -----------------
zhongwc1 node1.localdomain 11.2.0.3.0 29-DEC-2012 13:55:55 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
zhongwc2 node2.localdomain 11.2.0.3.0 29-DEC-2012 13:56:07 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
[grid@node1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 29 14:31:04 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> select name from v$asm_diskgroup;
NAME
------------------------------------------------------------
CRS
DATADG
FRADG
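The same disk groups can also be listed from the grid user with asmcmd, which ships with Grid Infrastructure; lsdg additionally shows state, redundancy, and free space:
[grid@node1 ~]$ asmcmd lsdg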
Source: ITPUB blog, http://blog.itpub.net/29337971/viewspace-1819886/