Installing 11g RAC on Virtual RHEL 5 - Installation and Configuration

Posted by jinqibingl on 2012-10-04

Source: IT168, 2009-03-13

This article describes how to install Oracle 11g Release 1 RAC on Red Hat Enterprise Linux 5, using VMware ESX Server for virtualization and NFS as the shared storage.

  Introduction

  ESX Server is VMware's enterprise-class hypervisor. It is installed on bare metal and is far more efficient than desktop virtualization tools. This article uses ESX Server to provide the infrastructure for an Oracle Real Application Clusters (RAC) installation.

  This article assumes that VMware ESX Server and a VMware Infrastructure Client are already installed; refer to VMware's documentation for installation instructions.

  Download the Software

  Required software: Red Hat Enterprise Linux (RHEL) 5 and the Oracle 11g Release 1 (11.1) Clusterware and database software. Download both before you begin.

  Virtual Machine Setup

  In the left-hand pane, right-click the ESX server and select the "New Virtual Machine..." menu option.


  Select the "Custom" option and click the "Next" button.


  Enter the name you want the virtual machine to be listed under in the right-hand pane of the client (RAC1), then click the "Next" button.


  Select the default datastore, then click the "Next" button.


  Select the "Linux" guest operating system and the "Red Hat Enterprise Linux 5" version, then click the "Next" button.


  Select the number of processors required for the virtual machine, then click the "Next" button.

 

  A minimum of 1G of memory is needed to complete this installation. Enter the amount of memory required for the virtual machine, then click the "Next" button.


  At least two network adapters are required: one for the public and virtual IP addresses, and a separate one for the private IP address. Select the required number and type of network adapters, then click the "Next" button.


  Accept the default storage adapter and click the "Next" button.


  Accept the "Create a new virtual disk" option and click the "Next" button.


  Because NFS shared storage is used for the Oracle home and the database files, very little disk space is needed on each virtual machine. Assuming 1G of memory, you will define 2G of swap space, so a 10G disk is sufficient. If you use more memory, increase the disk space accordingly. Enter an appropriate disk capacity, then click the "Next" button.


  The local disk does not need to be shared, so click the "Next" button and ignore the advanced options.


  If you are happy with the summary information, click the "Finish" button.


  The virtual machine is now visible in the left-hand pane.


  Repeat this process to create the second node (RAC2).


  To start a virtual machine, click the play button on the toolbar.


  The virtual machine will attempt to boot from the installation media or the network.

  Place the RHEL 5 DVD in the DVD drive of the client PC and click the play button on the toolbar to start the virtual machine. The right-hand pane shows the guest boot loader of the VMware ESX server, followed by the RHEL 5 installation screen.


  From this point the installation proceeds like a normal operating system install, but make sure you allocate at least 2G of swap space, disable the firewall and SELinux (see the sketch after the package list), and install the following package groups:

   GNOME Desktop Environment

   Editors

   Graphical Internet

   Text-based Internet

   Development Libraries

   Development Tools

   Server Configuration Tools

   Administration Tools

   Base

   System Tools

   X Window System
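
  If the firewall was not disabled during the installation, it can be switched off afterwards as the root user. The following is a minimal sketch using the standard RHEL 5 service names; include the ip6tables lines only if that service is present on your build.

  # Stop the firewall now and prevent it from starting at boot (run on both nodes as root).
  service iptables stop
  chkconfig iptables off
  service ip6tables stop
  chkconfig ip6tables off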

  To stay consistent with the rest of this article, the following settings must be used during the installation:

  RAC1:

  Hostname: rac1.localdomain

  eth0 IP address: 10.1.10.201 (public)

  eth0 default gateway: 10.1.10.1 (public)

  eth1 IP address: 10.1.9.201 (private)

  eth1 default gateway: none

  RAC2:

  Hostname: rac2.localdomain

  eth0 IP address: 10.1.10.202 (public)

  eth0 default gateway: 10.1.10.2 (public)

  eth1 IP address: 10.1.9.202 (private)

  eth1 default gateway: none

  You are free to change the IP addresses to suit your network, but remember to keep these changes consistent with the rest of the article.
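
  Once each node is up, it is worth confirming that the hostname and interface settings match the values above. A quick check on RAC1 might look like the following sketch (repeat on RAC2 with its own addresses; the expected values in the comments come from the settings above):

  # Run as root on RAC1.
  hostname                           # should return rac1.localdomain
  ifconfig eth0 | grep "inet addr"   # should show 10.1.10.201
  ifconfig eth1 | grep "inet addr"   # should show 10.1.9.201
  route -n | grep "^0.0.0.0"         # default gateway should be 10.1.10.1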

  Once the basic installation is complete, you must install some additional packages while logged in as the root user. If you have an internet connection, you can download and install them with the following command:

  yum install binutils elfutils-libelf glibc glibc-common libaio \

  libgcc libstdc++ make compat-libstdc++-33 elfutils-libelf-devel \

  glibc-headers glibc-devel libgomp gcc gcc-c++ libaio-devel \

  libstdc++-devel unixODBC unixODBC-devel sysstat

  Alternatively, install them from the RHEL 5 DVD:

  # From Enterprise Linux 5.2 DVD

  cd /media/dvd/Server

  rpm -Uvh binutils-2.*

  rpm -Uvh elfutils-libelf-0.*

  rpm -Uvh glibc-2.*

  rpm -Uvh glibc-common-2.*

  rpm -Uvh libaio-0.*

  rpm -Uvh libgcc-4.*

  rpm -Uvh libstdc++-4.*

  rpm -Uvh make-3.*

  rpm -Uvh compat-libstdc++-33*

  rpm -Uvh elfutils-libelf-devel-*

  rpm -Uvh glibc-headers*

  rpm -Uvh glibc-devel-2.*

  rpm -Uvh libgomp*

  rpm -Uvh gcc-4.*

  rpm -Uvh gcc-c++-4.*

  rpm -Uvh libaio-devel-0.*

  rpm -Uvh libstdc++-devel-4.*

  rpm -Uvh unixODBC-2.*

  rpm -Uvh unixODBC-devel-2.*

  rpm -Uvh sysstat-7.*

  cd /

  eject

  Remember to install the VMware Tools once the guest operating system installation is complete.
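
  The exact VMware Tools procedure depends on the ESX version, but a typical manual installation from inside the guest looks roughly like the sketch below, after selecting "Install VMware Tools" for the virtual machine in the VI Client. The tarball name is illustrative and will vary.

  # Run as root inside the guest once the Tools CD image is connected.
  mount /dev/cdrom /mnt
  cd /tmp
  tar -xzf /mnt/VMwareTools-*.tar.gz   # exact file name varies by ESX version
  cd vmware-tools-distrib
  ./vmware-install.pl                  # accepting the defaults is usually fine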

  Oracle Installation Prerequisites

  Perform the following steps while logged into the RAC1 virtual machine as the root user.

  The /etc/hosts file must contain the following entries:

  127.0.0.1 localhost.localdomain localhost

  # Public

  10.1.10.201 rac1.localdomain rac1

  10.1.10.202 rac2.localdomain rac2

  #Private

  10.1.9.201 rac1-priv.localdomain rac1-priv

  10.1.9.202 rac2-priv.localdomain rac2-priv

  #Virtual

  10.1.10.203 rac1-vip.localdomain rac1-vip

  10.1.10.204 rac2-vip.localdomain rac2-vip

  #NAS

  10.1.10.61 nas1.localdomain nas1

  Add the following lines to the /etc/sysctl.conf file:

  kernel.shmmni = 4096

  # semaphores: semmsl, semmns, semopm, semmni

  kernel.sem = 250 32000 100 128

  net.ipv4.ip_local_port_range = 1024 65000

  net.core.rmem_default=4194304

  net.core.rmem_max=4194304

  net.core.wmem_default=262144

  net.core.wmem_max=262144

  # Additional and amended parameters suggested by Kevin Closson

  #net.core.rmem_default = 524288

  #net.core.wmem_default = 524288

  #net.core.rmem_max = 16777216

  #net.core.wmem_max = 16777216

  net.ipv4.ipfrag_high_thresh=524288

  net.ipv4.ipfrag_low_thresh=393216

  net.ipv4.tcp_rmem=4096 524288 16777216

  net.ipv4.tcp_wmem=4096 524288 16777216

  net.ipv4.tcp_timestamps=0

  net.ipv4.tcp_sack=0

  net.ipv4.tcp_window_scaling=1

  net.core.optmem_max=524287

  net.core.netdev_max_backlog=2500

  sunrpc.tcp_slot_table_entries=128

  sunrpc.udp_slot_table_entries=128

  net.ipv4.tcp_mem=16384 16384 16384

  Run the following command to apply the kernel parameter changes to the running system:

  /sbin/sysctl -p

  Add the following lines to the /etc/security/limits.conf file:

  oracle soft nproc 2047

  oracle hard nproc 16384

  oracle soft nofile 1024

  oracle hard nofile 65536

  Add the following lines to the /etc/pam.d/login file, if they are not already present:

  session required /lib/security/pam_limits.so

  session required pam_limits.so

  Disable SELinux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows:

  SELINUX=disabled

  Alternatively, this change can be made using the GUI tool (System > Administration > Security Level and Firewall). Click the SELinux tab and select the "Disabled" option.
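
  After the next reboot you can confirm that SELinux really is disabled, for example:

  # Run as root on both nodes.
  getenforce   # should return "Disabled"
  sestatus     # should report "SELinux status: disabled"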

  Create the new groups and user:

  groupadd oinstall

  groupadd dba

  groupadd oper

  groupadd asmadmin

  useradd -u 500 -g oinstall -G dba,oper,asmadmin oracle

  passwd oracle

  Configure SSH on each node in the cluster. Log in as the oracle user on each node and perform the following tasks:

  su - oracle

  mkdir ~/.ssh

  chmod 700 ~/.ssh

  /usr/bin/ssh-keygen -t rsa # Accept the default settings.

  exit

  The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file.

  Log in as the oracle user on RAC1, generate an authorized_keys file, and copy it to RAC2:

  su - oracle

  cd ~/.ssh

  cat id_rsa.pub >> authorized_keys

  scp authorized_keys rac2:/home/oracle/.ssh/

  exit

  Next, log in as the oracle user on RAC2 and run the following commands:

  su - oracle

  cd ~/.ssh

  cat id_rsa.pub >> authorized_keys

  scp authorized_keys rac1:/home/oracle/.ssh/

  exit

  The authorized_keys file on both servers now contains the public keys of all the nodes.

  To establish SSH user equivalence between the cluster member nodes, run the following commands on each node:

  su - oracle

  ssh rac1 date

  ssh rac2 date

  ssh rac1.localdomain date

  ssh rac2.localdomain date

  exec /usr/bin/ssh-agent $SHELL

  /usr/bin/ssh-add

  SSH and SCP should now work between the two servers without prompting for a password.

  Log in as the oracle user and add the following lines to the end of the .bash_profile file:

  # Oracle Settings

  TMP=/tmp; export TMP

  TMPDIR=$TMP; export TMPDIR

  ORACLE_HOSTNAME=rac1.localdomain; export ORACLE_HOSTNAME

  ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

  ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1; export ORACLE_HOME

  ORACLE_SID=RAC1; export ORACLE_SID

  ORACLE_TERM=xterm; export ORACLE_TERM

  PATH=/usr/sbin:$PATH; export PATH

  PATH=$ORACLE_HOME/bin:$PATH; export PATH

  LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

  CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

  if [ $USER = "oracle" ]; then

  if [ $SHELL = "/bin/ksh" ]; then

  ulimit -p 16384

  ulimit -n 65536

  else

  ulimit -u 16384 -n 65536

  fi

  fi

  Remember to set the correct values for ORACLE_SID and ORACLE_HOSTNAME on the second node, as shown below.
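
  For reference, the only lines that should differ in the .bash_profile on RAC2 are the following (a sketch based on the naming used throughout this article):

  ORACLE_HOSTNAME=rac2.localdomain; export ORACLE_HOSTNAME
  ORACLE_SID=RAC2; export ORACLE_SID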

  This installation uses NFS to provide the shared storage for RAC. Adjust the following statements to suit your NAS or NFS server.

  If you are using a third Linux server to provide the NFS service, create the shared directories as follows:

  mkdir /shared_config

  mkdir /shared_crs

  mkdir /shared_home

  mkdir /shared_data

  將下列語句新增到/etc/exports檔案:

  /shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  /shared_crs *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  /shared_home *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  /shared_data *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  Run the following commands to export the NFS shares:

  chkconfig nfs on

  service nfs restart

  If you are using a NAS or some other NFS-capable storage device, create the same four shares there.
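
  Whichever server provides the shares, you can confirm that they are visible from the RAC nodes before adding the mount entries. For example, assuming the NFS server is reachable as nas1:

  # Run as root on rac1 and rac2.
  showmount -e nas1
  # The export list should include /shared_config, /shared_crs, /shared_home and /shared_data.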

  On both RAC1 and RAC2, create the directories in which the Oracle software will be installed:

  mkdir -p /u01/app/crs/product/11.1.0/crs

  mkdir -p /u01/app/oracle/product/11.1.0/db_1

  mkdir -p /u01/oradata

  mkdir -p /u01/shared_config

  chown -R oracle:oinstall /u01/app /u01/app/oracle /u01/oradata /u01/shared_config

  chmod -R 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config

  Add the following lines to the /etc/fstab file on each server. The mount options are based on the recommendations in Oracle Metalink note 359515.1:

  nas1:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 0 0

  nas1:/shared_crs /u01/app/crs/product/11.1.0/crs nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0

  nas1:/shared_home /u01/app/oracle/product/11.1.0/db_1 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0

  nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0

  Log in as the root user on both servers and run the following commands to mount the NFS shares; a quick check is shown after the commands:

  mount /u01/shared_config

  mount /u01/app/crs/product/11.1.0/crs

  mount /u01/app/oracle/product/11.1.0/db_1

  mount /u01/oradata
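
  A simple way to confirm that all four shares are mounted is shown below; the device names assume the nas1 server used in this article.

  # Run as root on both nodes.
  df -h | grep nas1
  # Four rows should be listed, one per NFS mount defined in /etc/fstab.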

  Create the shared CRS configuration and voting disk files:

  touch /u01/shared_config/ocr_configuration

  touch /u01/shared_config/voting_disk

  Log in as the root user on each server and run the following commands to make sure the permissions on the shared directories are set correctly:

  chown -R oracle:oinstall /u01/shared_config

  chown -R oracle:oinstall /u01/app/crs/product/11.1.0/crs

  chown -R oracle:oinstall /u01/app/oracle/product/11.1.0/db_1

  chown -R oracle:oinstall /u01/oradata

  Before starting the Clusterware installation, run runcluvfy.sh from the clusterware directory to check that the prerequisites have been met:

  /mountpoint/clusterware/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

  If you receive any failure messages, correct them before continuing with the installation.

  Installing the Clusterware Software

  Unzip the Clusterware and database software:

  unzip linux_11gR1_clusterware.zip

  unzip linux_11gR1_database.zip

  Log in to RAC1 as the oracle user and start the installer:

  cd clusterware

  ./runInstaller

  On the "Welcome" screen, click the "Next" button.


  Accept the default inventory directory and click the "Next" button.


  Enter "/u01/app/crs/product/11.1.0/crs" as the ORACLE_HOME and click the "Next" button.


  Wait while the prerequisite checks complete. Correct and retest any failures, making sure all the prerequisite checks pass, then click the "Next" button.


  The "Specify Cluster Configuration" screen shows only the RAC1 node. Click the "Add" button to continue.


  Enter the details of the RAC2 node and click the "OK" button.


  Click the "Next" button to continue.


  The "Specify Network Interface Usage" screen defines how each network interface will be used. Highlight the "eth0" interface and click the "Edit" button.


  Set the "eth0" interface type to "Public" and click the "OK" button.


  Leave the "eth1" interface as private and click the "Next" button.


  Select the "External Redundancy" option, enter "/u01/shared_config/ocr_configuration" as the OCR location, and click the "Next" button. For greater redundancy, you would need to define an alternative location on a separate shared disk.


  Select the "External Redundancy" option, enter "/u01/shared_config/voting_disk" as the voting disk location, and click the "Next" button. For greater redundancy, you would need to define an alternative location on a separate shared disk.


  On the "Summary" screen, click the "Install" button to continue.


  Wait while the installation takes place.


  Once the installation is complete, run the orainstRoot.sh and root.sh scripts shown on the screen on both nodes.


  The output from running orainstRoot.sh should look like the following:

  # cd /u01/app/oraInventory

  # ./orainstRoot.sh

  Changing permissions of /u01/app/oraInventory to 770.

  Changing groupname of /u01/app/oraInventory to oinstall.

  The execution of the script is complete

  #

  The output of root.sh depends on the node it is run on. The following text is the output from the RAC1 node.

  # cd /u01/app/crs/product/11.1.0/crs

  # ./root.sh

  WARNING: directory '/u01/app/crs/product/11.1.0' is not owned by root

  WARNING: directory '/u01/app/crs/product' is not owned by root

  WARNING: directory '/u01/app/crs' is not owned by root

  WARNING: directory '/u01/app' is not owned by root

  Checking to see if Oracle CRS stack is already configured

  /etc/oracle does not exist. Creating it now.

  Setting the permissions on OCR backup directory

  Setting up Network socket directories

  Oracle Cluster Registry configuration upgraded successfully

  The directory '/u01/app/crs/product/11.1.0' is not owned by root. Changing owner to root

  The directory '/u01/app/crs/product' is not owned by root. Changing owner to root

  The directory '/u01/app/crs' is not owned by root. Changing owner to root

  The directory '/u01/app' is not owned by root. Changing owner to root

  Successfully accumulated necessary OCR keys.

  Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

  node :

  node 1: rac1 rac1-priv rac1

  node 2: rac2 rac2-priv rac2

  Creating OCR keys for user 'root', privgrp 'root'..

  Operation successful.

  Now formatting voting device: /u01/shared_config/voting_disk

  Format of 1 voting devices complete.

  Startup will be queued to init within 30 seconds.

  Adding daemons to inittab

  Expecting the CRS daemons to be up within 600 seconds.

  Cluster Synchronization Services is active on these nodes.

  rac1

  Cluster Synchronization Services is inactive on these nodes.

  rac2

  Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.

  #

  The following output comes from the RAC2 node.

  # cd /u01/app/crs/product/11.1.0/crs

  # ./root.sh

  WARNING: directory '/u01/app/crs/product/11.1.0' is not owned by root

  WARNING: directory '/u01/app/crs/product' is not owned by root

  WARNING: directory '/u01/app/crs' is not owned by root

  WARNING: directory '/u01/app' is not owned by root

  Checking to see if Oracle CRS stack is already configured

  /etc/oracle does not exist. Creating it now.

  Setting the permissions on OCR backup directory

  Setting up Network socket directories

  Oracle Cluster Registry configuration upgraded successfully

  The directory '/u01/app/crs/product/11.1.0' is not owned by root. Changing owner to root

  The directory '/u01/app/crs/product' is not owned by root. Changing owner to root

  The directory '/u01/app/crs' is not owned by root. Changing owner to root

  The directory '/u01/app' is not owned by root. Changing owner to root

  clscfg: EXISTING configuration version 4 detected.

  clscfg: version 4 is 11 Release 1.

  Successfully accumulated necessary OCR keys.

  Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

  node :

  node 1: rac1 rac1-priv rac1

  node 2: rac2 rac2-priv rac2

  clscfg: Arguments check out successfully.

  NO KEYS WERE WRITTEN. Supply -force parameter to override.

  -force is destructive and will destroy any previous cluster

  configuration.

  Oracle Cluster Registry for cluster has already been initialized

  Startup will be queued to init within 30 seconds.

  Adding daemons to inittab

  Expecting the CRS daemons to be up within 600 seconds.

  Cluster Synchronization Services is active on these nodes.

  rac1

  rac2

  Cluster Synchronization Services is active on all the nodes.

  Waiting for the Oracle CRSD and EVMD to start

  Waiting for the Oracle CRSD and EVMD to start

  Oracle CRS stack installed and running under init(1M)

  Running vipca(silent) for configuring nodeapps

  Creating VIP application resource on (2) nodes...

  Creating GSD application resource on (2) nodes...

  Creating ONS application resource on (2) nodes...

  Starting VIP application resource on (2) nodes...

  Starting GSD application resource on (2) nodes...

  Starting ONS application resource on (2) nodes...

  Done.

  #

  Here you can see that some of the configuration steps are omitted because they were already performed on the first node. In addition, the final part of the script runs the Virtual IP Configuration Assistant (VIPCA) in silent mode.

  You should now return to the "Execute Configuration Scripts" screen on RAC1 and click the "OK" button.


  Wait for the Configuration Assistants to complete.


  When the installation is complete, click the "Exit" button to leave the installer.


  The Clusterware installation is now complete.

  Once the installation is finished, the "$ORACLE_HOME/network/admin/listener.ora" file in the shared ORACLE_HOME will contain the following entries:

  # listener.ora Network Configuration File: /u01/app/oracle/product/11.1.0/db_1/network/admin/listener.ora

  # Generated by Oracle configuration tools.

  LISTENER_RAC2 =

  (DESCRIPTION_LIST =

  (DESCRIPTION =

  (ADDRESS_LIST =

  (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521)(IP = FIRST))

  )

  (ADDRESS_LIST =

  (ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.10.202)(PORT = 1521)(IP = FIRST))

  )

  (ADDRESS_LIST =

  (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))

  )

  )

  )

  LISTENER_RAC1 =

  (DESCRIPTION_LIST =

  (DESCRIPTION =

  (ADDRESS_LIST =

  (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521)(IP = FIRST))

  )

  (ADDRESS_LIST =

  (ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.10.201)(PORT = 1521)(IP = FIRST))

  )

  (ADDRESS_LIST =

  (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))

  )

  )

  )

  The shared "$ORACLE_HOME/network/admin/tnsnames.ora" file will contain the following entries:

  # tnsnames.ora Network Configuration File: /u01/app/oracle/product/11.1.0/db_1/network/admin/tnsnames.ora

  # Generated by Oracle configuration tools.

  RAC =

  (DESCRIPTION =

  (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))

  (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))

  (LOAD_BALANCE = yes)

  (CONNECT_DATA =

  (SERVER = DEDICATED)

  (SERVICE_NAME = RAC.WORLD)

  )

  )

  LISTENERS_RAC =

  (ADDRESS_LIST =

  (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))

  (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))

  )

  RAC2 =

  (DESCRIPTION =

  (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))

  (CONNECT_DATA =

  (SERVER = DEDICATED)

  (SERVICE_NAME = RAC.WORLD)

  (INSTANCE_NAME = RAC2)

  )

  )

  RAC1 =

  (DESCRIPTION =

  (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))

  (CONNECT_DATA =

  (SERVER = DEDICATED)

  (SERVICE_NAME = RAC.WORLD)

  (INSTANCE_NAME = RAC1)

  )

  )

  This configuration allows you to connect directly to a specific instance, or to use load balancing to connect to the main service.

  $ sqlplus / as sysdba

  SQL*Plus: Release 11.1.0.6.0 - Production on Tue Aug 19 16:54:45 2008

  Copyright (c) 1982, 2007, Oracle. All rights reserved.

  Connected to:

  Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production

  With the Partitioning, Real Application Clusters, OLAP, Data Mining

  and Real Application Testing options

  SQL> CONN AS SYSDBA

  Connected.

  SQL> SELECT instance_name, host_name FROM v$instance;

  INSTANCE_NAME HOST_NAME

  ---------------- ----------------------------------------------------

  RAC1 rac1.lynx.co.uk

  SQL> CONN AS SYSDBA

  Connected.

  SQL> SELECT instance_name, host_name FROM v$instance;

  INSTANCE_NAME HOST_NAME

  ---------------- -----------------------------------------------------

  RAC2 rac2.lynx.co.uk

  SQL> CONN AS SYSDBA

  Connected.

  SQL> SELECT instance_name, host_name FROM v$instance;

  INSTANCE_NAME HOST_NAME

  ---------------- --------------------------------------------

  RAC1 rac1.lynx.co.uk

  SQL>

  Checking the Status of the RAC

  There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.

  $ srvctl config database -d RAC

  rac1 RAC1 /u01/app/oracle/product/11.1.0/db_1

  rac2 RAC2 /u01/app/oracle/product/11.1.0/db_1

  $

  $ srvctl status database -d RAC

  Instance RAC1 is running on node rac1

  Instance RAC2 is running on node rac2

  $

  The V$ACTIVE_INSTANCES view can also display the current status of the instances.

  $ sqlplus / as sysdba

  SQL*Plus: Release 11.1.0.6.0 - Production on Tue Aug 19 16:55:31 2008

  Copyright (c) 1982, 2007, Oracle. All rights reserved.

  Connected to:

  Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production

  With the Partitioning, Real Application Clusters, OLAP, Data Mining

  and Real Application Testing options

  SQL> SELECT * FROM v$active_instances;

  INST_NUMBER INST_NAME

  ----------- ------------------------------------------------------------

  1 rac1.lynx.co.uk:RAC1

  2 rac2.lynx.co.uk:RAC2

  SQL>

  Finally, the GV$ views allow you to display information for the whole RAC.

  SQL> SELECT inst_id, username, sid, serial# FROM gv$session WHERE username IS NOT NULL;

  INST_ID USERNAME SID SERIAL#

  ---------- ------------------------------ ---------- ----------

  2 SYS 116 841

  2 SYSMAN 118 78

  2 SYS 119 1992

  2 SYSMAN 121 1

  2 SYSMAN 122 29

  2 SYS 123 2

  2 SYSMAN 124 50

  2 DBSNMP 129 1

  2 DBSNMP 130 6

  2 DBSNMP 134 1

  2 SYSMAN 145 53

  INST_ID USERNAME SID SERIAL#

  ---------- ------------------------------ ---------- ----------

  2 SYS 170 14

  1 SYSMAN 117 144

  1 SYSMAN 118 186

  1 SYSMAN 119 31

  1 SYS 121 3

  1 SYSMAN 122 162

  1 SYSMAN 123 99

  1 DBSNMP 124 3

  1 SYS 125 2

  1 SYS 126 19

  1 SYS 127 291

  INST_ID USERNAME SID SERIAL#

  ---------- ------------------------------ ---------- ----------

  1 DBSNMP 131 61

  1 SYS 170 17

  24 rows selected.

  SQL>

  If you have configured Enterprise Manager, it can be used to view the configuration and current status of the database using a URL such as "https://rac1.localdomain:1158/em".
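
  If the DB Console does not respond at that URL, its status can be checked and the console restarted with emctl. This is a quick sketch; run it as the oracle user on the node hosting the console:

  emctl status dbconsole
  emctl start dbconsole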


  To improve NFS performance, Oracle recommends using the Direct NFS Client shipped with Oracle 11g. The Direct NFS Client looks for NFS mount details in the following locations, in this order:

  (1)$ORACLE_HOME/dbs/oranfstab

  (2)/etc/oranfstab

  (3)/etc/mtab

  Since the NFS mount points are already defined in "/etc/fstab", and therefore appear in "/etc/mtab", no additional configuration is necessary.
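
  If you prefer to describe the mounts to the Direct NFS Client explicitly rather than relying on "/etc/mtab", an optional "$ORACLE_HOME/dbs/oranfstab" file can be created instead. The sketch below covers only the data file mount and assumes the server name, address and paths used in this article:

  server: nas1
  path: 10.1.10.61
  export: /shared_data mount: /u01/oradata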

  To enable the Direct NFS Client, switch the libodm11.so library to the libnfsodm11.so library, as shown below:

  srvctl stop database -d RAC

  cd $ORACLE_HOME/lib

  mv libodm11.so libodm11.so_stub

  ln -s libnfsodm11.so libodm11.so

  srvctl start database -d RAC

  Once this is configured, Direct NFS Client usage can be seen in the following views:

   v$dnfs_servers

   v$dnfs_files

   v$dnfs_channels

   v$dnfs_stats

  For example:

  SQL> SELECT svrname, dirname FROM v$dnfs_servers;

  SVRNAME DIRNAME

  ------------- -----------------

  nas1 /shared_data

  SQL>

  With the Direct NFS Client in place, direct I/O and asynchronous I/O are enabled by default.

 

Article reproduced from 網管之家.

From the ITPUB blog, original link: http://blog.itpub.net/9606200/viewspace-745648/. Please cite the source when republishing.
