Installing Oracle on Solaris

Posted by xiexingzhi on 2010-08-20
Solaris 9 + Disk Suite + Sun Cluster 3.1 + Oracle 10g two-node installation notes
I. Hardware
Sun Fire V890 server                         x2
2 Gb PCI single-port Fibre Channel HBA       x2
Quad-port 10/100/1000 auto-sensing Ethernet card  x2
EMC CX500 storage array
II. Software
Solaris 9 (9/05)
Sun Cluster 3.1 (9/04)
Oracle 10g (10.2.0.1.0)
Sun patch cluster (9_Recommended 17/04/06)
III. Operating System Installation
    ① System disk (146 GB) partitioning:
0    /                 123006 MB
1    swap               16009 MB
2                      (entire disk)
3                      (unused slice)
4                      (unused slice)
5                      (unused slice)
6    /globaldevices       516 MB
7                         109 MB
② Host names, IP addresses, netmask
    sys-1  192.168.22.14   255.255.255.0
    Note: IPMP test addresses 192.168.22.16, 192.168.22.18
          Oracle public (logical) address 192.168.22.17
    sys-2  192.168.22.15   255.255.255.0
    Note: IPMP test addresses 192.168.22.19, 192.168.22.20
          Oracle public (logical) address 192.168.22.17
    Heartbeat (interconnect) interfaces: ce0, ce1
③ Patch installation
    9_Recommended 17/04/06
④ Kernel parameter tuning
  On both sys-1 and sys-2, add the following to /etc/system:
    set shmsys:shminfo_shmmax=4294967295 
set semsys:seminfo_semmap=1024 
set semsys:seminfo_semmni=2048 
set semsys:seminfo_semmns=2048 
set semsys:seminfo_semmsl=2048 
set semsys:seminfo_semmnu=2048 
set semsys:seminfo_semume=200 
set shmsys:shminfo_shmmin=200 
set shmsys:shminfo_shmmni=200 
set shmsys:shminfo_shmseg=200 
set semsys:seminfo_semvmx=32767 
set noexec_user_stack=1 
set noexec_user_stack_log=1 
set ce:ce_reclaim_pending=1 
set ce:ce_taskq_disable=1 
Note: set ce:ce_reclaim_pending=1 works around a bug affecting ce adapters used in NAFO/IPMP groups. 
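Once the nodes have been rebooted, the tunables above can be confirmed against the running kernel. A quick sanity check, assuming only the stock Solaris 9 tools:

```shell
# Print the live System V IPC limits and compare them with /etc/system.
sysdef | grep -i sem      # semaphore limits (semmni, semmns, semmsl, ...)
sysdef | grep -i shm      # shared memory limits (shmmax, shmmin, ...)

# A single tunable can also be read from the kernel with mdb(1),
# e.g. the shmsys module's shminfo_shmmax value:
echo "shminfo_shmmax/E" | mdb -k
```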
⑤ Installing the driver for the SG-XPCI1FC-QF2 HBA
    Download the HBA driver bundle SAN_4.4.9_install_it.tar.Z from download.sun.com, then run the following on both sys-1 and sys-2:
      # compress -dc SAN_4.4.9_install_it.tar.Z | tar xvf -
      # ./install_it
Logfile /var/tmp/install_it_Sun_StorEdge_SAN.log : created on Thu Apr 20 11:12:53 CST 2006  


This routine installs the packages and patches that 
make up Sun StorEdge SAN. 

Would you like to continue with the installation?  
 [y,n,?] y 


Verifying system... 


Checking for incompatiable SAN patches 


Begin installation of SAN software 

Installing StorEdge SAN packages - 

         Package SUNWsan        : Installed Successfully. 
         Package SUNWcfpl       : Installed Successfully. 
         Package SUNWcfplx      : Installed Successfully. 
         Package SUNWcfclr      : Installed Successfully. 
         Package SUNWcfcl       : Installed Successfully. 
         Package SUNWcfclx      : Installed Successfully. 
         Package SUNWfchbr      : Installed Successfully. 
         Package SUNWfchba      : Installed Successfully. 
         Package SUNWfchbx      : Installed Successfully. 
         Package SUNWfcsm       : Installed Successfully. 
         Package SUNWfcsmx      : Installed Successfully. 
         Package SUNWmdiu       : Installed Successfully. 
         Package SUNWjfca       : Installed Successfully. 
         Package SUNWjfcax      : Installed Successfully. 
         Package SUNWjfcau      : Installed Successfully. 
         Package SUNWjfcaux     : Installed Successfully. 
         Package SUNWemlxs      : Installed Successfully. 
         Package SUNWemlxsx     : Installed Successfully. 
         Package SUNWemlxu      : Installed Successfully. 
         Package SUNWemlxux     : Installed Successfully. 

StorEdge SAN packages installation completed. 

Begin patch installation 
        Patch 111847-08         : Installed Successfully. 
        Patch 113046-01         : Installed Previously. 
        Patch 113049-01         : Installed Previously. 
        Patch 113039-13         : Installed Successfully. 
        Patch 113040-18         : Installed Successfully. 
        Patch 113041-11         : Installed Successfully. 
        Patch 113042-14         : Installed Successfully. 
        Patch 113043-12         : Installed Successfully. 
        Patch 113044-05         : Installed Successfully. 
        Patch 114476-07         : Installed Successfully. 
        Patch 114477-03         : Installed Successfully. 
        Patch 114478-07         : Installed Successfully. 
        Patch 114878-10         : Installed Successfully. 
        Patch 119914-08         : Installed Successfully. 


Installation of Sun StorEdge SAN completed Successfully 

------------------------------------------- 
------------------------------------------- 
        Please reboot your system. 
------------------------------------------- 
------------------------------------------- 
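After the reboot, the new HBA stack can be sanity-checked before moving on. A short sketch using standard Solaris storage commands (controller and device names will differ per system):

```shell
# Fibre Channel controllers should now appear (type fc-fabric when zoned):
cfgadm -al

# Probe for logical paths to the attached EMC LUNs:
luxadm probe

# Rebuild /dev device links if new LUNs were presented after boot:
devfsadm
```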
⑥ Set local-mac-address? to true at the OBP prompt
  On both sys-1 and sys-2:
  ok setenv local-mac-address? true
  ok reset-all
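The same OBP variable can also be checked (or set) from a running Solaris system with eeprom(1M), without dropping to the ok prompt:

```shell
# Display the current setting; expected to print local-mac-address?=true
eeprom local-mac-address?

# Set it from the shell instead of the OBP (takes effect on next reset):
eeprom 'local-mac-address?=true'
```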
⑦ Edit /etc/hosts
  On both sys-1 and sys-2, edit the file to read:
  127.0.0.1       localhost 
192.168.22.14   sys-1    loghost  
192.168.22.15   sys-2 
192.168.22.17   oracle
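To sanity-check name resolution against the file above (a quick check, assuming "files" is consulted for hosts in /etc/nsswitch.conf):

```shell
# Each name should resolve to the address configured in /etc/hosts:
getent hosts sys-1
getent hosts sys-2
getent hosts oracle
```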




IV. Installing SSH
Note: perform the following on both sys-1 and sys-2.
① Download the software:
gcc-3.3.2-sol9-sparc-local
make-3.80-sol9-sparc-local
ssh-3.2.5.tar
② Install the GCC and MAKE packages:
# pkgadd -d gcc*
# pkgadd -d make*
③ Edit /.profile:
# cp /etc/skel/local.profile /.profile
# vi /.profile
Add:
PATH=/usr/bin:/sbin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:/usr/sbin:/usr/openwin/bin:/usr/ucb:/etc:.
export PATH
# . /.profile   (source it so the changes take effect)
④ Compile and install SSH:
# tar xvf ssh-3.2.5.tar
# cd ssh*
# ./configure
# make
# make install
⑤ Generate the host key:
# ssh-keygen2 -b 1024   (enter the root user name and passphrase when prompted)
⑥ Start sshd2:
# /usr/local/sbin/sshd2
⑦ Make sshd2 start at boot:
# vi /etc/rc2.d/S99local   and add the line: /usr/local/sbin/sshd2
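A quick check that the newly built daemon is actually up and listening:

```shell
# The sshd2 process should be running:
ps -ef | grep '[s]shd2'

# ...and bound to the SSH port (22) in LISTEN state:
netstat -an | grep '\.22 ' | grep LISTEN
```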
    
V. Configuring IPMP
Note: perform the following on both sys-1 and sys-2 (the addresses below are for sys-1).
Interfaces eri0 and ce3 are used for the IPMP group.
① Edit /etc/hostname.eri0:
# vi /etc/hostname.eri0 and add:
192.168.22.14 netmask + broadcast + group xxml up addif 192.168.22.16 deprecated -failover netmask + broadcast + up 
② Edit /etc/hostname.ce3:
# vi /etc/hostname.ce3 and add:
    192.168.22.18 netmask + broadcast + group xxml up deprecated -failover standby up 
③ Add the default gateway:
   # vi /etc/defaultrouter
   Add: 192.168.22.10
   # ping 192.168.22.10
   192.168.22.10 is alive
Note: a default gateway must be configured and reachable from the host, or IPMP will not function.
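Once both interfaces are plumbed, the failover behaviour can be exercised with if_mpadm(1M), the Solaris 9 IPMP administration tool. A sketch, assuming eri0 is the active adapter in group xxml:

```shell
# Detach the active adapter; its data addresses should migrate to ce3:
if_mpadm -d eri0

# Confirm 192.168.22.14 is now plumbed on ce3:
ifconfig -a

# Reattach eri0 when done; the addresses fail back:
if_mpadm -r eri0
```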
VI. Installing the Sun Cluster 3.1 Software
Note: perform the following on both sys-1 and sys-2.
① Install the Sun Web Console:
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_web_console/2.1
# ./setup
...

Installation complete. 

Server not started! No management applications registered. 
② Install the Sun Cluster 3.1 software:
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster
# ./installer
  
   (Installer GUI screenshots omitted.) Click Next through the installer screens, then click Install Now; click Exit when the installation completes.

VII. Establishing the Cluster Nodes
Create the cluster using sys-1 as the first node:
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools
# ./scinstall
  *** Main Menu *** 

    Please select from one of the following (*) options: 

      * 1) Install a cluster or cluster node 
        2) Configure a cluster to be JumpStarted from this install server 
        3) Add support for new data services to this cluster node 
      * 4) Print release information for this cluster node 

      * ?) Help with menu options 
      * q) Quit 

    Option:  1 

  *** Install Menu *** 

    Please select from any one of the following options: 

        1) Install all nodes of a new cluster 
        2) Install just this machine as the first node of a new cluster 
        3) Add this machine as a node in an existing cluster 

        ?) Help with menu options 
        q) Return to the Main Menu 

    Option:  2 

  *** Installing just the First Node of a New Cluster *** 


    This option is used to establish a new cluster using this machine as  
    the first node in that cluster. 

    Once the cluster framework software is installed, you will be asked  
    for the name of the cluster. Then, you will have the opportunity to  
    run sccheck(1M) to test this machine for basic Sun Cluster  
    pre-configuration requirements. 

    After sccheck(1M) passes, you will be asked for the names of the  
    other nodes which will initially be joining that cluster. In  
    addition, you will be asked to provide certain cluster transport  
    configuration information. 

    Press Control-d at any time to return to the Main Menu. 


    Do you want to continue (yes/no) [yes]?   

  >>> Software Patch Installation <<<

    If there are any Solaris or Sun Cluster patches that need to be added  
    as part of this Sun Cluster installation, scinstall can add them for  
    you. All patches that need to be added must first be downloaded into  
    a common patch directory. Patches can be downloaded into the patch  
    directory either as individual patches or as patches grouped together  
    into one or more tar, jar, or zip files. 

    If a patch list file is provided in the patch directory, only those  
    patches listed in the patch list file are installed. Otherwise, all  
    patches found in the directory will be installed. Refer to the  
    patchadd(1M) man page for more information regarding patch list files. 

    Do you want scinstall to install patches for you (yes/no) [yes]?   

    What is the name of the patch directory?  /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages 

    If a patch list file is provided in the patch directory, only those  
    patches listed in the patch list file are installed. Otherwise, all  
    patches found in the directory will be installed. Refer to the  
    patchadd(1M) man page for more information regarding patch list files. 

    Do you want scinstall to use a patch list file (yes/no) [no]?   

  >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of  
    any characters other than whitespace. Each cluster name should be  
    unique within the namespace of your enterprise. 

    What is the name of the cluster you want to establish?  test 

  >>> Check <<<

    This step allows you to run sccheck(1M) to verify that certain basic  
    hardware and software pre-configuration requirements have been met.  
    If sccheck(1M) detects potential problems with configuring this  
    machine as a cluster node, a report of failed checks is prepared and  
    available for display on the screen. Data gathering and report  
    generation can take several minutes, depending on system  
    configuration. 

    Do you want to run sccheck (yes/no) [yes]?  no


  >>> Cluster Nodes <<<

    This Sun Cluster release supports a total of up to 16 nodes. 

    Please list the names of the other nodes planned for the initial  
    cluster configuration. List one node name per line. When finished,  
    type Control-D: 

    Node name:  sys-1 
    Node name:  sys-2 
    Node name (Control-D to finish):  ^D__ 


    This is the complete list of nodes: 

        sys-1 
        sys-2 

    Is it correct (yes/no) [yes]?   

  >>> Authenticating Requests to Add Nodes <<<

    Once the first node establishes itself as a single node cluster,  
    other nodes attempting to add themselves to the cluster configuration  
    must be found on the list of nodes you just provided. You can modify  
    this list using scconf(1M) or other tools once the cluster has been  
    established. 

    By default, nodes are not securely authenticated as they attempt to  
    add themselves to the cluster configuration. This is generally  
    considered adequate, since nodes which are not physically connected  
    to the private cluster interconnect will never be able to actually  
    join the cluster. However, DES authentication is available. If DES  
    authentication is selected, you must configure all necessary  
    encryption keys before any node will be allowed to join the cluster  
    (see keyserv(1M), publickey(4)). 

    Do you need to use DES authentication (yes/no) [no]?   

  >>> Network Address for the Cluster Transport <<<

    The private cluster transport uses a default network address of  
    172.16.0.0. But, if this network address is already in use elsewhere  
    within your enterprise, you may need to select another address from  
    the range of recommended private addresses (see RFC 1597 for details). 

    If you do select another network address, bear in mind that the Sun  
    Cluster software requires that the rightmost two octets always be  
    zero. 

    The default netmask is 255.255.0.0. You can select another netmask,  
    as long as it minimally masks all bits given in the network address. 

    Is it okay to accept the default network address (yes/no) [yes]?   

    Is it okay to accept the default netmask (yes/no) [yes]?   

  >>> Point-to-Point Cables <<<

    The two nodes of a two-node cluster may use a directly-connected  
    interconnect. That is, no cluster transport junctions are configured.  
    However, when there are greater than two nodes, this interactive form  
    of scinstall assumes that there will be exactly two cluster transport  
    junctions. 

    Does this two-node cluster use transport junctions (yes/no) [yes]?   

  >>> Cluster Transport Junctions <<<

    All cluster transport adapters in this cluster must be cabled to a  
    transport junction, or "switch". And, each adapter on a given node  
    must be cabled to a different junction. Interactive scinstall  
    requires that you identify two switches for use in the cluster and  
    the two transport adapters on each node to which they are cabled. 

    What is the name of the first junction in the cluster [switch1]?   

    What is the name of the second junction in the cluster [switch2]?   

  >>> Cluster Transport Adapters and Cables <<<

    You must configure at least two cluster transport adapters for each  
    node in the cluster. These are the adapters which attach to the  
    private cluster interconnect. 

    Select the first cluster transport adapter: 

        1) ce0 
        2) ce1 
        3) ce2 
        4) ce3 
        5) ge0 
        6) Other 

    Option:  1 

    Adapter "ce0" is an Ethernet adapter. 

    Searching for any unexpected network traffic on "ce0" ... done 
    Verification completed. No traffic was detected over a 10 second  
    sample period. 

    The "dlpi" transport type will be set for this cluster. 

    Name of the junction to which "ce0" is connected [switch1]?   

    Each adapter is cabled to a particular port on a transport junction.  
    And, each port is assigned a name. You can explicitly assign a name  
    to each port. Or, for Ethernet switches, you can choose to allow  
    scinstall to assign a default name for you. The default port name  
    assignment sets the name to the node number of the node hosting the  
    transport adapter at the other end of the cable. 

    For more information regarding port naming requirements, refer to the  
    scconf_transp_jct family of man pages (e.g.,  
    scconf_transp_jct_dolphinswitch(1M)). 

    Use the default port name for the "ce0" connection (yes/no) [yes]?   

    Select the second cluster transport adapter: 

        1) ce0 
        2) ce1 
        3) ce2 
        4) ce3 
        5) ge0 
        6) Other 

    Option:  2 

    Adapter "ce1" is an Ethernet adapter. 

    Searching for any unexpected network traffic on "ce1" ... done 
    Verification completed. No traffic was detected over a 10 second  
    sample period. 

    Name of the junction to which "ce1" is connected [switch2]?   

    Use the default port name for the "ce1" connection (yes/no) [yes]?   

  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on  
    /global/.devices/node@<nodeID> before it can successfully participate  
    as a cluster member. Since the "nodeID" is not assigned until  
    scinstall is run, scinstall will set this up for you. 

    You must supply the name of either an already-mounted file system or  
    raw disk partition which scinstall can use to create the global  
    devices file system. This file system or partition should be at least  
    512 MB in size. 

    If an already-mounted file system is used, the file system must be  
    empty. If a raw disk partition is used, a new file system will be  
    created for you. 

    The default is to use /globaldevices. 

    Is it okay to use this default (yes/no) [yes]?   

  >>> Automatic Reboot <<<

    Once scinstall has successfully installed and initialized the Sun  
    Cluster software for this machine, it will be necessary to reboot.  
    After the reboot, this machine will be established as the first node  
    in the new cluster. 

    Do you want scinstall to reboot for you (yes/no) [yes]?   

  >>> Confirmation <<<

    Your responses indicate the following options to scinstall: 

      scinstall -ik \  
           -C test \  
           -F \  
           -M patchdir=/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages \  
           -T node=sys-1,node=sys-2,authtype=sys \  
           -A trtype=dlpi,name=ce0 -A trtype=dlpi,name=ce1 \  
           -B type=switch,name=switch1 -B type=switch,name=switch2 \  
           -m endpoint=:ce0,endpoint=switch1 \  
           -m endpoint=:ce1,endpoint=switch2 \  
            

    Are these the options you want to use (yes/no) [yes]?   

    Do you want to continue with the install (yes/no) [yes]?   


Checking device to use for global devices file system ... done 
Installing patches ... failed 

scinstall:  Problems detected during extraction or installation of patches. 


Initializing cluster name to "xxml" ... done 
Initializing authentication options ... done 
Initializing configuration for adapter "ce0" ... done 
Initializing configuration for adapter "ce1" ... done 
Initializing configuration for junction "switch1" ... done 
Initializing configuration for junction "switch2" ... done 
Initializing configuration for cable ... done 
Initializing configuration for cable ... done 


Setting the node ID for "sys-1" ... done (id=1) 

Setting the major number for the "did" driver ... done


"did" driver major number set to 300 

Checking for global devices global file system ... done 
Updating vfstab ... done 

Verifying that NTP is configured ... done 
Installing a default NTP configuration ... done 
Please complete the NTP configuration after scinstall has finished. 

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done 
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done 

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done 
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done 

Verifying that power management is NOT configured ... done 
Unconfiguring power management ... done 
/etc/power.conf has been renamed to /etc/power.conf.042006154016 
Power management is incompatible with the HA goals of the cluster. 
Please do not attempt to re-configure power management. 

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done 

Ensure network routing is disabled ... done 
Network routing has been disabled on this node by creating /etc/notrouter. 
Having a cluster node act as a router is not supported by Sun Cluster. 
Please do not re-enable network routing. 

Log file - /var/cluster/logs/install/scinstall.log.2140 
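Before adding the second node, it is worth confirming that sys-1 came back up as a one-node cluster. A short sketch using the standard Sun Cluster 3.1 status commands:

```shell
# Cluster node status; sys-1 should be reported Online:
scstat -n

# Dump the full cluster configuration for review:
scconf -p | more

# List the DID (cluster device ID) mappings built at first boot:
scdidadm -L
```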
Add sys-2 to the cluster as the second node:
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools
# ./scinstall
*** Main Menu *** 

    Please select from one of the following (*) options: 

      * 1) Install a cluster or cluster node 
        2) Configure a cluster to be JumpStarted from this install server 
        3) Add support for new data services to this cluster node 
      * 4) Print release information for this cluster node 

      * ?) Help with menu options 
      * q) Quit 

Option:  1 
*** Install Menu *** 

    Please select from any one of the following options: 

        1) Install all nodes of a new cluster 
        2) Install just this machine as the first node of a new cluster 
        3) Add this machine as a node in an existing cluster 

        ?) Help with menu options 
        q) Return to the Main Menu 

    Option:  3 
*** Adding a Node to an Existing Cluster *** 


    This option is used to add this machine as a node in an already  
    established cluster. If this is an initial cluster install, there may  
    only be a single node which has established itself in the new cluster. 

    Once the cluster framework software is installed, you will be asked  
    to provide both the name of the cluster and the name of one of the  
    nodes already in the cluster. Then, sccheck(1M) is run to test this  
    machine for basic Sun Cluster pre-configuration requirements. 

    After sccheck(1M) passes, you may be asked to provide certain cluster  
    transport configuration information. 

    Press Control-d at any time to return to the Main Menu. 


    Do you want to continue (yes/no) [yes]?   
  >>> Software Patch Installation <<<

    If there are any Solaris or Sun Cluster patches that need to be added  
    as part of this Sun Cluster installation, scinstall can add them for  
    you. All patches that need to be added must first be downloaded into  
    a common patch directory. Patches can be downloaded into the patch  
    directory either as individual patches or as patches grouped together  
    into one or more tar, jar, or zip files. 

    If a patch list file is provided in the patch directory, only those  
    patches listed in the patch list file are installed. Otherwise, all  
    patches found in the directory will be installed. Refer to the  
    patchadd(1M) man page for more information regarding patch list files. 

    Do you want scinstall to install patches for you (yes/no) [yes]?   

    What is the name of the patch directory [/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages]?   

    If a patch list file is provided in the patch directory, only those  
    patches listed in the patch list file are installed. Otherwise, all  
    patches found in the directory will be installed. Refer to the  
    patchadd(1M) man page for more information regarding patch list files. 

    Do you want scinstall to use a patch list file (yes/no) [no]? 
  >>> Sponsoring Node <<<

    For any machine to join a cluster, it must identify a node in that  
    cluster willing to "sponsor" its membership in the cluster. When  
    configuring a new cluster, this "sponsor" node is typically the first  
    node used to build the new cluster. However, if the cluster is  
    already established, the "sponsoring" node can be any node in that  
    cluster. 

    Already established clusters can keep a list of hosts which are able  
    to configure themselves as new cluster members. This machine should  
    be in the join list of any cluster which it tries to join. If the  
    list does not include this machine, you may need to add it using  
    scconf(1M) or other tools. 

    And, if the target cluster uses DES to authenticate new machines  
    attempting to configure themselves as new cluster members, the  
    necessary encryption keys must be configured before any attempt to  
    join. 

    What is the name of the sponsoring node [sys-1]? 
  >>> Cluster Name <<<

    Each cluster has a name assigned to it. When adding a node to the  
    cluster, you must identify the name of the cluster you are attempting  
    to join. A sanity check is performed to verify that the "sponsoring"  
    node is a member of that cluster. 

    What is the name of the cluster you want to join [test]?   

    Attempting to contact "sys-1" ... done 

    Cluster name "xxml" is correct. 
     
Press Enter to continue: 
  >>> Check <<<

    This step allows you to run sccheck(1M) to verify that certain basic  
    hardware and software pre-configuration requirements have been met.  
    If sccheck(1M) detects potential problems with configuring this  
    machine as a cluster node, a report of failed checks is prepared and  
    available for display on the screen. Data gathering and report  
    generation can take several minutes, depending on system  
    configuration. 

    Do you want to run sccheck (yes/no) [yes]?  No 
  >>> Autodiscovery of Cluster Transport <<<

    If you are using Ethernet adapters as your cluster transport  
    adapters, autodiscovery is the best method for configuring the  
    cluster transport. 

    However, it appears that scinstall has already been run at least once  
    before on this machine. You can either attempt to autodiscover or  
    continue with the answers that you gave the last time you ran  
    scinstall. 

    Do you want to use autodiscovery anyway (yes/no) [no]?  yes 
    Probing ..................... 

    The following connection was discovered: 

        sys-1:ce1  switch2  sys-2:ce1 

    Probes were sent out from all transport adapters configured for  
    cluster node "sys-1". But, they were only received on one of the  
    network adapters on this machine ("sys-2"). This may be due to  
    any number of reasons, including improper cabling, an improper  
    configuration for "sys-1", or a switch which was confused by the  
    probes. 

    You can either attempt to correct the problem and try the probes  
    again or try to manually configure the transport. Correcting the  
    problem may involve re-cabling, changing the configuration for  
    "sys-1", or fixing hardware. 

    Do you want to try again (yes/no) [yes]?  no 

  >>> Point-to-Point Cables <<<

    The two nodes of a two-node cluster may use a directly-connected  
    interconnect. That is, no cluster transport junctions are configured.  
    However, when there are greater than two nodes, this interactive form  
    of scinstall assumes that there will be exactly two cluster transport  
    junctions. 

    Is this a two-node cluster (yes/no) [yes]?   

    Does this two-node cluster use transport junctions (yes/no) [yes]?   

  >>> Cluster Transport Junctions <<<

    All cluster transport adapters in this cluster must be cabled to a  
    transport junction, or "switch". And, each adapter on a given node  
    must be cabled to a different junction. Interactive scinstall  
    requires that you identify two switches for use in the cluster and  
    the two transport adapters on each node to which they are cabled. 

    What is the name of the first junction in the cluster [switch1]?   

    What is the name of the second junction in the cluster [switch2]?   

  >>> Cluster Transport Adapters and Cables <<<

    You must configure at least two cluster transport adapters for each  
    node in the cluster. These are the adapters which attach to the  
    private cluster interconnect. 

    What is the name of the first cluster transport adapter (help) [ce0]?   

    Adapter "ce0" is an Ethernet adapter. 

    The "dlpi" transport type will be set for this cluster. 

    Name of the junction to which "ce0" is connected [switch1]?   

    Each adapter is cabled to a particular port on a transport junction.  
    And, each port is assigned a name. You can explicitly assign a name  
    to each port. Or, for Ethernet switches, you can choose to allow  
    scinstall to assign a default name for you. The default port name  
    assignment sets the name to the node number of the node hosting the  
    transport adapter at the other end of the cable. 

    For more information regarding port naming requirements, refer to the  
    scconf_transp_jct family of man pages (e.g.,  
    scconf_transp_jct_dolphinswitch(1M)). 

    Use the default port name for the "ce0" connection (yes/no) [yes]?   

    What is the name of the second cluster transport adapter (help) [ce1]?   

    Adapter "ce1" is an Ethernet adapter. 

    Name of the junction to which "ce1" is connected [switch2]?   

    Use the default port name for the "ce1" connection (yes/no) [yes]?   

  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on  
    /global/.devices/node@<nodeID> before it can successfully participate  
    as a cluster member. Since the "nodeID" is not assigned until  
    scinstall is run, scinstall will set this up for you. 

    You must supply the name of either an already-mounted file system or  
    raw disk partition which scinstall can use to create the global  
    devices file system. This file system or partition should be at least  
    512 MB in size. 

    If an already-mounted file system is used, the file system must be  
    empty. If a raw disk partition is used, a new file system will be  
    created for you. 

    The default is to use /globaldevices. 

    Is it okay to use this default (yes/no) [yes]?   

  >>> Automatic Reboot <<<

    Once scinstall has successfully installed and initialized the Sun  
    Cluster software for this machine, it will be necessary to reboot.  
    The reboot will cause this machine to join the cluster for the first  
    time. 

    Do you want scinstall to reboot for you (yes/no) [yes]?   

  >>> Confirmation <<<

    Your responses indicate the following options to scinstall: 

      scinstall -ik \  
           -C xxml \  
           -N sjz-xxml-1 \  
           -M patchdir=/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages \  
           -A trtype=dlpi,name=ce0 -A trtype=dlpi,name=ce1 \  
           -m endpoint=:ce0,endpoint=switch1 \  
           -m endpoint=:ce1,endpoint=switch2 \  
            

    Are these the options you want to use (yes/no) [yes]?   

    Do you want to continue with the install (yes/no) [yes]?   


Checking device to use for global devices file system ... done 
Installing patches ... failed 

scinstall:  Problems detected during extraction or installation of patches. 


Adding node "sys-2" to the cluster configuration ... done 
Adding adapter "ce0" to the cluster configuration ... done 
Adding adapter "ce1" to the cluster configuration ... done 
Adding cable to the cluster configuration ... done 
Adding cable to the cluster configuration ... done 

Copying the config from "sys-1" ... done 
Copying the cacao keys from "sys-1" ... done 


Setting the node ID for "sys-2" ... done (id=2) 

Setting the major number for the "did" driver ...  
Obtaining the major number for the "did" driver from "sys-1" ... done 
"did" driver major number set to 300 

Checking for global devices global file system ... done 
Updating vfstab ... done 

Verifying that NTP is configured ... done 
Installing a default NTP configuration ... done 
Please complete the NTP configuration after scinstall has finished. 

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done 
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done 

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done 
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done 

Verifying that power management is NOT configured ... done 
Unconfiguring power management ... done 
/etc/power.conf has been renamed to /etc/power.conf.042206104133 
Power management is incompatible with the HA goals of the cluster. 
Please do not attempt to re-configure power management. 

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ...
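After sys-2 reboots and joins, cluster membership and the private interconnect can be verified from either node, for example:

```shell
# Both nodes should now be reported Online:
scstat -n

# Status of the cluster transport (interconnect) paths:
scstat -W
```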


VIII. Creating the Shared Disk Set
Note: the following is performed on sys-1 only.
① List the DID devices:
   # scdidadm -L
1        sys-1:/dev/rdsk/c0t0d0    /dev/did/rdsk/d1      
2        sys-1:/dev/rdsk/c1t2d0    /dev/did/rdsk/d2      
3        sys-1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d3      
4        sys-1:/dev/rdsk/c1t5d0    /dev/did/rdsk/d4      
5        sys-1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d5      
6        sys-1:/dev/rdsk/c1t4d0    /dev/did/rdsk/d6      
7        sys-1:/dev/rdsk/c1t3d0    /dev/did/rdsk/d7      
8        sys-1:/dev/rdsk/c3t5006016930226EF3d0 /dev/did/rdsk/d8      
8        sys-1:/dev/rdsk/c3t5006016030226EF3d0 /dev/did/rdsk/d8      
8        sys-1:/dev/rdsk/c2t5006016830226EF3d0 /dev/did/rdsk/d8      
8        sys-1:/dev/rdsk/c2t5006016130226EF3d0 /dev/did/rdsk/d8      
9        sys-1:/dev/rdsk/c2t5006016830226EF3d53 /dev/did/rdsk/d9      
9        sys-1:/dev/rdsk/c2t5006016130226EF3d53 /dev/did/rdsk/d9      
9        sys-1:/dev/rdsk/c3t5006016930226EF3d53 /dev/did/rdsk/d9      
9        sys-1:/dev/rdsk/c3t5006016030226EF3d53 /dev/did/rdsk/d9      
9        sys-1:/dev/rdsk/c4t60060160478D1900F658B0B052D0DA11d0 /dev/did/rdsk/d9      
9        sys-2:/dev/rdsk/c2t5006016830226EF3d53 /dev/did/rdsk/d9      
9        sys-2:/dev/rdsk/c2t5006016130226EF3d53 /dev/did/rdsk/d9      
9        sys-2:/dev/rdsk/c3t5006016030226EF3d53 /dev/did/rdsk/d9      
9        sys-2:/dev/rdsk/c3t5006016930226EF3d53 /dev/did/rdsk/d9      
10       sys-1:/dev/rdsk/c2t5006016830226EF3d52 /dev/did/rdsk/d10     
10       sys-1:/dev/rdsk/c2t5006016130226EF3d52 /dev/did/rdsk/d10     
10       sys-1:/dev/rdsk/c3t5006016930226EF3d52 /dev/did/rdsk/d10     
10       sys-1:/dev/rdsk/c3t5006016030226EF3d52 /dev/did/rdsk/d10     
10       sys-1:/dev/rdsk/c4t60060160478D19001269AABC52D0DA11d0 /dev/did/rdsk/d10     
10       sys-2:/dev/rdsk/c2t5006016830226EF3d52 /dev/did/rdsk/d10     
10       sys-2:/dev/rdsk/c2t5006016130226EF3d52 /dev/did/rdsk/d10     
10       sys-2:/dev/rdsk/c3t5006016030226EF3d52 /dev/did/rdsk/d10     
10       sys-2:/dev/rdsk/c3t5006016930226EF3d52 /dev/did/rdsk/d10     
11       sys-1:/dev/rdsk/c2t5006016830226EF3d51 /dev/did/rdsk/d11     
11       sys-1:/dev/rdsk/c2t5006016130226EF3d51 /dev/did/rdsk/d11     
11       sys-1:/dev/rdsk/c3t5006016930226EF3d51 /dev/did/rdsk/d11     
11       sys-1:/dev/rdsk/c3t5006016030226EF3d51 /dev/did/rdsk/d11     
11       sys-1:/dev/rdsk/c4t60060160478D1900F0377FC752D0DA11d0 /dev/did/rdsk/d11     
11       sys-2:/dev/rdsk/c2t5006016830226EF3d51 /dev/did/rdsk/d11     
11       sys-2:/dev/rdsk/c2t5006016130226EF3d51 /dev/did/rdsk/d11     
11       sys-2:/dev/rdsk/c3t5006016030226EF3d51 /dev/did/rdsk/d11     
11       sys-2:/dev/rdsk/c3t5006016930226EF3d51 /dev/did/rdsk/d11     
12       sys-1:/dev/rdsk/c2t5006016830226EF3d50 /dev/did/rdsk/d12     
12       sys-1:/dev/rdsk/c2t5006016130226EF3d50 /dev/did/rdsk/d12     
12       sys-1:/dev/rdsk/c3t5006016930226EF3d50 /dev/did/rdsk/d12     
12       sys-1:/dev/rdsk/c3t5006016030226EF3d50 /dev/did/rdsk/d12     
12       sys-1:/dev/rdsk/c4t60060160478D190082D97FD952D0DA11d0 /dev/did/rdsk/d12     
12       sys-2:/dev/rdsk/c2t5006016830226EF3d50 /dev/did/rdsk/d12     
12       sys-2:/dev/rdsk/c2t5006016130226EF3d50 /dev/did/rdsk/d12     
12       sys-2:/dev/rdsk/c3t5006016030226EF3d50 /dev/did/rdsk/d12     
12       sys-2:/dev/rdsk/c3t5006016930226EF3d50 /dev/did/rdsk/d12



Create the metaset disk set 
#metadb -a -f -c 3 c1t0d0s7 (also run on sys-2) 
# metadb 
        flags           first blk       block count 
     a        u         16              8192            /dev/dsk/c1t0d0s7 
     a        u         8208            8192            /dev/dsk/c1t0d0s7 
     a        u         16400           8192            /dev/dsk/c1t0d0s7 
# metaset -s oraset -a -h sys-1 sys-2 
# metaset -s oraset -t 
# metaset 

Set name = oraset, Set number = 1 

Host                Owner 
        sys-1         yes 
        sys-2          
Add the shared DID devices to the oraset set: 
#metaset -s oraset -a /dev/did/rdsk/d9 /dev/did/rdsk/d10 \ 
 /dev/did/rdsk/d11 /dev/did/rdsk/d12 /dev/did/rdsk/d13 \ 
/dev/did/rdsk/d14 /dev/did/rdsk/d15 /dev/did/rdsk/d16 \ 
/dev/did/rdsk/d17 /dev/did/rdsk/d18 /dev/did/rdsk/d19 \ 
/dev/did/rdsk/d20 /dev/did/rdsk/d21 /dev/did/rdsk/d22 \ 
/dev/did/rdsk/d23 /dev/did/rdsk/d24 /dev/did/rdsk/d25 \ 
/dev/did/rdsk/d26 /dev/did/rdsk/d27 /dev/did/rdsk/d28 \ 
/dev/did/rdsk/d29 /dev/did/rdsk/d30 /dev/did/rdsk/d31 \ 
/dev/did/rdsk/d32 /dev/did/rdsk/d33 /dev/did/rdsk/d34 \ 
/dev/did/rdsk/d35 /dev/did/rdsk/d36 /dev/did/rdsk/d37 \ 
/dev/did/rdsk/d38 /dev/did/rdsk/d39 /dev/did/rdsk/d40 \ 
/dev/did/rdsk/d41 /dev/did/rdsk/d42 /dev/did/rdsk/d43 \ 
/dev/did/rdsk/d44 /dev/did/rdsk/d45 /dev/did/rdsk/d46 \ 
/dev/did/rdsk/d47 /dev/did/rdsk/d48 /dev/did/rdsk/d49 \ 
/dev/did/rdsk/d50 /dev/did/rdsk/d51 /dev/did/rdsk/d52 \ 
/dev/did/rdsk/d53 /dev/did/rdsk/d54 /dev/did/rdsk/d55 \ 
/dev/did/rdsk/d56 /dev/did/rdsk/d57 /dev/did/rdsk/d58 \ 
/dev/did/rdsk/d59 /dev/did/rdsk/d60 /dev/did/rdsk/d61 \ 
/dev/did/rdsk/d62 
Create a RAID 0 concatenation across 13 disks: 
# metainit oraset/d110 13 1 /dev/did/rdsk/d9s0 1 \ 
/dev/did/rdsk/d10s0 1 /dev/did/rdsk/d11s0 1 \ 
/dev/did/rdsk/d12s0 1 /dev/did/rdsk/d13s0 1 \ 
/dev/did/rdsk/d14s0 1 /dev/did/rdsk/d15s0 1 \ 
/dev/did/rdsk/d16s0 1 /dev/did/rdsk/d17s0 1 \ 
/dev/did/rdsk/d18s0 1 /dev/did/rdsk/d19s0 1 \ 
/dev/did/rdsk/d20s0 1 /dev/did/rdsk/d21s0 
oraset/d110: Concat/Stripe is setup 
Note: the difference between a concatenation and a stripe. 
     RAID 0 combines the space of several disks into one large logical volume. A concatenation chains the disks end to end, one after another; a stripe divides each disk into fixed-size chunks and interleaves those chunks (regardless of which disk each chunk lives on) into the logical volume. 
     In use, a concatenation fills one physical disk completely before moving on to the next, while a stripe can read and write chunks on several physical disks at the same time, so a stripe gives better I/O performance. 
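The distinction maps directly onto metainit syntax: a concatenation is declared as several one-component rows, a stripe as one row spanning all components. A sketch for contrast (oraset/d200 and oraset/d201 are illustrative names, not part of this installation; these commands only run on a Solaris host with SVM):

```shell
# Concatenation: 3 rows of width 1; disks are filled one after another.
metainit oraset/d200 3 1 /dev/did/rdsk/d9s0 \
                       1 /dev/did/rdsk/d10s0 \
                       1 /dev/did/rdsk/d11s0

# Stripe: 1 row of width 3 with a 32 KB interlace; I/O is spread
# across all three disks at once.
metainit oraset/d201 1 3 /dev/did/rdsk/d9s0 \
                         /dev/did/rdsk/d10s0 \
                         /dev/did/rdsk/d11s0 -i 32k
```

The d110 command above uses the first form with 13 one-component rows.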
Create soft partitions on oraset/d110 to hold the Oracle database files: 
    # metainit oraset/d111 -p d110 50m 
oraset/d111: Soft Partition is setup 
# metainit oraset/d112 -p d110 50m 
oraset/d112: Soft Partition is setup 
# metainit oraset/d113 -p d110 50m 
oraset/d113: Soft Partition is setup 
# metainit oraset/d114 -p d110 1024m 
oraset/d114: Soft Partition is setup 
# metainit oraset/d115 -p d110 1024m 
oraset/d115: Soft Partition is setup 
# metainit oraset/d116 -p d110 1024m 
oraset/d116: Soft Partition is setup 
# metainit oraset/d117 -p d110 1024m 
oraset/d117: Soft Partition is setup 
# metainit oraset/d118 -p d110 1024m 
oraset/d118: Soft Partition is setup 
# metainit oraset/d119 -p d110 2048m 
oraset/d119: Soft Partition is setup 
# metainit oraset/d120 -p d110 2048m 
oraset/d120: Soft Partition is setup 
# metainit oraset/d121 -p d110 2048m 
oraset/d121: Soft Partition is setup 
# metainit oraset/d122 -p d110 8192m 
oraset/d122: Soft Partition is setup 
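The twelve soft partitions carve a fixed amount of space out of oraset/d110. A quick arithmetic check of the total, with the sizes taken from the metainit commands above:

```shell
# Total space consumed by soft partitions d111-d122, in MB:
# 3 x 50 (control files), 5 x 1024 (sysaux, system, temp, undo, users),
# 3 x 2048 (redo logs) and 8192 (flash recovery area).
total=$(( 3*50 + 5*1024 + 3*2048 + 8192 ))
echo "${total} MB allocated from oraset/d110"   # prints "19606 MB allocated from oraset/d110"
```

The d110 concatenation of 13 LUNs must therefore offer at least this much space, plus the soft-partition metadata overhead.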
Change the ownership of the newly created raw devices: 
    # chown oracle /dev/md/oraset/rdsk/d* 
# chgrp dba /dev/md/oraset/rdsk/d* 
# chmod 600 /dev/md/oraset/rdsk/d* 
# ls -lL /dev/md/oraset/rdsk/d1* 
Note: the above must be performed on both hosts. 
   
9. Install the Oracle 10g software 
Obtain the Oracle 10g (10.2.0.1.0) media. 
Set up the Oracle installation environment. 
Note: the following must be performed on both hosts. 
● Create the groups and user required for installation: 
#groupadd oinstall 
#groupadd dba 
#useradd -d /export/home/oracle -g oinstall -G dba -m oracle 
#passwd oracle 
● Create the installation directories: 
#mkdir /oracle 
#mkdir /oracle/oradata 
#chown -R oracle:oinstall /oracle/oradata 
#chmod 755 /oracle/oradata 
● Set the oracle user's environment variables: 
#su - oracle 
#vi .profile 
Add the following: 
# This is the default standard profile provided to a user. 
# They are expected to edit it to meet their own needs. 

MAIL=/usr/mail/${LOGNAME:?} 

umask 022 
ORACLE_BASE=/oracle;export ORACLE_BASE 
ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1 
export ORACLE_HOME



ORACLE_SID=orcl;export ORACLE_SID 
PATH=$ORACLE_HOME/bin:/usr/bin:/usr/ucb:/etc:/usr/openwin/bin:/usr/ccs/bin 
export PATH 
Install the Oracle software 
Note: on sys-1 only; do not create a database during the installation. 
#cd /cdrom/cdrom0 
#./runInstaller 
Installation steps omitted ... 
10. Link the Oracle database files to the raw devices 
Note: on sys-1 only. 
# su - oracle 
$ mkdir -p /oracle/oradata/orcl 
$ cd /oracle/oradata/orcl 
$ ln -s /dev/md/oraset/rdsk/d111 control01.ctl 
$ ln -s /dev/md/oraset/rdsk/d112 control02.ctl 
$ ln -s /dev/md/oraset/rdsk/d113 control03.ctl 
$ ln -s /dev/md/oraset/rdsk/d114 sysaux01.dbf 
$ ln -s /dev/md/oraset/rdsk/d115 system01.dbf 
$ ln -s /dev/md/oraset/rdsk/d116 temp01.dbf 
$ ln -s /dev/md/oraset/rdsk/d117 undotbs01.dbf 
$ ln -s /dev/md/oraset/rdsk/d118 users01.dbf 
$ ln -s /dev/md/oraset/rdsk/d119 redo01.log 
$ ln -s /dev/md/oraset/rdsk/d120 redo02.log 
$ ln -s /dev/md/oraset/rdsk/d121 redo03.log 
$ mkdir flash_recovery_area 
$ ln -s /dev/md/oraset/rdsk/d122 flash_recovery_area 
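Because dbca will fail late if one of the links above points at a mistyped device, it is worth checking them first. A small helper (check_links is a hypothetical name, not from the original post) that reports any symlink whose target does not resolve:

```shell
# Print every symbolic link under the given directory whose target
# does not exist (e.g. a typo in one of the ln -s commands above).
check_links() {
    for f in "$1"/*; do
        # -h: f is a symlink; ! -e: its target does not resolve
        if [ -h "$f" ] && [ ! -e "$f" ]; then
            echo "broken link: $f"
        fi
    done
}

# On sys-1:
#   check_links /oracle/oradata/orcl
```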
11. Create the Oracle database 
    Note: on sys-1 only. 
Log in to sys-1 graphically and run dbca to create the database: 
#cd /oracle/p*/*/*/bin 
#./dbca 
Click Next through the first three screens. 
Enter Database Name: orcl and SID: orcl, then click Next. 

  

Click Next. 
Enter the same password (oracle) for all accounts and click Next. 
Under Storage Options, select Raw Devices and click Next. 
Set Flash Recovery Area to {ORACLE_BASE}/oradata/flash_recovery_area 
and Flash Recovery Size to 4096 MB, then click Next. 
Make the file selections as shown below, then click Next: 

  
File name                          File Directory 
control01.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
control02.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
control03.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
system01.dbf               {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
undotbs01.dbf              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
sysaux01.dbf               {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
users01.dbf                {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
redo01.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
redo02.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
redo03.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
Click Next. 
Click Finish. 
Click OK. 
When the database creation completes, exit dbca. 



12. Create the listener 
Note: on sys-1 only. 
# cd /oracle/p*/*/*/bin 
# netca 
Click Next on each screen to complete the listener creation. 



13. Start the database; then tar the Oracle software directory on sys-1, ftp it to sys-2, and extract it 

① Start the database on sys-1 for a test: 
# su - oracle 
$ sqlplus "/ as sysdba" 
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Apr 23 14:14:12 2006 

Copyright (c) 1982, 2005, Oracle.  All rights reserved. 

Connected to an idle instance. 

SQL> startup 
ORACLE instance started. 

Total System Global Area 4294967296 bytes 
Fixed Size                  1984144 bytes 
Variable Size             805312880 bytes 
Database Buffers         3472883712 bytes 
Redo Buffers               14786560 bytes 
Database mounted. 
Database opened. 
SQL> shutdown immediate; 
Database closed. 
Database dismounted. 
ORACLE instance shut down. 
SQL>exit 
Create a database connection user and grant it privileges: 
    SQL> create user oracle identified by oracle; 

User created. 

SQL> grant connect, resource to oracle; 

Grant succeeded. 

SQL> 
② Tar the Oracle software directory on sys-1, ftp it to sys-2, and extract it. 
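The post gives no commands for this step; a minimal sketch, assuming the software tree lives under /oracle as installed above and /tmp has room for the archive (these commands are only meaningful on the two Solaris nodes):

```shell
# On sys-1: pack the Oracle software tree with a relative path,
# so it extracts cleanly from / on the other node.
cd / && tar cf /tmp/oracle.tar oracle

# Transfer /tmp/oracle.tar to sys-2 with ftp in binary mode, then on sys-2:
cd / && tar xf /tmp/oracle.tar
```

Extracting as root preserves the oracle:oinstall ownership recorded in the archive.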
Start the database on sys-2 to test: 
# su - oracle 
$ sqlplus "/ as sysdba" 
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Apr 23 14:14:12 2006 

Copyright (c) 1982, 2005, Oracle.  All rights reserved. 

Connected to an idle instance. 

SQL> startup 
ORACLE instance started. 

Total System Global Area 4294967296 bytes 
Fixed Size                  1984144 bytes 
Variable Size             805312880 bytes 
Database Buffers         3472883712 bytes 
Redo Buffers               14786560 bytes 
Database mounted. 
Database opened. 
SQL> shutdown immediate; 
Database closed. 
Database dismounted. 
ORACLE instance shut down. 
SQL>exit 
Edit /oracle/product/10.2.0.1.0/db_1/network/admin/listener.ora 
 and /oracle/product/10.2.0.1.0/db_1/network/admin/tnsnames.ora. 
Note: perform this on both hosts. 
Replace sys-1 with 192.168.22.17. 
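The substitution can be scripted; a sketch (fix_net_files is a hypothetical helper, and the admin directory is passed as a parameter so the same function can be run on either node):

```shell
# Replace the node name sys-1 with the logical host address
# 192.168.22.17 in listener.ora and tnsnames.ora under the
# given network/admin directory.
fix_net_files() {
    admin="$1"    # e.g. /oracle/product/10.2.0.1.0/db_1/network/admin
    for f in listener.ora tnsnames.ora; do
        sed 's/sys-1/192.168.22.17/g' "$admin/$f" > "$admin/$f.new" &&
            mv "$admin/$f.new" "$admin/$f"
    done
}
```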
   
14. Add the Oracle agent 
     Note: add it on both hosts. 
     # ./scinstall 


  *** Main Menu *** 

    Please select from one of the following (*) options: 

        1) Install a cluster or cluster node 
        2) Configure a cluster to be JumpStarted from this install server 
      * 3) Add support for new data services to this cluster node 
      * 4) Print release information for this cluster node 

      * ?) Help with menu options 
      * q) Quit 

    Option:   

*** Adding Data Service Software *** 


    This option is used to install data services software. 

    Where is the data services CD [/cdrom/cdrom0]?  /export/soft/sc-agents-3_1_904-sparc 

    Select the data services you want to install: 

           Identifier     Description                                        

        1) pax            Sun Cluster HA for AGFA IMPAX 
        2) tomcat         Sun Cluster HA for Apache Tomcat 
        3) apache         Sun Cluster HA for Apache 
        4) wls            Sun Cluster HA for BEA WebLogic Server 
        5) dhcp           Sun Cluster HA for DHCP 
        6) dns            Sun Cluster HA for DNS 
        7) ebs            Sun Cluster HA for Oracle E-Business Suite 
        8) mqi            Sun Cluster HA for WebSphere MQ Integrator 
        9) mqs            Sun Cluster HA for WebSphere MQ 
       10) mys            Sun Cluster HA for MySQL 

        n) Next > 
        q) Done 

    Option(s):  n 

    Select the data services you want to install: 

           Identifier     Description                                        

       11) sps            Sun Cluster HA for N1 Grid Service Provisioning 
       12) nfs            Sun Cluster HA for NFS 
       13) netbackup      Sun Cluster HA for NetBackup 
       14) 9ias           Sun Cluster HA for Oracle9i Application Server 
       15) oracle         Sun Cluster HA for Oracle 
       16) sapdb          Sun Cluster HA for SAPDB 
       17) sapwebas       Sun Cluster HA for SAP Web Application Server 
       18) sap            Sun Cluster HA for SAP 
       19) livecache      Sun Cluster HA for SAP liveCache 
       20) sge            Sun Cluster HA for Sun Grid Engine 

        p) < Previous 
        n) Next > 
        q) Done 

    Option(s):  15 
     Selected:  15 

    Option(s):  q 


    This is the complete list of data services you selected: 

        oracle 

    Is it correct (yes/no) [yes]?   

    Is it okay to add the software for this data service [yes]   


scinstall -ik -s oracle -d /export/soft/sc-agents-3_1_904-sparc 


** Installing Sun Cluster HA for Oracle ** 
        SUNWscor....done 
        SUNWcscor...done 
        SUNWjscor...done 

     
Press Enter to continue:  s 


15. Create the quorum devices 
      Note: on sys-1 only. 
     # scconf -a -q globaldev=d9 
     # scconf -a -q globaldev=d10 
     # scconf -c -q reset
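In a two-node cluster each node carries one vote, and a quorum device connected to N nodes carries N-1 votes, so the two devices added above bring the total to 4 possible votes with a majority of 3; this matches the scstat Quorum Summary shown later in this post. The arithmetic (an illustration, not a cluster command):

```shell
# Quorum vote arithmetic for this two-node, two-device configuration.
nodes=2
qdevices=2                        # d9 and d10
possible=$(( nodes + qdevices * (nodes - 1) ))
needed=$(( possible / 2 + 1 ))    # simple majority
echo "possible=$possible needed=$needed"   # prints "possible=4 needed=3"
```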



16. Create the resource group and add resources 
     Note: on sys-1 only. 
Register the resource types: 
         # scrgadm -a -t SUNW.oracle_server 
# scrgadm -a -t SUNW.oracle_listener 
Create an empty resource group: 
# scrgadm -a -g orareg 
Add the IP resource (create a LogicalHostname resource): 
   # scrgadm -a -L -g orareg -l oracle 
Add the storage resource (create an HAStoragePlus resource; GlobalDevicePaths takes a single comma-separated list, so repeating -x GlobalDevicePaths would keep only the last value): 
# scrgadm -a -j oradata -g orareg -t SUNW.HAStoragePlus \ 
-x GlobalDevicePaths=/dev/md/oraset/rdsk/d112,\ 
/dev/md/oraset/rdsk/d113,/dev/md/oraset/rdsk/d114,\ 
/dev/md/oraset/rdsk/d115,/dev/md/oraset/rdsk/d116,\ 
/dev/md/oraset/rdsk/d117,/dev/md/oraset/rdsk/d118,\ 
/dev/md/oraset/rdsk/d119,/dev/md/oraset/rdsk/d120,\ 
/dev/md/oraset/rdsk/d121,/dev/md/oraset/rdsk/d122 
Add the application resource (create an oracle_server resource): 
# scrgadm -a -j oraser -g orareg \ 
-t SUNW.oracle_server \ 
-x ORACLE_SID=orcl \ 
-x ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1 \ 
-x Alert_log_file=/oracle/admin/orcl/bdump/alert_orcl.log \ 
-x Parameter_file=/oracle/admin/orcl/pfile/init.ora \ 
-x Connect_string=oracle/oracle 

Add the listener resource (create an oracle_listener resource): 
# scrgadm -a -j oralistener -g orareg -t SUNW.oracle_listener \ 
-x ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1 \ 
-x LISTENER_NAME=LISTENER 
Bring the resource group online: 
# scswitch -Z -g orareg 
Check the cluster status: 
    # scstat 
------------------------------------------------------------------ 

-- Cluster Nodes -- 

                    Node name           Status 
                    ---------           ------ 
  Cluster node:     sys-1          Online 
  Cluster node:     sys-2          Online 

------------------------------------------------------------------ 

-- Cluster Transport Paths -- 

                    Endpoint            Endpoint            Status 
                    --------            --------            ------ 
  Transport path:   sys-1:ce1      sys-2:ce1      Path online 
  Transport path:   sys-1:ce0      sys-2:ce0      Path online 

------------------------------------------------------------------ 

-- Quorum Summary -- 

  Quorum votes possible:      4 
  Quorum votes needed:        3 
  Quorum votes present:       4 


-- Quorum Votes by Node -- 

                    Node Name           Present Possible Status 
                    ---------           ------- -------- ------ 
  Node votes:       sys-1          1        1       Online 
  Node votes:       sys-2          1        1       Online 


-- Quorum Votes by Device -- 

                    Device Name         Present Possible Status 
                    -----------         ------- -------- ------ 
  Device votes:     /dev/did/rdsk/d9s2  1        1       Online 
  Device votes:     /dev/did/rdsk/d10s2 1        1       Online 

------------------------------------------------------------------ 

-- Device Group Servers -- 

                         Device Group        Primary             Secondary 
                         ------------        -------             --------- 
  Device group servers:  oraset              sys-1          sys-2 

-- Device Group Status -- 

                              Device Group        Status               
                              ------------        ------               
  Device group status:        oraset              Online 


-- Multi-owner Device Groups -- 

                              Device Group        Online Status 
                              ------------        ------------- 

------------------------------------------------------------------ 

-- Resource Groups and Resources -- 

            Group Name          Resources 
            ----------          --------- 
 Resources: orareg              oracle oradata oraser oralistener 


-- Resource Groups -- 

            Group Name          Node Name           State 
            ----------          ---------           ----- 
     Group: orareg              sys-1          Online 
     Group: orareg              sys-2          Offline 


-- Resources -- 

            Resource Name       Node Name           State     Status Message 
            -------------       ---------           -----     -------------- 
  Resource: oracle              sys-1          Online    Online - LogicalHostname online. 
  Resource: oracle              sys-2          Offline   Offline - LogicalHostname offline. 

  Resource: oradata             sys-1          Online    Online 
  Resource: oradata             sys-2          Offline   Offline 

  Resource: oraser              sys-1          Online    Online 
  Resource: oraser              sys-2          Offline   Offline 

  Resource: oralistener         sys-1          Online    Online 
  Resource: oralistener         sys-2          Offline   Offline 

------------------------------------------------------------------ 

-- IPMP Groups -- 

              Node Name           Group   Status         Adapter   Status 
              ---------           -----   ------         -------   ------ 
  IPMP Group: sys-1          xxml    Online         eri0      Online 
  IPMP Group: sys-1          xxml    Online         ce3       Standby 

  IPMP Group: sys-2          xxml    Online         eri0      Online 
  IPMP Group: sys-2          xxml    Online         ce3       Standby
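With all resources online on sys-1, a controlled switchover confirms that the group can actually fail over (standard Sun Cluster 3.1 commands, run as root on either node):

```shell
# Move the orareg resource group to sys-2, check, then move it back.
scswitch -z -g orareg -h sys-2
scstat -g
scswitch -z -g orareg -h sys-1
```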

From the "ITPUB Blog"; link: http://blog.itpub.net/22531473/viewspace-671481/. Please credit the source when reprinting; legal liability will otherwise be pursued.
