Installing Sun Cluster on Solaris 10 x86

Posted by liuzhen_basis on 2014-10-09

    Environment Preparation

    Using VMware Workstation.

    Install two x86 Solaris operating systems, with hostnames cluster01 and cluster02.

    Each guest has three network adapters plus virtual switches, configured as shown in the figures below.

    [Screenshots: VMware network adapter settings for cluster01 and cluster02]



    The vnet configuration is as follows:

    [Screenshot: VMware virtual network (vnet) configuration]
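
    Since the screenshots are not reproduced, here is a sketch of the assumed layout. The adapter roles follow from the scinstall transcript later in this article; the vnet assignments themselves are illustrative:

    e1000g0 -> public network, 192.168.221.0/24 (bridged/NAT vnet)

    e1000g1 -> private interconnect 1 (host-only vnet, DHCP disabled)

    e1000g2 -> private interconnect 2 (host-only vnet, DHCP disabled)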



    1. Installing the Cluster Software

    Install the cluster software on cluster01.

    [Screenshots: Sun Cluster software installer wizard steps]

    Perform the same cluster software installation on cluster02.

    2. Creating the Cluster

    Run ./scinstall on cluster01.
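
    A minimal sketch of the invocation, assuming the default Sun Cluster installation path (consistent with the /usr/cluster/bin/scdidadm path used later in this article):

    # cd /usr/cluster/bin

    # ./scinstall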

    [Screenshot: scinstall main menu]

    Enter 1 to create a new cluster.

    [Screenshots: scinstall new-cluster prompts]

    Supplementary steps

    On each cluster node, log in as root and create the /.rhosts file with the following content:

    # cat /.rhosts

    +
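
    A minimal way to create the file on each node. Note that the lone "+" trusts every host for rsh/rlogin, which is acceptable only in an isolated lab:

    # echo "+" > /.rhosts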

    Add each node to the other's /etc/hosts; only the 192.168.221.x (public) subnet entries are needed.

    [Screenshots: /etc/hosts entries on both nodes]
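
    For illustration, the /etc/hosts entries on both nodes would look like the following; the .101/.102 addresses are hypothetical, since only the 192.168.221.x subnet is stated:

    192.168.221.101 cluster01

    192.168.221.102 cluster02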

    Continue and choose the Typical installation mode.

    [Screenshot: Typical/Custom mode selection]

    Enter the cluster name; here it is named sap.

    [Screenshot: cluster name prompt]

    Enter the hostnames of the nodes.

    [Screenshot: node name entry]

    Press Ctrl+D to finish the input.

    [Screenshot: node list confirmation]

    Accept the default:

    >>> Authenticating Requests to Add Nodes <<<

    Once the first node establishes itself as a single node cluster, other

    nodes attempting to add themselves to the cluster configuration must

    be found on the list of nodes you just provided. You can modify this

    list by using claccess(1CL) or other tools once the cluster has been

    established.

    By default, nodes are not securely authenticated as they attempt to

    add themselves to the cluster configuration. This is generally

    considered adequate, since nodes which are not physically connected to

    the private cluster interconnect will never be able to actually join

    the cluster. However, DES authentication is available. If DES

    authentication is selected, you must configure all necessary

    encryption keys before any node will be allowed to join the cluster

    (see keyserv(1M), publickey(4)).

    Do you need to use DES authentication (yes/no) [no]?

    Accept the default:

    >>> Minimum Number of Private Networks <<<

    Each cluster is typically configured with at least two private

    networks. Configuring a cluster with just one private interconnect

    provides less availability and will require the cluster to spend more

    time in automatic recovery if that private interconnect fails.

    Should this cluster use at least two private networks (yes/no) [yes]?

    Accept the default:

    >>> Point-to-Point Cables <<<

    The two nodes of a two-node cluster may use a directly-connected

    interconnect. That is, no cluster switches are configured. However,

    when there are greater than two nodes, this interactive form of

    scinstall assumes that there will be exactly one switch for each

    private network.

    Does this two-node cluster use switches (yes/no) [yes]?

    Accept the default:

    >>> Cluster Switches <<<

    All cluster transport adapters in this cluster must be cabled to a

    "switch". And, each adapter on a given node must be cabled to a

    different switch. Interactive scinstall requires that you identify one

    switch for each private network in the cluster.

    What is the name of the first switch in the cluster [switch1]?

    What is the name of the second switch in the cluster [switch2]?

    Select the first transport adapter and the switch it connects to:

    >>> Cluster Transport Adapters and Cables <<<

    Transport adapters are the adapters that attach to the private cluster

    interconnect.

    Select the first cluster transport adapter:

    1) e1000g1

    2) e1000g2

    3) Other

    Option: 1

    Will this be a dedicated cluster transport adapter (yes/no) [yes]?

    Adapter "e1000g1" is an Ethernet adapter.

    Searching for any unexpected network traffic on "e1000g1" ... done

    Verification completed. No traffic was detected over a 10 second

    sample period.

    The "dlpi" transport type will be set for this cluster.

    Name of the switch to which "e1000g1" is connected [switch1]? e1000g1

    Unknown switch.

    Name of the switch to which "e1000g1" is connected [switch1]?

    Accept the default port name for e1000g1, then select the second transport adapter and its switch:

    Each adapter is cabled to a particular port on a switch. And, each

    port is assigned a name. You can explicitly assign a name to each

    port. Or, for Ethernet and Infiniband switches, you can choose to

    allow scinstall to assign a default name for you. The default port

    name assignment sets the name to the node number of the node hosting

    the transport adapter at the other end of the cable.

    Use the default port name for the "e1000g1" connection (yes/no) [yes]?

    Select the second cluster transport adapter:

    1) e1000g1

    2) e1000g2

    3) Other

    Option: 2

    Will this be a dedicated cluster transport adapter (yes/no) [yes]?

    Adapter "e1000g2" is an Ethernet adapter.

    Searching for any unexpected network traffic on "e1000g2" ... done

    Verification completed. No traffic was detected over a 10 second

    sample period.

    The "dlpi" transport type will be set for this cluster.

    Name of the switch to which "e1000g2" is connected [switch2]?

    Use the default port name for the "e1000g2" connection (yes/no) [yes]?

    >>> Network Address for the Cluster Transport <<<

    The cluster transport uses a default network address of 172.16.0.0. If

    this IP address is already in use elsewhere within your enterprise,

    specify another address from the range of recommended private

    addresses (see RFC 1918 for details).

    The default netmask is 255.255.240.0. You can select another netmask,

    as long as it minimally masks all bits that are given in the network

    address.

    The default private netmask and network address result in an IP

    address range that supports a cluster with a maximum of 32 nodes, 10

    private networks, and 12 virtual clusters.

    Is it okay to accept the default network address (yes/no) [yes]?

    Is it okay to accept the default netmask (yes/no) [yes]?

    Plumbing network address 172.16.0.0 on adapter e1000g1 >> NOT DUPLICATE ... done

    Plumbing network address 172.16.0.0 on adapter e1000g2 >> NOT DUPLICATE ... done
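
    As a sanity check on those numbers: 255.255.240.0 is a /20 mask, so the private range holds 2^(32-20) = 4096 addresses, which is what lets the defaults accommodate 32 nodes, 10 private networks, and 12 virtual clusters.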

    Accept the default:

    >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on

    /global/.devices/node@<nodeID> before it can successfully participate

    as a cluster member. Since the "nodeID" is not assigned until

    scinstall is run, scinstall will set this up for you.

    You must supply the name of either an already-mounted file system or a

    raw disk partition which scinstall can use to create the global

    devices file system. This file system or partition should be at least

    512 MB in size.

    Alternatively, you can use a loopback file (lofi), with a new file

    system, and mount it on /global/.devices/node@<nodeID>.

    If an already-mounted file system is used, the file system must be

    empty. If a raw disk partition is used, a new file system will be

    created for you.

    If the lofi method is used, scinstall creates a new 100 MB file system

    from a lofi device by using the file /.globaldevices. The lofi method

    is typically preferred, since it does not require the allocation of a

    dedicated disk slice.

    The default is to use lofi.

    Is it okay to use this default (yes/no) [yes]?
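
    Once the nodes reboot into cluster mode, a quick hedged check that the lofi-backed global-devices file system is mounted (the nodeID is 1 on the first node, 2 on the second):

    # df -k /global/.devices/node@1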

    3. Configuring Shared Storage and the Quorum Device

    Use Openfiler to present iSCSI shared storage.

    [Screenshot: Openfiler iSCSI target configuration]



    Check that the Solaris iSCSI packages are installed:

    # pkginfo SUNWiscsiu SUNWiscsir

    system SUNWiscsir Sun iSCSI Device Driver (root)

    system SUNWiscsiu Sun iSCSI Management Utilities (usr)

    Run the following commands in turn:

    iscsiadm add static-config iqn.2006-01.com.openfiler:tsn.b44c980dd213,192.168.221.99:3260

    iscsiadm modify discovery -s enable

    devfsadm -i iscsi

    iscsiadm list target
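
    To confirm that the LUN also maps to an OS device, the standard Solaris 10 checks are iscsiadm's -S flag and the format utility; in this walkthrough the LUN later appears as c3t1d0:

    # iscsiadm list target -S

    # echo | format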



    To remove the iSCSI discovery address, run the following command:

    iscsiadm remove discovery-address 192.168.221.99:3260
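
    Since the target was added with static-config rather than a SendTargets discovery address, a more symmetric cleanup (an assumption based on the standard Solaris 10 iscsiadm subcommands) would be:

    # iscsiadm remove static-config iqn.2006-01.com.openfiler:tsn.b44c980dd213,192.168.221.99:3260

    # iscsiadm modify discovery -s disable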



    View the shared devices:

    # ./scdidadm -L

    1 cluster01:/dev/rdsk/c0d0 /dev/did/rdsk/d1

    1 cluster02:/dev/rdsk/c0d0 /dev/did/rdsk/d1



    The shared disk was not picked up above, so scan for shared devices:

    # ./scgdevs

    Configuring DID devices

    /usr/cluster/bin/scdidadm: Inquiry on device "/dev/rdsk/c0d0s2" failed.

    did instance 2 created.

    did subpath cluster01:/dev/rdsk/c3t1d0 created for instance 2.

    Configuring the /dev/global directory (global devices)

    obtaining access to all attached disks



    View the shared devices again:

    # ./scdidadm -L

    1 cluster01:/dev/rdsk/c0d0 /dev/did/rdsk/d1

    1 cluster02:/dev/rdsk/c0d0 /dev/did/rdsk/d1

    2 cluster01:/dev/rdsk/c3t1d0 /dev/did/rdsk/d2

    2 cluster02:/dev/rdsk/c3t1d0 /dev/did/rdsk/d2

    The shared disk is now recognized. The scan was run once, on cluster01 only, and both nodes can now see the device.
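
    For reference, Sun Cluster also provides a per-node rescan through scdidadm's reconfigure option, which can be used instead of scgdevs:

    # ./scdidadm -r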



    Configure the quorum device:

    # ./scsetup

    >>> Initial Cluster Setup <<<

    This program has detected that the cluster "installmode" attribute is

    still enabled. As such, certain initial cluster setup steps will be

    performed at this time. This includes adding any necessary quorum

    devices, then resetting both the quorum vote counts and the

    "installmode" property.

    Please do not proceed if any additional nodes have yet to join the

    cluster.

    Is it okay to continue (yes/no) [yes]? yes

    Do you want to add any quorum devices (yes/no) [yes]? yes

    Following are supported Quorum Devices types in Oracle Solaris

    Cluster. Please refer to Oracle Solaris Cluster documentation for

    detailed information on these supported quorum device topologies.

    What is the type of device you want to use?

    1) Directly attached shared disk

    2) Network Attached Storage (NAS) from Network Appliance

    3) Quorum Server

    q) Return to the quorum menu

    Option: 1

    >>> Add a SCSI Quorum Disk <<<

    A SCSI quorum device is considered to be any Oracle Solaris Cluster

    supported attached storage which connected to two or more nodes of the

    cluster. Dual-ported SCSI-2 disks may be used as quorum devices in

    two-node clusters. However, clusters with more than two nodes require

    that SCSI-3 PGR disks be used for all disks with more than two

    node-to-disk paths.

    You can use a disk containing user data or one that is a member of a

    device group as a quorum device.

    For more information on supported quorum device topologies, see the

    Oracle Solaris Cluster documentation.

    Is it okay to continue (yes/no) [yes]? yes

    Which global device do you want to use (d)? d2

    Is it okay to proceed with the update (yes/no) [yes]? yes

    scconf -a -q globaldev=d2

    Command completed successfully.

    Press Enter to continue:

    Do you want to add another quorum device (yes/no) [yes]? no

    Once the "installmode" property has been reset, this program will skip

    "Initial Cluster Setup" each time it is run again in the future.

    However, quorum devices can always be added to the cluster using the

    regular menu options. Resetting this property fully activates quorum

    settings and is necessary for the normal and safe operation of the

    cluster.

    Is it okay to reset "installmode" (yes/no) [yes]? yes

    scconf -c -q reset

    scconf -a -T node=.

    Cluster initialization is complete.

    Type ENTER to proceed to the main menu:

    The quorum device was configured successfully.



    Check the cluster status:

    # ./scstat -p

    ------------------------------------------------------------------

    -- Cluster Nodes --

    Node name Status

    --------- ------

    Cluster node: cluster01 Online

    Cluster node: cluster02 Online

    ------------------------------------------------------------------

    -- Cluster Transport Paths --

    Endpoint Endpoint Status

    -------- -------- ------

    Transport path: cluster01:e1000g2 cluster02:e1000g2 Path online

    Transport path: cluster01:e1000g1 cluster02:e1000g1 Path online

    ------------------------------------------------------------------

    -- Quorum Summary from latest node reconfiguration --

    Quorum votes possible: 3

    Quorum votes needed: 2

    Quorum votes present: 3

    -- Quorum Votes by Node (current status) --

    Node Name Present Possible Status

    --------- ------- -------- ------

    Node votes: cluster01 1 1 Online

    Node votes: cluster02 1 1 Online

    -- Quorum Votes by Device (current status) --

    Device Name Present Possible Status

    ----------- ------- -------- ------

    Device votes: /dev/did/rdsk/d2s2 1 1 Online

    ------------------------------------------------------------------

    The basic Sun Cluster configuration is now complete. Further configuration depends on the resources to be hosted, e.g. for Oracle, for SAP, and so on.

Source: ITPUB blog, http://blog.itpub.net/27771627/viewspace-1292968/ (please credit the source when reposting).
