Oracle OVM Configuration and Usage
1 Environment Overview
Environment note: because resources were limited, only two servers were available for this Oracle OVM test installation, so some features could not be exercised, such as live migration and simulating an OVS failure.
1.1 Hardware
Two physical servers: HP DL360 G7, with 4-core CPUs (model unknown), 32 GB of RAM, and 2×300 GB SAS disks each. The servers have no HBA cards and the storage does not support iSCSI, so no SAN storage could be provided.
1.2 Installation and Configuration
For this test we used KVM on Linux to virtualize the OVM and OEM hosts, laid out as follows (a sketch of creating one such guest follows the table):
| Physical server | Host IP    | Host OS      | VM IP         | Role        | Guest OS      |
|-----------------|------------|--------------|---------------|-------------|---------------|
| Server 1        | 10.0.57.11 | OL 6.5 (OVS) | 10.0.57.12-17 | hypervisor  | varies per VM |
| Server 2        | 10.0.57.7  | OL 6.3       | 10.0.57.8     | OVM Manager | OL 6.5        |
|                 |            |              | 10.0.57.9     | OEM Manager | OL 6.5        |
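As an illustration, a KVM guest such as the OVM Manager VM could be created with virt-install. This is only a sketch; every name, size, and path below is an assumption, not a value from the original environment:

[root@server2 ~]# virt-install --name ovmm --ram 8192 --vcpus 4 \
    --disk path=/var/lib/libvirt/images/ovmm.img,size=100 \
    --cdrom /isos/OracleLinux-R6-U5-x86_64-dvd.iso \
    --network bridge=br0 --os-variant rhel6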
Note: this document was written against Oracle OVM 3.2; other versions differ somewhat in the UI but are functionally similar.
The installation of OVM and OVS itself is not covered here; we move straight on to usage.
2 UI Overview
2.1 Login Window
Log in to the OVM console; the configuration uses the default port 7002. Enter the account and password: the default administrative account is admin, and the password is the one set during installation.
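Assuming a default OVM Manager 3.x installation, the console is typically reached at a URL of the following form (the hostname here is a placeholder):

https://ovmm.example.com:7002/ovm/console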
The screen first displayed after login:
2.2 Servers and VMs tab
2.3 Repositories tab
2.4 Networking tab
2.5 Storage tab
2.6 Tools and Resources tab
2.7 Jobs tab
3 Using Each Tab in Detail
3.1 Health view
Green indicates that the OVS is running normally, with load and other metrics within thresholds; orange indicates a warning; red indicates that thresholds have been seriously exceeded. With two OVS servers, an icon like the one below would indicate that one of them has a critical problem.
For example:
3.2 Statistics
Clicking Statistics brings up the screen below, showing the server pools, the VMs in each pool, the refresh interval, and status information.
3.3 Servers and VMs tab
The left pane shows the server pools along with buttons for creating server pools and more, in order: Discover Servers, Create VNICs, Create Server Pool, Create Virtual Machine, and Find.
Right-clicking Server Pools likewise shows the corresponding options.
Right-clicking an individual server pool under Server Pools shows the following:
Similarly, right-clicking an OVS within the server pool shows the following:
3.4 Repositories
A repository holds the VM templates, uploaded ISO image files, virtual disks, VM files, and so on.
A new repository can also be created here:
The main window on the right offers an option to upload ISO files:
Add a new image file; import over both HTTP and FTP is supported (a sketch of serving an ISO over HTTP follows):
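As a minimal sketch of the HTTP import path, any web server reachable from the OVS hosts will do; the host, port, directory, and ISO name below are all assumptions. On an OL6-era machine, Python's built-in server is one quick option:

[root@ovmm isos]# python -m SimpleHTTPServer 8000
# then import in the OVM console using a URL such as:
#   http://10.0.57.7:8000/OracleLinux-R6-U5-x86_64-dvd.iso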
3.5 Networks
Clicking Networks displays the following:
Clicking VLAN Groups displays the following; the + button here is Create New VLAN Group:
Clicking Virtual NICs displays the following; virtual NICs are created here and can be created in batches:
3.6 Storage
The + buttons in the left pane are, in order:
Right-clicking the corresponding SAN storage shows the following:
3.7 Tools and Resources
The Tools and Resources tab contains the following; the figure below shows the Tags view:
Clicking the NTP button shows the following:
Yum management:
Preferences:
3.8 Jobs
Recurring:
4 Creating Virtual Machines
4.1 Creating a VM from a Template
4.2 Installing from an ISO Image
When installing a VM from an ISO image, add a CD/DVD-ROM in the Disks settings and attach the ISO file, as shown below:
4.3 Building a RAC Environment from the RAC Template
4.3.1 Creating the RAC VMs
Select the template in the template library under Repositories, then click the Clone or Move Template button:
4.3.2 Modifying the NICs
A VM created from the template contains only one NIC by default, so an additional NIC must be added to meet the requirements of an Oracle RAC build.
4.3.3 Adding Shared Storage
Likewise, when deploying from the RAC template, shared storage must be added to each node for the Oracle RAC voting disks and data storage. First, create the shared disks:
Assign the shared storage to each node server, as shown below:
Follow the same steps to attach shared storage to the remaining nodes in turn, which completes the shared-storage setup.
Note: for fully virtualized (HVM) guests, no more than 3 additional disks can be attached.
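For reference, assuming the shared disks attach in order (which is how the sample configuration later in this document treats them), they surface inside the guests as the devices listed in params.ini (section 4.3.4.2):

ALLDISKS="/dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg"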
4.3.4 Configuration Files
Next, edit the configuration files on the manager host to complete the Oracle RAC template deployment. Note that the corresponding deployment scripts must be downloaded from Oracle's official site to drive the Oracle RAC configuration and installation, as shown below:
Choose the configuration file template you need based on the file name:
[root@ovmm utils]# more netconfig-sample64-11g.ini
# Node specific information ------------- node network settings
NODE1=test13 -------------- node 1 hostname
NODE1IP=192.168.1.231 -------------- node 1 public IP
NODE1PRIV=test13-priv ------------ node 1 private hostname
NODE1PRIVIP=10.10.10.231 ------------ node 1 private IP
NODE1VIP=test13-vip ------------ node 1 VIP hostname
NODE1VIPIP=192.168.1.233 ------------ node 1 VIP address
NODE2=test14
NODE2IP=192.168.1.232
NODE2PRIV=test14-priv
NODE2PRIVIP=10.10.10.232
NODE2VIP=test14-vip
NODE2VIPIP=192.168.1.234
# Common data
PUBADAP=eth0 ---------- public network adapter
PUBMASK=255.255.255.0 ---------- public netmask
PUBGW=192.168.1.1 --------- gateway
PRIVADAP=eth1 ---------- private network adapter
PRIVMASK=255.255.255.0
RACCLUSTERNAME=crs64bitR2
DOMAINNAME=localdomain # May be blank
DNSIP= # Starting from 2013 Templates allows multi value
# Device used to transfer network information to second node
# in interview mode
NETCONFIG_DEV=/dev/xvdc --------- device used to pass the network configuration
# 11gR2 specific data
SCANNAME=test13-14-scan ---------- SCAN name
SCANIP=192.168.1.235 ---------- SCAN IP
# Single Instance (description in params.ini)
# CLONE_SINGLEINSTANCE=yes # Setup Single Instance
# CLONE_SINGLEINSTANCE_HA=yes # Setup Single Instance/HA (Oracle Restart)
Modify the values in this file to suit your environment. Before editing, it is advisable to back up the file, or cp it to a copy of your own, for example:
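A minimal sketch, using the copy names that the later steps of this walkthrough reference (netconfig-my.ini, params-my.ini):

[root@ovmm utils]# cp netconfig-sample64-11g.ini netconfig-my.ini
[root@ovmm utils]# cp params-sample11g.ini params-my.ini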
4.3.4.1 Network Configuration File
[root@ovmm utils]# more netconfig-my.ini ------------ the network configuration file chosen for this build
# Node specific information
NODE1=RAC-1
NODE1IP=9.9.9.1
NODE1PRIV=RAC-1-Priv
NODE1PRIVIP=11.11.11.1
NODE1VIP=RAC-1-VIP
NODE1VIPIP=9.9.9.3
NODE2=RAC-2
NODE2IP=9.9.9.2
NODE2PRIV=RAC-2-Priv
NODE2PRIVIP=11.11.11.2
NODE2VIP=RAC-2-VIP
NODE2VIPIP=9.9.9.4
# Common data
PUBADAP=eth0
PUBMASK=255.255.255.0
PUBGW=9.9.9.10
PRIVADAP=eth1
PRIVMASK=255.255.255.0
RACCLUSTERNAME=my-cluster
DOMAINNAME=localdomain # May be blank
DNSIP= # Starting from 2013 Templates allows multi value
# Device used to transfer network information to second node
# in interview mode
NETCONFIG_DEV=/dev/xvdc
# 11gR2 specific data
SCANNAME=SCAN-my-cluster
SCANIP=9.9.9.11
# Single Instance (description in params.ini)
# CLONE_SINGLEINSTANCE=yes # Setup Single Instance
# CLONE_SINGLEINSTANCE_HA=yes # Setup Single Instance/HA (Oracle Restart)
4.3.4.2 Parameter File
[root@ovmm utils]# more params-sample11g.ini ---------- the 11g parameter file template
#
#/* Copyright 2009-2013, Oracle. All rights reserved. */
#
#
# WRITTEN BY: Oracle.
# v1.6: Jul-2013 Add Single Instance, Policy managed DB, Low memory support & DB on Filesystem
# v1.5: Aug-2012 Add resolver options
# v1.4: May-2012 Add colored logfile & unlock accounts
# v1.3: Aug-2011 Document Clusterware only
# v1.2: Jun-2011 Relink on major OS change & Post SQL scripts
# v1.1: Feb-2011 Added options for multicast checking
# v1.0: Jul-2010 Creation
#
#
# Oracle DB/RAC 11gR2 OneCommand for Oracle VM - Generic configuration file
# For Single Instance, Single Instance HA (Oracle Restart) and Oracle RAC
#
##############################################
#
# Generic Parameters
#
# NOTE: The first section holds more advanced parameters that
# should be modified by advanced users or if instructed by Oracle.
#
# See further down this file for the basic user modifiable parameters.
#
##############################################
#
# Temp directory (for OUI), optional
# Default: /tmp
TMPDIR="/tmp"
#
# Progress logfile location
# Default: $TMPDIR/progress-racovm.out
LOGFILE="$TMPDIR/progress-racovm.out"
#
# Must begin with a "+", see "man 1 date" for valid date formats, optional.
# Default: "+%Y-%m-%d %T"
LOGFILE_DATE_FORMAT=""
#
# Should 'clone.pl' be used (default no) or direct 'attach home' (default yes)
# to activate the Grid & RAC homes.
# Attach is possible in the VM since all relinking was done already
# Certain changes may still trigger a clone/relink operation such as switching
# from role to non-role separation.
# Default: yes
CLONE_ATTACH_DBHOME=yes
CLONE_ATTACH_GIHOME=yes
#
# Should a re-link be done on the Grid & RAC homes. Default is no,
# since the software was relinked in VM already. Setting it to yes
# forces a relink on both homes, and overrides the clone/attach option
# above by forcing clone operation (clone.pl)
# Default: no
CLONE_RELINK=no
#
# Should a re-link be done on the Grid & RAC homes in case of a major
# OS change; Default is yes. In case the homes are attached to a different
# major OS than they were linked against, a relink will be automatically
# performed. For example, if the homes were linked on OL5 and then used
# with an OL6 OS, or vice versa, a relink will be performed. To disable
# this automated relinking during install (cloning step), set this
# value to no (not recommended)
# Default: yes
CLONE_RELINK_ON_MAJOR_OS_CHANGE=yes
#
# The root of the oracle install must be an absolute path starting with a /
# Default: /u01/app
RACROOT="/u01/app"
#
# The location of the Oracle Inventory
# Default: $RACROOT/oraInventory
RACINVENTORYLOC="${RACROOT}/oraInventory"
#
# The location of the SOFTWARE base
# In role separated configuration GIBASE may be defined to set the location
# of the Grid home which defaults to $RACROOT/$GRIDOWNER.
# Default: $RACROOT/$RACOWNER
RACBASE="${RACROOT}/oracle"
#
# The location of the Grid home, must be set in RAC or Single Instance HA deployments
# Default: $RACROOT/11.2.0/grid
GIHOME="${RACROOT}/11.2.0/grid"
#
# The location of the DB RAC home, must be set in non-Clusterware only deployments
# Default: ${RACBASE}/product/11.2.0/dbhome_1
DBHOME="${RACBASE}/product/11.2.0/dbhome_1"
#
# The disk string used to discover ASM disks, it should cover all disks
# on all nodes, even if their physical names differ. It can also hold
# ASMLib syntax, e.g. ORCL:VOL*, and have as many elements as needed
# separated by space, tab or comma.
# Do not remove the "set -/+o noglob" options below, they are required
# so that discovery string don't expand on assignment.
set -o noglob
RACASMDISKSTRING="/dev/xvd[c-g]1"
set +o noglob
#
# Provide list of devices or actual partitions to use. If actual
# partition number is specified no partitioning will be done, otherwise specify
# top level device name and the disk will automatically be partitioned with
# one partition using 'parted'. For example, if /dev/xvdh4 is listed
# below it will be used as is, if it does not exist an error will be raised.
# However, if /dev/xvdh is listed it will be automatically partitioned
# and /dev/xvdh1 will be used.
# Minimum of 5 devices or partitions are recommended (see ASM_MIN_DISKS).
# NOTE: adjust the list below so the disk count and device names match your environment.
ALLDISKS="/dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg"
#
# Provide list of ASMLib disks to use. Can be either "diskname" or
# "ORCL:diskname". They must be manually configured in ASMLib by
# mapping them to correct block device (this part is not yet automated).
# If you include any disks here they should also be included
# in RACASMDISKSTRING setting above (discovery string).
ALLDISKS_ASMLIB=""
#
# By default 5 disks for ASM are recommended to provide higher redundancy
# for OCR/Voting files. If for some reason you want to use less
# disks, then uncomment ASM_MIN_DISKS below and set to the new minimum.
# Make needed adjustments in ALLDISKS and/or ALLDISKS_ASMLIB above.
# Default: 5
#ASM_MIN_DISKS=5 ------- NOTE: be sure to change this manually at install time if fewer disks are available.
#
# By default, whole disks specified in ALLDISKS will be partitioned with
# one partition. If you prefer not to partition and use whole disk, set
# PARTITION_WHOLE_DISKS to no. Keep in mind that if at a later time
# someone will repartition the disk, data may be lost. Probably better
# to leave it as "yes" and signal it's used by having a partition created.
# Default: yes
PARTITION_WHOLE_DISKS=yes
#
# By default, disk *names* are assumed to exist with same name on all nodes, i.e
# all nodes will have /dev/xvdc, /dev/xvdd, etc. It doesn't mean that the *ordering*
# is also identical, i.e. xvdc can really be xvdd on the other node.
# If such persistent naming (not ordering) is not the case, i.e node1 has
# xvdc,xvdd but node2 calls them: xvdn,xvdm then PERSISTENT_DISKNAMES should be
# set to NO. In the case where disks are named differently on each node, a
# stamping operation should take place (writing to second sector on disk)
# to verify if all nodes see all disks.
# Stamping only happens on the node the build is running from, and backup
# is taken to $TMPDIR/StampDisk-backup-diskname.dd. Remote nodes read the stamped
# data and if all disks are discovered on all nodes the disk configuration continues.
# Default: yes
PERSISTENT_DISKNAMES=yes
#
# This parameter decides whether disk stamping takes place or not to discover and verify
# that all nodes see all disks. Stamping is the only way to know 100% that the disks
# are actually the same ones on all nodes before installation begins.
# The master node writes a unique uuid to each disk on the second sector of the disk,
# then remote nodes read and discover all disks.
# If you prefer not to stamp the disks, set DISCOVER_VERIFY_REMOTE_DISKS_BY_STAMPING to
# no. However, in that case, PERSISTENT_DISKNAMES must be set to "yes", otherwise, with
# both parameters set to "no" there is no way to calculate the remote disk names.
# The default for stamping is "yes" since in Virtual machine environments, scsi_id(8)
# doesn't return data for disks.
# Default: yes
DISCOVER_VERIFY_REMOTE_DISKS_BY_STAMPING=yes
#
# Permissions and ownership files, EL4 uses PERMISSIONFILE, EL5 uses UDEVFILE
UDEVFILE="/etc/udev/rules.d/99-oracle.rules"
PERMISSIONFILE="/etc/udev/permissions.d/10-oracle.permissions"
#
# Disk permissions to be set on ASM disks use if want to override the below default
# Default: "660" (owner+group: read+write)
# It may be possible in Non-role separation to use "640" (owner: read+write, group: read)
# however, that is not recommended since if a new database OS user
# is added at a later time in the future, it will not be able to write to the disks.
#DISKPERMISSIONS="660"
#
# ASM's minimum allocation unit (au_size) for objects/files/segments/extents of the first
# diskgroup, in some cases increasing to higher values may help performance (at the
# potential of a bit of space wasting). Legal values are 1,2,4,8,16,32 and 64 MB.
# Not recommended to go over 8MB. Currently if initial diskgroup holds OCR/Voting then it's
# maximum possible au_size is 16MB. Do not change unless you understand the topic.
# Most releases default to 1MB (Exadata's default: 4MB)
#RACASM_AU_SIZE=1
#
# Should we align the ASM disks to a 1MB boundary.
# Default: yes
ALIGN_PARTITIONS=yes
#
# Should partitioned disks use the GPT partition table
# which supports devices larger than 2TB.
# Default: msdos
PARTITION_TABLE_GPT=no
#
# These are internal functions that check if a disk/partition is held
# by any component. They are run in parallel on all nodes, but in sequence
# within a node. Do not modify these unless explicitly instructed to by Oracle.
HELDBY_FUNCTIONS=(HeldByRaid HeldByAsmlib HeldByPowerpath HeldByDeviceMapper HeldByUser HeldByFilesystem HeldBySwap)
#
##### STORAGE: Filesystem: DB/RAC: (shared) filesystem
#
# NOTE1: To not configure ASM unset RACASMGROUPNAME
# NOTE2: Not all operations/verification take place in a
# FS configuration.
# For example:
# - The mount points are not automatically created/mounted
# - Best effort verification is done that the correct
# mount options are used.
#
# The filesystem directory to hold Database files (control, logfile, etc.)
# For RAC it must be a shared location (NFS, OCFS or in 12c ACFS),
# otherwise it may be a local filesystem (e.g. ext4).
# For NFS make sure mount options are correct as per docs
# such as Note:359515.1
# Default: None (Single Instance: $RACBASE/oradata)
#FS_DATAFILE_LOCATION=/nfs/160
#
# Should the database be created in the FS location mentioned above.
# If value is unset or set to no, the database is created in ASM.
# Default: no (Single Instance: yes)
#DATABASE_ON_FS=no
#
# Should the above directory be cleared from Clusterware and Database
# files during a 'clean' or 'cleanlocal' operation.
# Default: no
#CLONE_CLEAN_FS_LOCATIONS=no
#
# Names of OCR/VOTE disks, could be in above FS Datafile location
# or a different properly mounted (shared) filesystem location
# Default: None
#CLONE_OCR_DISKS=/nfs/160/ocr1,/nfs/160/ocr2,/nfs/160/ocr3
#CLONE_VOTING_DISKS=/nfs/160/vote1,/nfs/160/vote2,/nfs/160/vote3
#
# Location of OCR/VOTE disks. Value of "yes" means inside ASM
# whereas any other value means the OCR/Voting reside in CFS
# (above locations must be supplied)
# Default: yes
#CLONE_OCRVOTE_IN_ASM=yes
#
# Should addnodes operation COPY the entire Oracle Homes to newly added
# nodes. By default no copy is done to speed up the process, however
# if existing cluster members have changed (patches applied) compared
# to the newly created nodes (using the template), then a copy
# of the Oracle Homes might be desired so that the newly added node will
# get all the latest modifications from the current members.
# Default: no
CLONE_ADDNODES_COPY=no
#
# Should an add node operation fully clean the new node before adding
# it to the cluster. Setting to yes means that any lingering running
# Oracle processes on the new node are killed before the add node is
# started as well as all logs/traces are cleared from that node.
# Default: no
CLONE_CLEAN_ON_ADDNODES=no
#
# Should a remove node operation fully clean the removed node after removing
# it from the cluster. Setting to yes means that any lingering running
# Oracle processes on the removed node are killed after the remove node is
# completed as well as all logs/traces are cleared from that node.
# Default: no
CLONE_CLEAN_ON_REMNODES=no
#
# Should 'cleanlocal' request prompt for confirmation if processes are running
# Note that a global 'clean' will fail if this is set to 'yes' and processes are running
# this is a designed safeguard to protect environment from accidental removal.
# Default: yes
CLONE_CLEAN_CONFIRM_WHEN_RUNNING=yes
#
# Should the recommended oracle-validated or oracle-rdbms-server-*-preinstall
# be checked for existence and dependencies during check step. If any missing
# rpms are found user will need to use up2date or other methods to resolve dependencies
# The RPM may be obtained from Unbreakable Linux Network or
# Default: yes
CLONE_ORACLE_PREREQ_RPM_REQD=yes
#
# Should the "verify" actions of the above RPM be run during buildcluster.
# These adjust kernel parameters. In the VM everything is pre-configured hence
# default is not to run.
# Default: no
CLONE_ORACLE_PREREQ_RPM_RUN=no
#
# By default after clusterware installation CVU (Cluster Verification Utility)
# is executed to make sure all is well. Setting to 'yes' will skip this step.
# Set CLONE_SKIP_CVU_POSTHAS for SIHA (Oracle Restart) environments
# Default: no
#CLONE_SKIP_CVU_POSTCRS=no
#
# Allows to skip minimum disk space checks on the
# Oracle Homes (recommended not to skip)
# Default: no
CLONE_SKIP_DISKSPACE_CHECKS=no
#
# Allows to skip minimum memory checks (recommended not to skip)
# Default: no
CLONE_SKIP_MEMORYCHECKS=no
#
# On systems with extreme memory limitations, e.g. VirtualBox, it may be needed
# to disable some Clusterware components to release some memory. Workload
# Management, Cluster Health Monitor & Cluster Verification Utility are
# disabled if this option is set to yes.
# This is only supported for production usage with Clusterware only installation.
# Default: no
CLONE_LOW_MEMORY_CONFIG=no
#
# By default on systems with less than 4GB of RAM the /dev/shm will
# automatically resize to fit the specified configuration (ASM, DB).
# This is done because the default of 50% of RAM may not be enough. To
# disable this functionality set CLONE_TMPFS_SHM_RESIZE_NEVER=yes.
# Default: no
CLONE_TMPFS_SHM_RESIZE_NEVER=no
#
# To disable the modification of /etc/fstab with the calculated size of
# /dev/shm, set CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=no. This may mean that
# some instances may not properly start following a system reboot.
# Default: yes
CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=yes
#
# Setting CLONE_CLUSTERWARE_ONLY to yes allows Clusterware only installation
# any operation to create a database or reference the DB home are ignored.
# Default: no
#CLONE_CLUSTERWARE_ONLY=no
#
# As described in the 11.2.0.2 README as well as Note:1212703.1 multicasting
# is required to run Oracle RAC starting with 11.2.0.2. If this check fails
# review the note, and remove any firewall rules from Dom0, or re-configure
# the switch servicing the private network to allow multicasting from all
# nodes to all nodes.
# Default: yes
CLONE_MULTICAST_CHECK=yes
#
# Should a multicast check failure cause the build to stop. It's possible to
# perform the multicast check, but not stop on failures.
# Default: yes
CLONE_MULTICAST_STOP_ON_FAILURE=yes
#
# List of multicast addresses to check. By default 11.2.0.2 supports
# only 230.0.1.0, however with fix for bug 9974223 or bundle 1 and higher
# the software also supports multicast address 244.0.0.251. If future
# software releases will support more addresses, modify this list as needed.
# Default: "230.0.1.0 224.0.0.251"
CLONE_MULTICAST_ADDRESSLIST="230.0.1.0 224.0.0.251"
#
# The text specified in the NETCONFIG_RESOLVCONF_OPTIONS variable is written to
# the "options" field in the /etc/resolv.conf file during initial network setup.
# This variable can be set here in params.ini, or in netconfig.ini having the same
# effect. It should be a space separated options as described in "man 5 resolv.conf"
# under the "options" heading. Some useful options are:
# "single-request-reopen attempts:x timeout:x" x being a digit value.
# The 'single-request-reopen' option may be helpful in some environments if
# in-bound ssh slowness occur.
# Note that minimal validation takes place to verify the options are correct.
# Default: ""
#NETCONFIG_RESOLVCONF_OPTIONS=""
#
##################################################
#
# The second section below holds basic parameters
#
##################################################
#
# Configures a Single Instance environment, including a database as
# specified in BUILD_SI_DATABASE. In this mode, no Clusterware or ASM will be
# configured, hence all related parameters (e.g. ALLDISKS) are not relevant.
# The database must reside on a filesystem.
# This parameter may be placed in netconfig.ini for simpler deployment.
# Default: no
#CLONE_SINGLEINSTANCE=no
#
# Configures a Single Instance/HA environment, aka Oracle Restart, including
# a database as specified in BUILD_SI_DATABASE. The database may reside in
# ASM (if RACASMGROUPNAME is defined), or on a filesystem.
# This parameter may be placed in netconfig.ini for simpler deployment.
# Default: no
#CLONE_SINGLEINSTANCE_HA=no
#
# OS USERS AND GROUPS FOR ORACLE SOFTWARE
#
# SYNTAX for user/group are either (VAR denotes the variable names below):
# VAR=username:uid OR: VAR=username
# VARID=uid
# VAR=groupname:gid OR: VAR=groupname
# VARID=gid
#
# If uid/gid are omitted no checks are made nor users created if need be.
# If uid/gid are supplied they should be numeric and not clash
# with existing uid/gids defined on the system already.
# NOTE: In RAC usernames and uid/gid must match on all cluster nodes,
# the verification process enforces that only if uid/gid's
# are given below.
#
# If incorrect configuration is detected, changes to users and groups are made to
# correct them. If this is set to "no" then errors are reported
# without an attempt to fix them.
# (Users/groups are never dropped, only added or modified.)
# Default: yes
CREATE_MODIFY_USERS_GROUPS=yes
#
# NON-ROLE SEPARATED: ------------ the default: no role separation
# No Grid user is defined and all roles are set to 'dba'
RACOWNER=oracle:1101
OINSTALLGROUP=oinstall:1000
GIOSASM=dba:1031
GIOSDBA=dba:1031
GIOSOPER=dba:1031
DBOSDBA=dba:1031
DBOSOPER=dba:1031
#
# ROLE SEPARATION: (uncomment lines below) --- role separation; uncomment manually to enable
# See Note:1092213.1
# (Numeric changes made to uid/gid to reduce the footprint and possible clashes
# with existing users/groups)
#
##GRIDOWNER=grid:1100
##RACOWNER=oracle:1101
##OINSTALLGROUP=oinstall:1000
##GIOSASM=asmadmin:1020
##GIOSDBA=asmdba:1021
##GIOSOPER=asmoper:1022
##DBOSDBA=dba:1031
##DBOSOPER=oper:1032
#
# The name for the Grid home in the inventory
# Default: OraGrid11gR2
#GIHOMENAME="OraGrid11gR2"
#
# The name for the DB/RAC home in the inventory
# Default: OraRAC11gR2 (Single Instance: OraDB11gR2)
#DBHOMENAME="OraRAC11gR2"
#
# The name of the ASM diskgroup, default "DATA"
# If unset ASM will not be configured (see filesystem section above)
# Default: DATA
RACASMGROUPNAME="DATA"
#
# The ASM Redundancy for the diskgroup above
# Valid values are EXTERNAL, NORMAL or HIGH
# Default: NORMAL (if unset)
RACASMREDUNDANCY="EXTERNAL"
#
# Allows running the Clusterware with a different timezone than the system's timezone.
# If CLONE_CLUSTERWARE_TIMEZONE is not set, the Clusterware Timezone will
# be set to the system's timezone of the node running the build. System timezone is
# defined in /etc/sysconfig/clock (ZONE variable), if not defined or file missing
# comparison of /etc/localtime file is made against the system's timezone database in
# /usr/share/zoneinfo, if no match or /etc/localtime is missing GMT is used. If you
# want to override the above logic, simply set CLONE_CLUSTERWARE_TIMEZONE to desired
# timezone. Note that a complete timezone is needed, e.g. "PST" or "EDT" is not enough
# needs to be full timezone spec, e.g. "PST8PDT" or "America/New_York".
# This variable is only honored in 11.2.0.2 or above
# Default: OS
#CLONE_CLUSTERWARE_TIMEZONE="America/Los_Angeles"
#
# Create an ACFS volume?
# Default: no
ACFS_CREATE_FILESYSTEM=no
#
# If ACFS volume is to be created, this is the mount point.
# It will automatically get created on all nodes.
# Default: /myacfs
ACFS_MOUNTPOINT="/myacfs"
#
# Name of ACFS volume to optionally create.
# Default: MYACFS
ACFS_VOLNAME="MYACFS"
#
# Size of ACFS volume in GigaBytes.
# Default: 3
ACFS_VOLSIZE_GB="3"
#
# NOTE: In the OVM3 enhanced RAC Templates when using deploycluster
# tool (outside of the VMs). The correct and secure way to transfer/set the
# passwords is to remove them from this file and use the -P (--params)
# flag to transfer this params.ini during deploy operation, in which
# case the passwords will be prompted, and sent to all VMs in a secure way.
# The password that will be set for the ASM and RAC databases
# as well as EM DB Console and the oracle OS user.
# If not defined here they will be prompted for (only once)
# at the start of the build. Required to be set here or environment
# for silent mode.
# Use single quote to prevent shell parsing of special characters.
RACPASSWORD='oracle'
GRIDPASSWORD='oracle'
#
# Password for 'root' user. If not defined here it will be prompted
# for (only once) at the start of the build.
# Assumed to be same on both nodes and required to be set here or
# environment for silent mode.
# Use single quote to prevent shell parsing of special characters.
ROOTUSERPASSWORD='ovsroot'
# The value above is the default root password; be sure to change it afterwards or set it beforehand.
# Build Database? The BUILD_RAC_DATABASE will build a RAC database and
# BUILD_SI_DATABASE a single instance database (also in a RAC environment)
# Default: yes
BUILD_RAC_DATABASE=yes
#BUILD_SI_DATABASE=yes
#
# Allows for database and listener to be started automatically at next
# system boot. This option is only applicable in Single Instance mode.
# In Single Instance/HA or RAC mode, the Clusterware starts up all
# resources (listener, ASM, databases).
# Default: yes
CLONE_SI_DATABASE_AUTOSTART=yes
#
# Comma separated list of name value pairs for database initialization parameters
# Use with care, no validation takes place.
# For example: "sort_area_size=99999,control_file_record_keep_time=99"
# Default: none
#DBCA_INITORA_PARAMETERS=""
#
# Should a Policy Managed database be created taking into account the
# options below. If set to 'no' an Admin Managed database is created.
# Default: no
DBCA_DATABASE_POLICY=no
#
# Create Server Pools (Policy Managed database).
# Default: yes
CLONE_CREATE_SERVERPOOLS=yes
#
# Recreate Server Pools; if already exist (Policy Managed database).
# Default: no
CLONE_RECREATE_SERVERPOOLS=no
#
# List of server pools to create (Policy Managed database).
# Syntax is poolname:category:min:max
# All except name can be omitted. Category can be Hub or Leaf (12c only).
# Default: mypool
CLONE_SERVERPOOLS="mypool"
#
# List of Server Pools to be used by the created database (Policy Managed database).
# The server pools listed in DBCA_SERVERPOOLS must appear in CLONE_SERVERPOOLS
# (and CLONE_CREATE_SERVERPOOLS set to yes), OR must be manually pre-created for
# the create database to succeed.
# Default: mypool
DBCA_SERVERPOOLS="mypool"
#
# Database character set.
# Default: WE8MSWIN1252 (previous default was AL32UTF8)
# DATABASE_CHARACTERSET="WE8MSWIN1252"
#
# Use this DBCA template name, file must exist under $DBHOME/assistants/dbca/templates
# Default: "General_Purpose.dbc"
DBCA_TEMPLATE_NAME="General_Purpose.dbc"
#
# Should the database include the sample schema
# Default: no
DBCA_SAMPLE_SCHEMA=no
#
# Certain patches applied to the Oracle home require execution of some SQL post
# database creation for the fix to be applied completely. These files are located
# under patches/postsql subdirectory. It is possible to run them serially (adds
# to overall build time), or in the background which is the default.
# Note that when running in background these scripts may run a little longer after
# the RAC Cluster + Database are finished building, however that should not cause
# any issues. If overall build time is not a concern change this to NO and have
# the scripts run as part of the actual build in serial.
# Default: yes
DBCA_POST_SQL_BG=yes
#
# An optional user custom SQL may be executed post database creation, default name of
# script is user_custom_postsql.sql, it is located under patches/postsql subdirectory.
# Default: user_custom_postsql.sql
DBCA_POST_SQL_CUSTOM=user_custom_postsql.sql
#
# The Database Name
# Default: ORCL
DBNAME="ORCL"
#
# The Instance name, may be different than database name. Limited in length of
# 1 to 8 for a RAC DB & 1 to 12 for Single Instance DB of alphanumeric characters.
# Ignored for Policy Managed DB.
# Default: ORCL
SIDNAME="ORCL"
#
# Configure EM DB Console
# Default: no
CONFIGURE_DBCONSOLE=no
#
# Enable HA (high availability) for EM DB Console by starting up
# a dbconsole instance on each node of the cluster, so that if one
# is down, others can service the requests, default: No
# Default: no
DBCONSOLE_HA=no
#
# DB Console port number. If left at the default, a free port will be assigned at
# runtime, otherwise the port should be unused on all network adapters.
# Default: 1158
#DBCONTROL_HTTP_PORT=1158
#
# SCAN (Single Client Access Name) port number
# Default: 1521
SCANPORT=1521
#
# Local Listener port number
# Default: 1521
LISTENERPORT=1521
#
# Allows color coding of log messages, errors (red), warning (yellow),
# info (green). By default no colors are used.
# Default: NO
CLONE_LOGWITH_COLORS=no
#
# END OF FILE
#
[root@ovmm utils]#
4.3.5 Starting the Installation
Once the files above are configured, run the following command to start the installation:
[root@ovmm deploycluster]# ./deploycluster.py -u admin -p Oracle123 -H localhost -M RAC-1,RAC-2 -P utils/params-my.ini -N utils/netconfig-my.ini
Monitoring the installation
The installation progress can be monitored on the target nodes (a sketch follows):
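Per the LOGFILE default in params.ini above, one way to follow the build from the node driving it (here RAC-1, per netconfig-my.ini) is simply:

[root@RAC-1 ~]# tail -f /tmp/progress-racovm.out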
4.3.6 Log Output
The full log is reproduced here so that the detailed installation process can be studied.
4.3.7 Verifying the Installation
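Assuming the defaults used in params.ini above (GIHOME=/u01/app/11.2.0/grid), a quick sanity check from either node is the standard 11gR2 clusterware resource listing:

[root@RAC-1 ~]# /u01/app/11.2.0/grid/bin/crsctl stat res -t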
5 Cloning Templates
Before cloning a template, the source systems to be made into templates, such as database servers and middleware servers, should be built in advance.
5.1 Template Clone
Here the already-installed database VM DB-Template-11gR2-OL6u5 is used as the example; the detailed steps are as follows:
Then click OK to finish creating the template.
5.1.1 Template Clone Notes:
1. When cloning a template, be sure to remove the CD-ROM drive or change the boot order so that Disk boots first; otherwise servers cloned from the template will still prompt to run the installer.
2. Before cloning, clear the source VM's network identity to prevent IP conflicts (one common approach is sketched below).
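On an OL6 guest, one common way to scrub the network identity before templating is shown below; the paths are the stock OL6 locations, and this is a sketch rather than the only method:

[root@template ~]# rm -f /etc/udev/rules.d/70-persistent-net.rules
[root@template ~]# sed -i '/^HWADDR/d;/^UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0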