VMware + OCFS2 + RAC
1. Use VMware to create two virtual machines, each with two network cards, and install Red Hat AS4U2 (this version is easy to download for free, so pick it) together with the development tools.
2. Install VMware Tools on each VM.
From the VMware menu, click VM -> Install VMware Tools.
[root@rac1 ~]# cd /mnt
[root@rac1 mnt]# mkdir cdrom
[root@rac1 mnt]# mount /dev/cdrom /mnt/cdrom
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@rac1 cdrom]# cd /tmp
[root@rac1 tmp]# tar zxvf /mnt/cdrom/VMwareTools-6.0.1-55017.tar.gz
[root@rac1 tmp]# cd vmware-tools-distrib/
[root@rac1 vmware-tools-distrib]# ./vmware-install.pl
3. Configure the two nodes.
Check the kernel version (the OCFS2 kernel-module RPM installed in step 9 must match this release exactly):
[root@rac1 ~]# uname -a
Linux rac1 2.6.9-22.EL #1 Mon Sep 19 18:20:28 EDT 2005 i686 i686 i386 GNU/Linux
Configure three IP addresses (public, VIP, private) for each node in /etc/hosts (a quick connectivity check follows the list):
192.168.1.11 rac1
192.168.1.12 rac1_vip
10.10.10.1 rac1_priv
192.168.1.21 rac2
192.168.1.22 rac2_vip
10.10.10.2 rac2_priv
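A quick check that name resolution and both networks work; the VIP addresses will not answer yet, because Oracle Clusterware brings them up later, so only the public and private names are pinged (repeat from rac2 against rac1 and rac1_priv):
[root@rac1 ~]# ping -c 2 rac2
[root@rac1 ~]# ping -c 2 rac2_priv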
Ensure these packages have been successfully installed:
[root@rac1 ~]# rpm -q binutils compat-db control-center gcc gcc-c++ glibc gnome- libs libstdc++ libstdc++-devel make openmotif21
binutils-2.15.92.0.2-15
compat-db-4.1.25-9
control-center-2.8.0-12.rhel4.2
gcc-3.4.4-2
gcc-c++-3.4.4-2
glibc-2.3.4-2.13
package gnome- is not installed
package libs is not installed
libstdc++-3.4.4-2
libstdc++-devel-3.4.4-2
make-3.80-5
openmotif21-2.1.30-11.RHEL4.4
(The package name gnome-libs was split by a line wrap in the command above, which is why rpm reports "gnome-" and "libs" separately; verify it with rpm -q gnome-libs.)
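If any of these report as not installed, add them from the RHEL AS4 media before continuing; a sketch, with an illustrative package file name:
[root@rac1 ~]# mount /dev/cdrom /mnt/cdrom
[root@rac1 ~]# rpm -ivh /mnt/cdrom/RedHat/RPMS/openmotif21-*.rpm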
4. Edit the .vmx file of each virtual machine and add the parameter disk.locking = "FALSE".
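A minimal sketch of the .vmx fragment (edit it with the VM powered off; disk.locking is from this guide, while the data-cache line is a common companion setting for VMware shared disks and is an assumption here, not from the original):
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"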
5. Shut down both VMs. On the first, add a new hard disk; on the second, add an existing hard disk and point it at the disk file just created on the first VM.
[root@rac1 tmp]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 25 200781 83 Linux
/dev/sda2 26 156 1052257+ 82 Linux swap
/dev/sda3 157 2610 19711755 83 Linux
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
[root@rac1 tmp]# fdisk /dev/sdb
The number of cylinders for this disk is set to 2610.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): +4096M
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (500-2610, default 500):
Using default value 500
Last cylinder or +size or +sizeM or +sizeK (500-2610, default 2610): +10240M
Command (m for help): p
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 499 4008186 83 Linux
/dev/sdb2 500 1745 10008495 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
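After writing the partition table on rac1, make sure rac2 sees the same layout; a reboot works, or the table can be re-read in place (assuming the shared disk also appears as /dev/sdb on rac2):
[root@rac2 ~]# partprobe /dev/sdb
[root@rac2 ~]# fdisk -l /dev/sdb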
6.Create groups and users on both nodes:
[root@rac1 ~]# /usr/sbin/groupadd oinstall
[root@rac1 ~]# /usr/sbin/groupadd dba
[root@rac1 ~]# /usr/sbin/useradd -m -g oinstall -G dba oracle
[root@rac1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rac1 ~]# id oracle
uid=500(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)
[root@rac2 selinux]# /usr/sbin/groupadd -g 501 oinstall
[root@rac2 selinux]# /usr/sbin/groupadd -g 502 dba
[root@rac2 selinux]# /usr/sbin/useradd -m -u 500 -g oinstall -G dba oracle
[root@rac2 selinux]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rac2 selinux]# id oracle
uid=500(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)
Note: the group and user IDs must be identical on both nodes (compare the id oracle output above), or file ownership on the shared OCFS2 volumes will not match.
7. Configure kernel parameters and the oracle user's environment:
[root@rac1 ~]# cat >> /etc/sysctl.conf <<EOF
> kernel.shmmax = 2147483648
> kernel.shmmni = 4096
> kernel.sem = 250 32000 100 128
> fs.file-max = 65536
> net.ipv4.ip_local_port_range = 1024 65000
> net.core.rmem_default=262144
> net.core.wmem_default=262144
> net.core.rmem_max=262144
> net.core.wmem_max=262144
> EOF
[root@rac1 ~]# /sbin/sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_max = 262144
[root@rac1 ~]# cat >> /etc/security/limits.conf <<EOF
> oracle soft nproc 2047
> oracle hard nproc 16384
> oracle soft nofile 1024
> oracle hard nofile 65536
> EOF
[root@rac1 ~]# cat >> /etc/pam.d/login <<EOF
> session required /lib/security/pam_limits.so
> EOF
[root@rac1 ~]# cat >> /etc/profile <<EOF
> if [ \$USER = "oracle" ]; then
> if [ \$SHELL = "/bin/ksh" ]; then
> ulimit -p 16384
> ulimit -n 65536
> else
> ulimit -u 16384 -n 65536
> fi
> umask 022
> fi
> EOF
[root@rac1 ~]# cat >> /etc/csh.login <<EOF
> if ( \$USER == "oracle" ) then
> limit maxproc 16384
> limit descriptors 65536
> umask 022
> endif
> EOF
[root@rac1 ~]# modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
[root@rac1 ~]# cat >> /etc/rc.d/rc.local <<EOF
> modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
> EOF
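To confirm the hangcheck-timer module actually loaded (a quick sanity check, not part of the original steps):
[root@rac1 ~]# lsmod | grep hangcheck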
8. Configure SSH user equivalence.
Generate RSA and DSA keys on both nodes as the oracle user:
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
On rac1 ONLY, as the oracle user (working in ~/.ssh), collect the public keys from both nodes:
ssh rac1 cat /home/oracle/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac1 cat /home/oracle/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat /home/oracle/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat /home/oracle/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp authorized_keys rac2:/home/oracle/.ssh/
chmod 600 ~/.ssh/authorized_keys
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
[oracle@rac1 .ssh]$ ssh rac2 date
Sun May 18 15:19:28 CST 2008
Then log in to rac2 and execute:
chmod 600 ~/.ssh/authorized_keys
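Before launching the installer, it is worth exercising every host name from every node so that no first-time host-key prompt interrupts the OUI later; a minimal loop, run as oracle on each node (the host list is an assumption based on /etc/hosts above):
for host in rac1 rac1_priv rac2 rac2_priv; do ssh $host date; done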
9. Install OCFS2 and configure O2CB on both nodes:
[root@rac2 tmp]# rpm -ivh ocfs2-tools-1.2.7-1.el4.i386.rpm
Preparing... ########################################### [100%]
1:ocfs2-tools ########################################### [100%]
[root@rac2 tmp]# rpm -ivh ocfs2-tools-devel-1.2.7-1.el4.i386.rpm
Preparing... ########################################### [100%]
1:ocfs2-tools-devel ########################################### [100%]
[root@rac2 tmp]# rpm -ivh ocfs2-2.6.9-22.EL-1.2.7-1.el4.i686.rpm
Preparing... ########################################### [100%]
1:ocfs2-2.6.9-22.EL ########################################### [100%]
[root@rac2 tmp]# rpm -ivh ocfs2console-1.2.7-1.el4.i386.rpm
Preparing... ########################################### [100%]
1:ocfs2console ########################################### [100%]
[root@rac1 tmp]# ocfs2console (run on one node only)
Tasks ---> Format
/dev/sdb1 ---> label oraHome
/dev/sdb2 ---> label oraData
Cluster ---> Configure Nodes (add rac1 and rac2)
Cluster ---> Propagate Configuration
Propagating cluster configuration to rac2...
root@rac2's password:
Finished!
[root@rac1 init.d]# ./o2cb configure (run on both nodes)
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep the current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
O2CB cluster ocfs2 already online
[root@rac1 ocfs2]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.1.11
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.21
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
10. Create the mount points on both nodes:
mkdir -p /oracle/orahome
mkdir -p /oracle/oradata
mkdir -p /oracle/crs
chown -R root:oinstall /oracle/crs
chmod 775 /oracle/crs
chown oracle:oinstall /oracle/orahome
chmod 775 /oracle/orahome
chown oracle:oinstall /oracle/oradata
chmod 775 /oracle/oradata
mount -t ocfs2 /dev/sdb1 /oracle/orahome
mount -t ocfs2 -o datavolume,nointr /dev/sdb2 /oracle/oradata
Add these entries to /etc/fstab (datavolume,nointr is required on volumes holding Oracle data files: datavolume makes Oracle open its files with direct-I/O semantics, nointr prevents the I/O from being interrupted by signals, and _netdev delays the mount until networking is up):
/dev/sdb1 /oracle/orahome ocfs2 _netdev 0 0
/dev/sdb2 /oracle/oradata ocfs2 _netdev,datavolume,nointr 0 0
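To bring the cluster stack and these _netdev mounts back automatically after a reboot (assuming the standard o2cb and ocfs2 init scripts shipped with the RPMs; o2cb configure above already arranged the driver load):
[root@rac1 ~]# chkconfig o2cb on
[root@rac1 ~]# chkconfig ocfs2 on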
Check the mount results:
[root@rac2 init.d]# mounted.ocfs2 -f
Device FS Nodes
/dev/sdb1 ocfs2 rac1, rac2
/dev/sdb2 ocfs2 rac1, rac2
[root@rac2 init.d]# ./o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Active
11. Install the Oracle software:
[oracle@rac1 ~]$ exec /usr/bin/ssh-agent $SHELL
[oracle@rac1 ~]$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa:
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
[oracle@rac1 ~]$ cd /tmp
[oracle@rac1 tmp]$ cd clusterware/
[oracle@rac1 clusterware]$ ./runInstaller -ignoreSysPrereqs
(-ignoreSysPrereqs skips the installer's operating-system prerequisite check, which would otherwise block the run on an uncertified release.)
......
12. Appendix (problems encountered during the setup process):
1. "Could not start cluster stack. This must be resolved before any OCFS2 filesystem can be mounted."
This can be caused by a mismatch between the OCFS2 module version and the Red Hat kernel; SELinux being enabled is another possible cause.
tail -n100 /var/log/messages:
May 18 12:10:27 rac1 kernel: SELinux: initialized (dev configfs, type configfs), not configured for labeling
May 18 12:10:27 rac1 kernel: audit(1211083827.759:7): avc: denied { mount } for pid=12346 comm="mount" name="/" dev=configfs ino=44504 scontext=root:system_r:initrc_t tcontext=system_u:object_r:unlabeled_t tclass=filesystem
May 18 12:10:30 rac1 dbus: Can't send to audit system: USER_AVC pid=2642 uid=81 loginuid=-1 message=avc: denied { send_msg } for scontext=root:system_r:unconfined_t tcontext=user_u:system_r:initrc_t tclass=dbus
May 18 12:11:05 rac1 last message repeated 7 times
May 18 12:12:10 rac1 last message repeated 13 times
[root@rac1 /]# vi /etc/selinux/config
#SELINUX=enforcing
SELINUX=disabled
[root@rac1 /]# setenforce 0
setenforce: SELinux is disabled
(The config-file change takes effect at the next reboot; setenforce here reports that SELinux is already fully disabled, so there is nothing left to switch off at runtime.)
2. "The cluster stack has been started. It needs to be running for any clustering functionality to happen.
Please run '/etc/init.d/o2cb enable' to have it started upon bootup."
o2cb_ctl: Unable to access cluster service while creating node
Could not add node rac1
[root@rac1 init.d]# ./o2cb enable
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: Failed
Cluster ocfs2 created
o2cb_ctl: Configuration error discovered while populating cluster ocfs2. None of its nodes were considered local.
A node is considered local when its node name in the configuration matches this machine's host name.
Stopping O2CB cluster ocfs2: OK
[root@rac1 ocfs2]# pwd
/etc/ocfs2
[root@rac1 ocfs2]# ls
cluster.conf
[root@rac1 ocfs2]# mv cluster.conf cluster.conf.bak
Moving the stale cluster.conf aside and re-running the node configuration in ocfs2console regenerates a consistent file, after which o2cb starts cleanly.