Adding a new network and listener in an Oracle RAC environment (unfinished)

Posted by PiscesCanon on 2018-04-06
OS version:
  [oracle@rac2 ~]$ uname -a
  Linux rac2.example.com 2.6.32-431.el6.x86_64 #1 SMP Sun Nov 10 22:19:54 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
  [oracle@rac2 ~]$ lsb_release -a
  LSB Version:    :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
  Distributor ID: RedHatEnterpriseServer
  Description:    Red Hat Enterprise Linux Server release 6.5 (Santiago)
  Release:        6.5
  Codename:       Santiago

Database version:
  SYS@proc2> select * from v$version where rownum=1;

  BANNER
  --------------------------------------------------------------------------------
  Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

Environment description:
  [root@rac2 ~]# ifconfig
  eth0      Link encap:Ethernet  HWaddr 00:0C:29:ED:B0:97
            inet addr:192.168.28.200  Bcast:192.168.28.255  Mask:255.255.255.0
            inet6 addr: fe80::20c:29ff:feed:b097/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:3154 errors:0 dropped:0 overruns:0 frame:0
            TX packets:2360 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:283889 (277.2 KiB)  TX bytes:445528 (435.0 KiB)

  eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:ED:B0:97
            inet addr:192.168.28.222  Bcast:192.168.28.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  eth1      Link encap:Ethernet  HWaddr 00:0C:29:ED:B0:A1
            inet addr:10.0.0.200  Bcast:10.0.0.255  Mask:255.255.255.0
            inet6 addr: fe80::20c:29ff:feed:b0a1/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:92201 errors:0 dropped:0 overruns:0 frame:0
            TX packets:72680 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:66845524 (63.7 MiB)  TX bytes:42997425 (41.0 MiB)

  eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:ED:B0:A1
            inet addr:169.254.227.158  Bcast:169.254.255.255  Mask:255.255.0.0
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  ## newly added public NIC ##
  eth2      Link encap:Ethernet  HWaddr 00:0C:29:ED:B0:AB
            inet addr:20.20.20.200  Bcast:20.20.20.255  Mask:255.255.255.0
            inet6 addr: fe80::20c:29ff:feed:b0ab/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:115 errors:0 dropped:0 overruns:0 frame:0
            TX packets:154 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:10309 (10.0 KiB)  TX bytes:16597 (16.2 KiB)
  [root@rac2 ~]# cat /etc/hosts   ## the 20.20.20.x entries below are the new NIC addresses and the VIPs to be added ##
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

  ##public##
  192.168.28.100 rac1.example.com rac1
  192.168.28.200 rac2.example.com rac2
  20.20.20.100    rac1_2
  20.20.20.200    rac2_2

  ##private##
  10.0.0.100 rac1-priv.example.com rac1-priv
  10.0.0.200 rac2-priv.example.com rac2-priv

  ##vip##
  192.168.28.111 rac1-vip.example.com rac1-vip
  192.168.28.222 rac2-vip.example.com rac2-vip
  20.20.20.111 rac1-vip2.example.com rac1-vip2
  20.20.20.222 rac2-vip2.example.com rac2-vip2

  ##scan##
  192.168.28.233 scan-ip

Procedure:
Following the steps in MetaLink (MOS) note Doc ID 1063571.1 to add a network to the cluster runs into problems. The details are as follows:
  [root@rac2 ~]# srvctl add network -k 2 -S 20.20.20.0/255.255.255.0/eth2
  [root@rac2 ~]# srvctl config network
  Network exists: 1/192.168.28.0/255.255.255.0/eth0, type static
  Network exists: 2/20.20.20.0/255.255.255.0/eth2, type static
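The transcript jumps straight to checking VIP status, yet the output below already shows VIPs 20.20.20.111 and 20.20.20.222 registered on the new network. Per Doc ID 1063571.1 they would presumably have been added with something like the following; this is a reconstruction of the elided step, not a command shown in the original:

  ## hypothetical reconstruction: register one VIP per node on network 2 ##
  [root@rac2 ~]# srvctl add vip -n rac1 -k 2 -A 20.20.20.111/255.255.255.0/eth2
  [root@rac2 ~]# srvctl add vip -n rac2 -k 2 -A 20.20.20.222/255.255.255.0/eth2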

  [root@rac2 ~]# srvctl status vip -n rac1
  VIP 20.20.20.111 is enabled
  VIP 20.20.20.111 is not running
  VIP rac1-vip is enabled
  VIP rac1-vip is running on node: rac1
  [root@rac2 ~]# srvctl status vip -n rac2
  VIP 20.20.20.222 is enabled
  VIP 20.20.20.222 is not running
  VIP rac2-vip is enabled
  VIP rac2-vip is running on node: rac2
  [root@rac2 ~]#
  [root@rac2 ~]# srvctl start vip -n rac1
  PRKO-2420 : VIP is already started on node(s): rac1
  [root@rac2 ~]# srvctl start vip -i 20.20.20.111
  PRKO-2420 : VIP is already started on node(s): rac2
  [oracle@rac2 ~]$ ifconfig eth2:1
  eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:ED:B0:AB
            inet addr:20.20.20.111  Bcast:20.20.20.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
The new VIP for node 1 came up on node 2 instead; relocate it back to node 1:
  [root@rac2 admin]# crsctl stop res ora.rac1-vip2.vip
  CRS-2673: Attempting to stop 'ora.rac1-vip2.vip' on 'rac2'
  CRS-2677: Stop of 'ora.rac1-vip2.vip' on 'rac2' succeeded
  [root@rac2 admin]# crsctl start res ora.rac1-vip2.vip -n rac1
  CRS-2672: Attempting to start 'ora.net2.network' on 'rac1'
  CRS-2676: Start of 'ora.net2.network' on 'rac1' succeeded
  CRS-2672: Attempting to start 'ora.rac1-vip2.vip' on 'rac1'
  CRS-2676: Start of 'ora.rac1-vip2.vip' on 'rac1' succeeded
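As an aside, the stop/start pair above can likely be collapsed into a single command; this is an alternative I am suggesting, not what the original post used:

  ## untested alternative: relocate the resource in one step ##
  [root@rac2 admin]# crsctl relocate resource ora.rac1-vip2.vip -n rac1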

  [root@rac2 admin]# srvctl start vip -n rac2
  PRKO-2420 : VIP is already started on node(s): rac2
  [root@rac2 admin]# ifconfig eth2:1
  eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:ED:B0:AB
            inet addr:20.20.20.222  Bcast:20.20.20.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  [root@rac2 admin]# srvctl config vip -n rac1
  VIP exists: /20.20.20.111/20.20.20.111/20.20.20.0/255.255.255.0/eth2, hosting node rac1
  VIP exists: /rac1-vip/192.168.28.111/192.168.28.0/255.255.255.0/eth0, hosting node rac1
  [root@rac2 admin]# srvctl config vip -n rac2
  VIP exists: /20.20.20.222/20.20.20.222/20.20.20.0/255.255.255.0/eth2, hosting node rac2
  VIP exists: /rac2-vip/192.168.28.222/192.168.28.0/255.255.255.0/eth0, hosting node rac2
  [root@rac2 admin]# srvctl status vip -n rac1
  VIP 20.20.20.111 is enabled
  VIP 20.20.20.111 is running on node: rac1
  VIP rac1-vip is enabled
  VIP rac1-vip is running on node: rac1
  [root@rac2 admin]# srvctl status vip -n rac2
  VIP 20.20.20.222 is enabled
  VIP 20.20.20.222 is running on node: rac2
  VIP rac2-vip is enabled
  VIP rac2-vip is running on node: rac2

  [root@rac2 admin]# crsctl stat res -t
  ---some output omitted---
  --------------------------------------------------------------------------------
  Cluster Resources
  --------------------------------------------------------------------------------
  ora.LISTENER_SCAN1.lsnr
        1        ONLINE  ONLINE       rac2
  ora.cvu
        1        ONLINE  ONLINE       rac2
  ora.oc4j
        1        ONLINE  ONLINE       rac1
  ora.proc.db
        1        ONLINE  ONLINE       rac1         Open
        2        ONLINE  ONLINE       rac2         Open
  ora.rac1-vip2.vip
        1        ONLINE  INTERMEDIATE rac1         FAILED OVER
  ora.rac1.vip
        1        ONLINE  ONLINE       rac1
  ora.rac2-vip2.vip
        1        ONLINE  INTERMEDIATE rac2         FAILED OVER
  ora.rac2.vip
        1        ONLINE  ONLINE       rac2
  ora.scan1.vip
        1        ONLINE  ONLINE       rac2
Even though both new VIP resources show INTERMEDIATE / FAILED OVER above, checking the logs turned up no obvious error messages.
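To dig further, one option is to dump the full attribute set and verbose state of a stuck VIP resource; a minimal sketch, using the resource name from the status output above:

  ## suggested diagnostics: full attributes and verbose state of the new VIP ##
  [root@rac2 admin]# crsctl stat res ora.rac1-vip2.vip -f
  [root@rac2 admin]# crsctl stat res ora.rac1-vip2.vip -v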

  [root@rac2 admin]# srvctl stop vip -n rac1
  PRCR-1014 : Failed to stop resource ora.rac1.vip
  PRCR-1065 : Failed to stop resource ora.rac1.vip
  CRS-2529: Unable to act on 'ora.rac1.vip' because that would require stopping or relocating 'ora.LISTENER.lsnr', but the force option was not specified
  [root@rac2 admin]#
  [root@rac2 admin]# srvctl stop vip -n rac1 -f
  PRCC-1017 : 20.20.20.111 was already stopped on rac1
  PRCR-1005 : Resource ora.rac1-vip2.vip is already stopped
  [root@rac2 admin]# srvctl stop vip -n rac2 -f
  [root@rac2 admin]#
  [root@rac2 admin]# crsctl stop res ora.net2.network
  CRS-2673: Attempting to stop 'ora.net2.network' on 'rac2'
  CRS-2673: Attempting to stop 'ora.net2.network' on 'rac1'
  CRS-2677: Stop of 'ora.net2.network' on 'rac1' succeeded
  CRS-2677: Stop of 'ora.net2.network' on 'rac2' succeeded
  [root@rac2 admin]# srvctl config vip -n rac1
  VIP exists: /20.20.20.111/20.20.20.111/20.20.20.0/255.255.255.0/eth2, hosting node rac1
  VIP exists: /rac1-vip/192.168.28.111/192.168.28.0/255.255.255.0/eth0, hosting node rac1
  [root@rac2 admin]# srvctl remove vip -i 20.20.20.111
  Please confirm that you intend to remove the VIPs 20.20.20.111 (y/[n]) y
  [root@rac2 admin]# srvctl remove vip -i 20.20.20.222
  Please confirm that you intend to remove the VIPs 20.20.20.222 (y/[n]) y
  [root@rac2 admin]# srvctl remove network -k 2
  PRCR-1001 : Resource ora.net2.network does not exist
  [root@rac2 admin]#
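The PRCR-1001 on the final command suggests that removing the last VIP on network 2 also dropped the ora.net2.network resource, so the cleanup is effectively complete. The article stops here, hence the "unfinished" in the title. Had the procedure worked, the remaining steps per Doc ID 1063571.1 would presumably have been to add and start a listener on network 2 and then set the LISTENER_NETWORKS initialization parameter so the instances register with it. The listener name and port below are illustrative assumptions, not commands from the original:

  ## hypothetical remaining steps; LISTENER2 and port 1522 are assumptions ##
  [root@rac2 admin]# srvctl add listener -l LISTENER2 -p 1522 -k 2
  [root@rac2 admin]# srvctl start listener -l LISTENER2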