ORACLE EXADATA Upgrade from 11.2.3.1.0 to 11.2.3.3.0 - (4) Upgrading the Storage Cells
Before upgrading the storage cell image, the environment must be checked. The compute node is used as the main control point for these operations.
1. Verify root SSH user equivalence across all cell nodes
[root@gxx2db01 tmp]# dcli -g all_group -l root date
gxx2db01: Sat Sep 6 12:14:41 CST 2014
gxx2db02: Sat Sep 6 12:14:40 CST 2014
gxx2cel01: Sat Sep 6 12:14:41 CST 2014
gxx2cel02: Sat Sep 6 12:14:41 CST 2014
gxx2cel03: Sat Sep 6 12:14:41 CST 2014
[root@gxx2db01 tmp]# dcli -g cell_group -l root 'hostname -i'
gxx2cel01: 10.100.84.104
gxx2cel02: 10.100.84.105
gxx2cel03: 10.100.84.106
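Besides proving SSH equivalence, the `dcli ... date` run above doubles as a clock sanity check: if the nodes drift more than a second or two apart, NTP should be fixed before patching. A minimal sketch of that check, with the dcli output hard-coded as a sample (a real run would capture `dcli -g all_group -l root 'date +%s'` instead):

```shell
#!/bin/sh
# Hypothetical helper: report the worst clock skew in seconds across nodes.
# Expected input format: "<host>: <epoch_seconds>", one line per node.
max_skew() {
  awk '{ t = $2; if (min == "" || t < min) min = t; if (t > max) max = t }
       END { print max - min }'
}

# Sample dcli output, captured with "date +%s" instead of plain "date":
sample='gxx2db01: 1409976881
gxx2db02: 1409976880
gxx2cel01: 1409976881
gxx2cel02: 1409976881
gxx2cel03: 1409976881'

echo "$sample" | max_skew   # prints 1 (one second of skew in this sample)
```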
2. Check the disk group attribute disk_repair_time
[grid@gxx2db02 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.3.0 Production on Sat Sep 6 12:20:14 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> select dg.name,a.value from v$asm_diskgroup dg, v$asm_attribute a where dg.group_number=a.group_number and a.name='disk_repair_time';
NAME VALUE
------- -----
DATA_GXX2 3.6h
DBFS_DG 3.6h
RECO_GXX2 3.6h
The current value is 3.6 hours. It is raised here so that the cells do not drop the grid disks once the default 3.6 hours elapse during the upgrade; if the grid disks were dropped, they would have to be added back to the disk groups manually after the upgrade completes. Set it to 24 hours for now.
SQL> alter diskgroup DATA_GXX2 set attribute 'disk_repair_time'='24h';
Diskgroup altered.
SQL> alter diskgroup DBFS_DG set attribute 'disk_repair_time'='24h';
Diskgroup altered.
SQL> alter diskgroup RECO_GXX2 set attribute 'disk_repair_time'='24h';
Diskgroup altered.
SQL> select dg.name,a.value from v$asm_diskgroup dg, v$asm_attribute a
where dg.group_number=a.group_number and a.name='disk_repair_time';
NAME VALUE
------- -----
DATA_GXX2 24h
DBFS_DG 24h
RECO_GXX2 24h
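The rule behind this step is simply that disk_repair_time must exceed the longest time any one cell may stay offline during patching. A small sketch of that comparison, with the attribute values taken from the query above and the planned per-cell outage window (4 hours) being an assumption for illustration:

```shell
#!/bin/sh
# Hypothetical check: does disk_repair_time cover the planned cell outage?
# Accepts values in the "3.6h" / "24h" form shown by v$asm_attribute.
repair_secs() {
  echo "$1" | awk '{ sub(/h$/, ""); printf "%d\n", $0 * 3600 }'
}

planned_outage_secs=$((4 * 3600))   # assumed: up to 4 hours per cell

for v in 3.6h 24h; do
  if [ "$(repair_secs "$v")" -lt "$planned_outage_secs" ]; then
    echo "$v: TOO SHORT - grid disks would be dropped"
  else
    echo "$v: ok"
  fi
done
```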
3. Check the operating system kernel version
[root@gxx2db01 tmp]# dcli -g all_group -l root 'uname -a'
gxx2db01: Linux gxx2db01.gx.csg.cn 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
gxx2db02: Linux gxx2db02.gx.csg.cn 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
gxx2cel01: Linux gxx2cel01.gx.csg.cn 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
gxx2cel02: Linux gxx2cel02.gx.csg.cn 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
gxx2cel03: Linux gxx2cel03.gx.csg.cn 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
4. Check the operating system version
[root@gxx2db01 tmp]# dcli -g all_group -l root 'cat /etc/oracle-release'
gxx2db01: Oracle Linux Server release 5.7
gxx2db02: Oracle Linux Server release 5.7
gxx2cel01: Oracle Linux Server release 5.7
gxx2cel02: Oracle Linux Server release 5.7
gxx2cel03: Oracle Linux Server release 5.7
5. Check the image version
[root@gxx2db01 tmp]# dcli -g all_group -l root 'imageinfo'
gxx2db01:
gxx2db01: Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
gxx2db01: Image version: 11.2.3.1.0.120304
gxx2db01: Image activated: 2002-05-03 22:47:44 +0800
gxx2db01: Image status: success
gxx2db01: System partition on device: /dev/mapper/VGExaDb-LVDbSys1
gxx2db01:
gxx2db02:
gxx2db02: Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
gxx2db02: Image version: 11.2.3.1.0.120304
gxx2db02: Image activated: 2012-05-03 11:29:41 +0800
gxx2db02: Image status: success
gxx2db02: System partition on device: /dev/mapper/VGExaDb-LVDbSys1
gxx2db02:
gxx2cel01:
gxx2cel01: Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
gxx2cel01: Cell version: OSS_11.2.3.1.0_LINUX.X64_120304
gxx2cel01: Cell rpm version: cell-11.2.3.1.0_LINUX.X64_120304-1
gxx2cel01:
gxx2cel01: Active image version: 11.2.3.1.0.120304
gxx2cel01: Active image activated: 2012-05-03 03:00:13 -0700
gxx2cel01: Active image status: success
gxx2cel01: Active system partition on device: /dev/md6
gxx2cel01: Active software partition on device: /dev/md8
gxx2cel01:
gxx2cel01: In partition rollback: Impossible
gxx2cel01:
gxx2cel01: Cell boot usb partition: /dev/sdm1
gxx2cel01: Cell boot usb version: 11.2.3.1.0.120304
gxx2cel01:
gxx2cel01: Inactive image version: 11.2.2.3.5.110815
gxx2cel01: Inactive image activated: 2011-10-19 16:15:42 -0700
gxx2cel01: Inactive image status: success
gxx2cel01: Inactive system partition on device: /dev/md5
gxx2cel01: Inactive software partition on device: /dev/md7
gxx2cel01:
gxx2cel01: Boot area has rollback archive for the version: 11.2.2.3.5.110815
gxx2cel01: Rollback to the inactive partitions: Possible
gxx2cel02:
gxx2cel02: Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
gxx2cel02: Cell version: OSS_11.2.3.1.0_LINUX.X64_120304
gxx2cel02: Cell rpm version: cell-11.2.3.1.0_LINUX.X64_120304-1
gxx2cel02:
gxx2cel02: Active image version: 11.2.3.1.0.120304
gxx2cel02: Active image activated: 2012-05-03 02:59:52 -0700
gxx2cel02: Active image status: success
gxx2cel02: Active system partition on device: /dev/md6
gxx2cel02: Active software partition on device: /dev/md8
gxx2cel02:
gxx2cel02: In partition rollback: Impossible
gxx2cel02:
gxx2cel02: Cell boot usb partition: /dev/sdm1
gxx2cel02: Cell boot usb version: 11.2.3.1.0.120304
gxx2cel02:
gxx2cel02: Inactive image version: 11.2.2.3.5.110815
gxx2cel02: Inactive image activated: 2011-10-19 16:26:30 -0700
gxx2cel02: Inactive image status: success
gxx2cel02: Inactive system partition on device: /dev/md5
gxx2cel02: Inactive software partition on device: /dev/md7
gxx2cel02:
gxx2cel02: Boot area has rollback archive for the version: 11.2.2.3.5.110815
gxx2cel02: Rollback to the inactive partitions: Possible
gxx2cel03:
gxx2cel03: Kernel version: 2.6.18-274.18.1.0.1.el5 #1 SMP Thu Feb 9 19:07:16 EST 2012 x86_64
gxx2cel03: Cell version: OSS_11.2.3.1.0_LINUX.X64_120304
gxx2cel03: Cell rpm version: cell-11.2.3.1.0_LINUX.X64_120304-1
gxx2cel03:
gxx2cel03: Active image version: 11.2.3.1.0.120304
gxx2cel03: Active image activated: 2012-05-03 02:58:38 -0700
gxx2cel03: Active image status: success
gxx2cel03: Active system partition on device: /dev/md6
gxx2cel03: Active software partition on device: /dev/md8
gxx2cel03:
gxx2cel03: In partition rollback: Impossible
gxx2cel03:
gxx2cel03: Cell boot usb partition: /dev/sdm1
gxx2cel03: Cell boot usb version: 11.2.3.1.0.120304
gxx2cel03:
gxx2cel03: Inactive image version: 11.2.2.3.5.110815
gxx2cel03: Inactive image activated: 2011-10-19 16:26:59 -0700
gxx2cel03: Inactive image status: success
gxx2cel03: Inactive system partition on device: /dev/md5
gxx2cel03: Inactive software partition on device: /dev/md7
gxx2cel03:
gxx2cel03: Boot area has rollback archive for the version: 11.2.2.3.5.110815
gxx2cel03: Rollback to the inactive partitions: Possible
[root@gxx2db01 tmp]# dcli -g all_group -l root 'imagehistory'
gxx2db01: Version : 11.2.3.1.0.120304
gxx2db01: Image activation date : 2002-05-03 22:47:44 +0800
gxx2db01: Imaging mode : fresh
gxx2db01: Imaging status : success
gxx2db01:
gxx2db02: Version : 11.2.3.1.0.120304
gxx2db02: Image activation date : 2012-05-03 11:29:41 +0800
gxx2db02: Imaging mode : fresh
gxx2db02: Imaging status : success
gxx2db02:
gxx2cel01: Version : 11.2.2.3.5.110815
gxx2cel01: Image activation date : 2011-10-19 16:15:42 -0700
gxx2cel01: Imaging mode : fresh
gxx2cel01: Imaging status : success
gxx2cel01:
gxx2cel01: Version : 11.2.3.1.0.120304
gxx2cel01: Image activation date : 2012-05-03 03:00:13 -0700
gxx2cel01: Imaging mode : out of partition upgrade
gxx2cel01: Imaging status : success
gxx2cel01:
gxx2cel02: Version : 11.2.2.3.5.110815
gxx2cel02: Image activation date : 2011-10-19 16:26:30 -0700
gxx2cel02: Imaging mode : fresh
gxx2cel02: Imaging status : success
gxx2cel02:
gxx2cel02: Version : 11.2.3.1.0.120304
gxx2cel02: Image activation date : 2012-05-03 02:59:52 -0700
gxx2cel02: Imaging mode : out of partition upgrade
gxx2cel02: Imaging status : success
gxx2cel02:
gxx2cel03: Version : 11.2.2.3.5.110815
gxx2cel03: Image activation date : 2011-10-19 16:26:59 -0700
gxx2cel03: Imaging mode : fresh
gxx2cel03: Imaging status : success
gxx2cel03:
gxx2cel03: Version : 11.2.3.1.0.120304
gxx2cel03: Image activation date : 2012-05-03 02:58:38 -0700
gxx2cel03: Imaging mode : out of partition upgrade
gxx2cel03: Imaging status : success
gxx2cel03:
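When saving this output for the change record, it helps to reduce the long imageinfo listing to one line per cell: active version plus status. A sketch of that extraction, with a trimmed sample of the output above hard-coded:

```shell
#!/bin/sh
# Hypothetical sketch: pull the active image version and status per cell
# out of saved "dcli -g cell_group -l root imageinfo" output.
check_cells() {
  awk -F': *' '
    /: Active image version:/ { ver[$1] = $3 }
    /: Active image status:/  { st[$1]  = $3 }
    END { for (c in ver) printf "%s %s %s\n", c, ver[c], st[c] }' "$@"
}

# Trimmed sample of the transcript above:
sample='gxx2cel01: Active image version: 11.2.3.1.0.120304
gxx2cel01: Active image status: success
gxx2cel02: Active image version: 11.2.3.1.0.120304
gxx2cel02: Active image status: success'

echo "$sample" | check_cells | sort
```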
6. Check the ofa package version
[root@gxx2db01 tmp]# dcli -g all_group -l root 'rpm -qa | grep ofa'
gxx2db01: ofa-2.6.18-274.18.1.0.1.el5-1.5.1-4.0.58
gxx2db02: ofa-2.6.18-274.18.1.0.1.el5-1.5.1-4.0.58
gxx2cel01: ofa-2.6.18-274.18.1.0.1.el5-1.5.1-4.0.58
gxx2cel02: ofa-2.6.18-274.18.1.0.1.el5-1.5.1-4.0.58
gxx2cel03: ofa-2.6.18-274.18.1.0.1.el5-1.5.1-4.0.58
7. Check the hardware model
[root@gxx2db01 tmp]# dcli -g all_group -l root 'dmidecode -s system-product-name'
gxx2db01: SUN FIRE X4170 M2 SERVER
gxx2db02: SUN FIRE X4170 M2 SERVER
gxx2cel01: SUN FIRE X4270 M2 SERVER
gxx2cel02: SUN FIRE X4270 M2 SERVER
gxx2cel03: SUN FIRE X4270 M2 SERVER
8. Check the cells' alert logs
gxx2cel01: 36 2014-08-29T08:54:27+08:00 info "This is a test trap"
gxx2cel02: 40_1 2014-08-28T20:01:24+08:00 warning "Oracle Exadata Storage Server failed to auto-create cell disk and grid disks on the newly inserted physical disk. Physical Disk : 20:4 Status : normal Manufacturer : SEAGATE Model Number : ST360057SSUN600G Size : 600G Serial Number : E4CK7V Firmware : 0B25 Slot Number : 4 "
gxx2cel02: 41 2014-08-29T08:54:04+08:00 info "This is a test trap"
gxx2cel03: 27_3 2014-08-13T18:28:11+08:00 clear "Hard disk replaced. Status : NORMAL Manufacturer : HITACHI Model Number : HUS1560SCSUN600G Size : 600G Serial Number : K7UL6N Firmware : A700 Slot Number : 11 Cell Disk : CD_11_gxx2cel03 Grid Disk : DATA_GXX2_CD_11_gxx2cel03, RECO_GXX2_CD_11_gxx2cel03, DBFS_DG_CD_11_gxx2cel03"
gxx2cel03: 28 2014-08-29T08:54:43+08:00 info "This is a test trap"
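The alert history above (presumably collected with `cellcli -e list alerthistory` through dcli; the command line itself is not shown in the transcript) mixes "info" test traps with real hardware events. Only warning and critical entries need attention before patching; a small filter sketch over a trimmed, hard-coded sample:

```shell
#!/bin/sh
# Hypothetical filter: keep only warning/critical alerts from saved
# alert-history output; "info" test traps are noise for this check.
# Field layout: <host:> <id> <timestamp> <severity> "<message>"
sample='gxx2cel01: 36 2014-08-29T08:54:27+08:00 info "This is a test trap"
gxx2cel02: 40_1 2014-08-28T20:01:24+08:00 warning "Failed to auto-create cell disk and grid disks on physical disk 20:4"
gxx2cel03: 28 2014-08-29T08:54:43+08:00 info "This is a test trap"'

echo "$sample" | awk '$4 == "warning" || $4 == "critical"'
```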
9. Check for offline grid disks
[root@gxx2db01 tmp]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name where asmdeactivationoutcome != 'Yes'"
10. Verify that each cell's network configuration is consistent with cell.conf
[root@gxx2db01 tmp]# dcli -g cell_group -l root /opt/oracle.cellos/ipconf -verify
gxx2cel01: Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf
gxx2cel01: Done. Configuration file /opt/oracle.cellos/cell.conf passed all verification checks
gxx2cel02: Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf
gxx2cel02: Done. Configuration file /opt/oracle.cellos/cell.conf passed all verification checks
gxx2cel03: Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf
gxx2cel03: Done. Configuration file /opt/oracle.cellos/cell.conf passed all verification checks
11. Stop CRS and the cell services
[root@gxx2db01 tmp]# dcli -g dbs_group -l root "/u01/app/11.2.0.3/grid/bin/crsctl stop crs -f"
[root@gxx2db01 tmp]# dcli -g dbs_group -l root "ps -ef | grep d.bin"
[root@gxx2db01 tmp]# dcli -g cell_group -l root "cellcli -e alter cell shutdown services all"
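The `ps -ef | grep d.bin` step matters because a non-rolling patch requires Grid Infrastructure to be fully down; any surviving `*.bin` daemon means `crsctl stop crs -f` did not complete. A sketch of that check over a hard-coded sample of ps output (the process entries are hypothetical):

```shell
#!/bin/sh
# Hypothetical check: no Grid Infrastructure daemons may survive
# "crsctl stop crs -f" on any database node before non-rolling patching.
sample='gxx2db01: root 4321 1 0 12:00 ? 00:00:01 /bin/bash /etc/init.d/somescript
gxx2db02: grid 5678 1 0 12:00 ? 00:00:09 /u01/app/11.2.0.3/grid/bin/ocssd.bin'

leftover=$(echo "$sample" | grep 'grid/bin/.*\.bin')
if [ -n "$leftover" ]; then
  echo "clusterware still running:"
  echo "$leftover"
else
  echo "clusterware fully stopped"
fi
```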
12. Unpack the patch media and the plugins
[root@gxx2db01 ExaImage]# unzip p16278923_112330_Linux-x86-64.zip
[root@gxx2db01 ExaImage]# unzip -d patch_11.2.3.3.0.131014.1/plugins/ p17938410_112330_Linux-x86-64.zip -x Readme.txt
[root@gxx2db01 ExaImage]# chmod +x patch_11.2.3.3.0.131014.1/plugins/*
13. Clean up the environment left by previous patchmgr runs
[root@gxx2db01 patch_11.2.3.3.0.131014.1]# ./patchmgr -cells /tmp/cell_group -reset_force
2014-09-06 13:48:44 +0800 DONE: reset_force
[root@gxx2db01 patch_11.2.3.3.0.131014.1]# ./patchmgr -cells /tmp/cell_group -cleanup
2014-09-06 13:49:51 +0800 DONE: Cleanup
14. Run the pre-patch prerequisite checks
[root@gxx2db01 patch_11.2.3.3.0.131014.1]# ./patchmgr -cells /tmp/cell_group -patch_check_prereq
2014-09-06 14:27:26 +0800 :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
2014-09-06 14:27:27 +0800 :SUCCESS: DONE: Check cells have ssh equivalence for root user.
2014-09-06 14:27:27 +0800 :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
2014-09-06 14:27:49 +0800 :SUCCESS: DONE: Initialize files, check space and state of cell services.
2014-09-06 14:27:49 +0800 :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
2014-09-06 14:28:17 +0800 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2014-09-06 14:28:18 +0800 :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
2014-09-06 14:28:18 +0800 :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
2014-09-06 14:29:01 +0800 :SUCCESS: DONE: Check prerequisites on all cells.
2014-09-06 14:29:01 +0800 :Working: DO: Execute plugin check for Patch Check Prereq ...
2014-09-06 14:29:01 +0800 :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.1. Details in logfile /backup/ExaImage/patch_11.2.3.3.0.131014.1/patchmgr.stdout.
2014-09-06 14:29:01 +0800 :INFO: This plugin checks dbhomes across all nodes with oracle-user ssh equivalence, but only for those known to the local system. dbhomes that exist only on remote nodes must be checked manually.
2014-09-06 14:29:01 +0800 :SUCCESS: No exposure to bug 17854520 with non-rolling patching
2014-09-06 14:29:01 +0800 :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
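The prerequisite run is the gate for the real patch: proceed to `-patch` only if the transcript contains no failed steps. A sketch of that gate over a trimmed, hard-coded excerpt of the transcript above (the "FAILED" marker is an assumption about how patchmgr reports errors; check the actual log wording in your release):

```shell
#!/bin/sh
# Hypothetical gate: run patchmgr -patch only if the prereq transcript
# shows no failures.
log='2014-09-06 14:29:01 +0800 :SUCCESS: DONE: Check prerequisites on all cells.
2014-09-06 14:29:01 +0800 :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.'

if echo "$log" | grep -q 'FAILED'; then
  echo "prereq check failed - fix the reported problem before patching"
else
  echo "prereq check clean - ok to run patchmgr -patch"
fi
```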
15. Patch the storage cells
[root@gxx2db01 patch_11.2.3.3.0.131014.1]# ./patchmgr -cells /tmp/cell_group -patch
NOTE Cells will reboot during the patch or rollback process.
NOTE For non-rolling patch or rollback, ensure all ASM instances using
NOTE the cells are shut down for the duration of the patch or rollback.
NOTE For rolling patch or rollback, ensure all ASM instances using
NOTE the cells are up for the duration of the patch or rollback.
WARNING Do not start more than one instance of patchmgr.
WARNING Do not interrupt the patchmgr session.
WARNING Do not alter state of ASM instances during patch or rollback.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot cells or alter cell services during patch or rollback.
WARNING Do not open log files in editor in write mode or try to alter them.
NOTE All time estimates are approximate. Timestamps on the left are real.
NOTE You may interrupt this patchmgr run in next 60 seconds with control-c.
2014-09-06 14:32:49 +0800 :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
2014-09-06 14:32:50 +0800 :SUCCESS: DONE: Check cells have ssh equivalence for root user.
2014-09-06 14:32:50 +0800 :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
2014-09-06 14:33:32 +0800 :SUCCESS: DONE: Initialize files, check space and state of cell services.
2014-09-06 14:33:32 +0800 :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
2014-09-06 14:34:00 +0800 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2014-09-06 14:34:01 +0800 :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
2014-09-06 14:34:01 +0800 :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
2014-09-06 14:34:43 +0800 :SUCCESS: DONE: Check prerequisites on all cells.
2014-09-06 14:34:43 +0800 :Working: DO: Copy the patch to all cells. Up to 3 minutes ...
2014-09-06 14:35:15 +0800 :SUCCESS: DONE: Copy the patch to all cells.
2014-09-06 14:35:17 +0800 :Working: DO: Execute plugin check for Patch Check Prereq ...
2014-09-06 14:35:17 +0800 :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.1. Details in logfile /backup/ExaImage/patch_11.2.3.3.0.131014.1/patchmgr.stdout.
2014-09-06 14:35:17 +0800 :INFO: This plugin checks dbhomes across all nodes with oracle-user ssh equivalence, but only for those known to the local system. dbhomes that exist only on remote nodes must be checked manually.
2014-09-06 14:35:17 +0800 :SUCCESS: No exposure to bug 17854520 with non-rolling patching
2014-09-06 14:35:18 +0800 :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
2014-09-06 14:35:18 +0800 1 of 5 :Working: DO: Initiate patch on cells. Cells will remain up. Up to 5 minutes ...
2014-09-06 14:35:30 +0800 1 of 5 :SUCCESS: DONE: Initiate patch on cells.
2014-09-06 14:35:30 +0800 2 of 5 :Working: DO: Waiting to finish pre-reboot patch actions. Cells will remain up. Up to 45 minutes ...
2014-09-06 14:36:30 +0800 Wait for patch pre-reboot procedures
2014-09-06 15:03:13 +0800 2 of 5 :SUCCESS: DONE: Waiting to finish pre-reboot patch actions.
2014-09-06 15:03:13 +0800 :Working: DO: Execute plugin check for Patching ...
2014-09-06 15:03:13 +0800 :SUCCESS: DONE: Execute plugin check for Patching.
2014-09-06 15:03:13 +0800 3 of 5 :Working: DO: Finalize patch on cells. Cells will reboot. Up to 5 minutes ...
2014-09-06 15:03:33 +0800 3 of 5 :SUCCESS: DONE: Finalize patch on cells.
2014-09-06 15:03:33 +0800 4 of 5 :Working: DO: Wait for cells to reboot and come online. Up to 120 minutes ...
2014-09-06 15:04:33 +0800 Wait for patch finalization and reboot
||||| Minutes left 076
2014-09-06 16:01:39 +0800 4 of 5 :SUCCESS: DONE: Wait for cells to reboot and come online.
2014-09-06 16:01:39 +0800 5 of 5 :Working: DO: Check the state of patch on cells. Up to 5 minutes ...
2014-09-06 16:02:14 +0800 5 of 5 :SUCCESS: DONE: Check the state of patch on cells.
2014-09-06 16:02:14 +0800 :Working: DO: Execute plugin check for Post Patch ...
2014-09-06 16:02:14 +0800 :INFO: /backup/ExaImage/patch_11.2.3.3.0.131014.1/plugins/001-post_11_2_3_3_0 - 17718598: Correct /etc/oracle-release.
2014-09-06 16:02:14 +0800 :INFO: /backup/ExaImage/patch_11.2.3.3.0.131014.1/plugins/001-post_11_2_3_3_0 - 17908298: Preserve password quality policies where applicable.
2014-09-06 16:02:15 +0800 :SUCCESS: DONE: Execute plugin check for Post Patch.
Once the patch script is running, patchmgr prints a stream of Working and SUCCESS messages; if any step reports a failure, the upgrade stops there and the problem must be resolved before continuing. The cells reboot automatically during patching, which appears on the compute node as "SUCCESS: DONE: Wait for cells to reboot and come online." A full run usually takes more than an hour and a half. Afterwards, check the image version to confirm the upgrade succeeded. Because the upgrade is driven from the compute node, the network connection must stay up for the entire run, so it is best to launch it from a VNC session rather than a plain terminal that could drop unexpectedly.
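The post-patch version check mentioned above can be scripted against saved `imageinfo` output. A sketch with the target version and the sample output both being assumptions (11.2.3.3.0.131014.1 is inferred from the patch directory name; confirm the exact string from a real imageinfo run):

```shell
#!/bin/sh
# Hypothetical post-patch verification: every cell must report the target
# image version after the upgrade.
target=11.2.3.3.0.131014.1
sample='gxx2cel01: Active image version: 11.2.3.3.0.131014.1
gxx2cel02: Active image version: 11.2.3.3.0.131014.1
gxx2cel03: Active image version: 11.2.3.3.0.131014.1'

bad=$(echo "$sample" | grep -cv "Active image version: $target")
[ "$bad" -eq 0 ] && echo "all cells on $target" || echo "$bad cell(s) not upgraded"
```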
(This article is reposted from the technical sharing column of the 新炬網路 official website.)
From the "ITPUB blog"; link: http://blog.itpub.net/29960155/viewspace-1396208/. If you repost, please credit the source; otherwise legal responsibility will be pursued.