Oracle system migration case (zw3)
2015-02-12: detailed migration steps for zw3
1. Back up the old billing zone 3 information.
2. Stop the old system (note: disable its scheduled jobs).
3. Start the new system.
4. Update /etc/hosts to match the production network.
5. Change the new system's SCAN VIP and SCAN listener.
6. Stop GRID on the new system.
7. Change the PUBLIC IP and PRIVATE IP at the OS level (note: in this migration the network interfaces and subnets are identical to the old system's, so no clusterware change is needed; change them at the OS level and then update /etc/hosts).
8. Start GRID on the new system and open the database.
9. Run the post-change checks.
10. Re-enable the previously disabled scheduled jobs.
################################################## Old billing database information backup ############################################
Old billing system /etc/hosts:
127.0.0.1 loopback localhost # loopback (lo0) name/address
10.17.248.15 c4ozw3a
10.17.248.16 c4ozw3b
10.19.243.81 tres3
########## Backup Server #################
10.19.205.103 backupsvr4
########## For Oracle 11g RAC Database ##########
10.17.248.115 c4ozw3a-vip
10.17.248.116 c4ozw3b-vip
172.17.248.15 c4ozw3a-priv
172.17.248.16 c4ozw3b-priv
10.17.248.212 c4ozw3-scan
########## End For Oracle 11g RAC Database ######
10.19.243.198 tnimsvr1
::1 localhost loopback
###########NIM Server##########
10.19.205.80 snimsvr1
10.19.243.198 tnimsvr1
10.19.243.199 tnimsvr2
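Step 1 of the plan (backing up the old system's information) can be scripted. A minimal sketch; `BACKUP_DIR` and the file list are assumptions, so extend it with listener.ora, tnsnames.ora, and so on as your site requires:

```shell
# Minimal pre-migration backup sketch (assumed paths; adjust for your site).
BACKUP_DIR=${BACKUP_DIR:-/tmp/zw3_backup}
mkdir -p "$BACKUP_DIR"
STAMP=$(date +%Y%m%d%H%M%S)
cp /etc/hosts "$BACKUP_DIR/hosts.$STAMP"                          # host/IP mappings
ifconfig -a > "$BACKUP_DIR/ifconfig.$STAMP" 2>/dev/null || true   # interface state
echo "backup written to $BACKUP_DIR"
```

Keeping a timestamped copy makes it easy to diff the network state before and after the cutover.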
Old billing zone 3 addresses:
[oracle@c4ozw3a] /oracle> ifconfig -a
en8: flags=1e080863,c0
inet 10.17.248.15 netmask 0xffffff00 broadcast 10.17.248.255
inet 10.17.248.115 netmask 0xffffff00 broadcast 10.17.248.255
inet 10.17.248.212 netmask 0xffffff00 broadcast 10.17.248.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
en9: flags=1e080863,c0
inet 172.17.248.15 netmask 0xffffff00 broadcast 172.17.248.255
inet 169.254.210.192 netmask 0xffff0000 broadcast 169.254.255.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
lo0: flags=e08084b,c0
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1%1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
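For before-and-after comparison, the interface listing above can be reduced to interface/address pairs. A sketch using a small inline sample of the AIX-style output (in practice, pipe the real `ifconfig -a` output in):

```shell
# Extract "interface address" pairs from AIX-style `ifconfig -a` output.
# The sample is inlined from the listing above.
ifconfig_sample='en8: flags=1e080863,c0
        inet 10.17.248.15 netmask 0xffffff00 broadcast 10.17.248.255
        inet 10.17.248.115 netmask 0xffffff00 broadcast 10.17.248.255
en9: flags=1e080863,c0
        inet 172.17.248.15 netmask 0xffffff00 broadcast 172.17.248.255'
pairs=$(printf '%s\n' "$ifconfig_sample" | awk '
    /^[a-z]/     { iface = $1; sub(":$", "", iface) }   # interface header line
    $1 == "inet" { print iface, $2 }                    # address line beneath it
')
printf '%s\n' "$pairs"
```

Running the same extraction against the saved pre-migration output and the new system's output lets a plain `diff` flag any missing or unexpected address.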
Below is an example using the following configuration:
The name of the SCAN is sales-scan.example.com
subnet of the public network is 10.100.10.0
netmask for the public network is 255.255.255.0
name of the public interface is eth1
old IP addresses: 10.100.10.81, 10.100.10.82 & 10.100.10.83
new IP addresses: 10.100.10.121, 10.100.10.122 & 10.100.10.123
Stopping and starting the SCAN VIPs/listeners can be done by the grid user; however, the 'srvctl modify scan' command must be executed by the root user, so it is practical to execute all steps as root.
############################################# SCAN IP change #################################################
Implementation plan:
1. Check the SCAN IP configuration:
[oracle@c4ozw3a] /oracle> $GRID_HOME/bin/srvctl config scan
SCAN name: c4ozw3-scan, Network: 1/10.17.248.0/255.255.255.0/en8
SCAN VIP name: scan1, IP: /c4ozw3-scan/10.17.248.212
[oracle@c4ozw3b] /oracle> $GRID_HOME/bin/srvctl config scan
SCAN name: c4ozw3-scan, Network: 1/10.17.248.0/255.255.255.0/en8
SCAN VIP name: scan1, IP: /c4ozw3-scan/10.17.248.212
2. Update the /etc/hosts file.
3. Stop the SCAN listener and the SCAN VIP resources:
# $GRID_HOME/bin/srvctl stop scan_listener
# $GRID_HOME/bin/srvctl stop scan
# $GRID_HOME/bin/srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running
SCAN VIP scan2 is enabled
SCAN VIP scan2 is not running
SCAN VIP scan3 is enabled
SCAN VIP scan3 is not running
# $GRID_HOME/bin/srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is not running
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is not running
4. Check the SCAN IPs (note: at this point the SCAN VIP resources still show the old IP addresses):
# $GRID_HOME/bin/srvctl config scan
SCAN name: sales-scan, Network: 1/10.100.10.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /sales-scan.example.com/10.100.10.81
SCAN VIP name: scan2, IP: /sales-scan.example.com/10.100.10.82
SCAN VIP name: scan3, IP: /sales-scan.example.com/10.100.10.83
5. Update the SCAN IP. Now tell CRS to update the SCAN VIP resources:
# $GRID_HOME/bin/srvctl modify scan -n c4ozw3-scan
6. To verify that the change was successful, check the SCAN configuration again:
# $GRID_HOME/bin/srvctl config scan
SCAN name: sales-scan, Network: 1/10.100.10.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /sales-scan.example.com/10.100.10.121
SCAN VIP name: scan2, IP: /sales-scan.example.com/10.100.10.122
SCAN VIP name: scan3, IP: /sales-scan.example.com/10.100.10.123
$GRID_HOME/bin/srvctl status scan_listener
7. Start the SCAN and the SCAN listener:
# $GRID_HOME/bin/srvctl start scan
# $GRID_HOME/bin/srvctl start scan_listener
8. Once the SCAN IPs have been updated successfully, update the SCAN listener configuration:
$GRID_HOME/bin/srvctl modify scan_listener -u
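Steps 3 through 8 above can be collected into a single reviewable function. This is a sketch only: it assumes `$GRID_HOME` is set and that it runs as root (required for `modify scan`), and `change_scan_ip` is a hypothetical helper name:

```shell
# Sketch: SCAN IP change sequence from the steps above, wrapped in a
# function so it can be reviewed before execution. Requires root.
change_scan_ip() {
    scan_name=${1:?usage: change_scan_ip <scan_name>}     # e.g. c4ozw3-scan
    "$GRID_HOME/bin/srvctl" stop scan_listener
    "$GRID_HOME/bin/srvctl" stop scan
    "$GRID_HOME/bin/srvctl" modify scan -n "$scan_name"   # re-resolve from /etc/hosts or DNS
    "$GRID_HOME/bin/srvctl" modify scan_listener -u       # sync listeners with the new VIP count
    "$GRID_HOME/bin/srvctl" start scan
    "$GRID_HOME/bin/srvctl" start scan_listener
    "$GRID_HOME/bin/srvctl" config scan                   # verify the new addresses
}
```

Wrapping the sequence in a function keeps the critical-path commands in one place, so the order (stop, modify, start, verify) can be checked against the runbook before anything is executed.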
################################################## PUBLIC IP change #############################################
Implementation plan:
1. Because the old and new machines' PUBLIC IPs belong to the same subnet, no clusterware-level change is needed; all operations are at the OS level.
2. Shut down GRID.
3. The host administrators change the IP addresses.
4. Start GRID.
Fallback plan (if the network interface or subnet changes):
Use the following approach if the interface changes or a different subnet is used (note: all nodes must have clusterware running during this step).
1. Get the old billing zone 3 configuration:
[oracle@c4ozw3a] /oracle> $GRID_HOME/bin/oifcfg getif -global
en8 10.17.248.0 global public
en9 172.17.248.0 global cluster_interconnect
2. Change the new billing zone 3 to the old zone's addresses:
$GRID_HOME/bin/oifcfg delif -global en8/10.17.248.0
$GRID_HOME/bin/oifcfg setif -global en8/10.17.248.0:public
3. Then change the addresses at the OS level.
This step does not require restarting clusterware.
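The fallback above can likewise be wrapped for review. A sketch, assuming clusterware is up on all nodes and `$GRID_HOME` is set; `reregister_public_if` is a hypothetical name:

```shell
# Sketch: re-register the public interface/subnet in the OCR when the
# interface or subnet changes. Run only with clusterware up on all nodes.
reregister_public_if() {
    old_spec=${1:?old <interface>/<subnet>}   # e.g. en8/10.17.248.0
    new_spec=${2:?new <interface>/<subnet>}
    "$GRID_HOME/bin/oifcfg" setif -global "${new_spec}:public"   # add the new registration first
    "$GRID_HOME/bin/oifcfg" delif -global "$old_spec"            # then drop the old one
    "$GRID_HOME/bin/oifcfg" getif -global                        # verify
}
```

Adding the new registration before deleting the old one is a defensive ordering, so the cluster is never left without a registered public interface.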
################################################## VIP change #####################################################
Implementation plan:
Because the old and new machines' PUBLIC IPs belong to the same subnet, no clusterware-level change is needed; all operations are at the OS level.
Fallback plan (if the network interface changes):
1. Collect the current old billing system's VIP information:
[oracle@c4ozw3a] /oracle> srvctl config nodeapps -a
Network exists: 1/10.17.248.0/255.255.255.0/en8, type static
VIP exists: /c4ozw3a-vip/10.17.248.115/10.17.248.0/255.255.255.0/en8, hosting node c4ozw3a
VIP exists: /c4ozw3b-vip/10.17.248.116/10.17.248.0/255.255.255.0/en8, hosting node c4ozw3b
2. Check the VIP status:
crsctl stat res -t
- it should show VIPs are ONLINE
$ ifconfig -a
(netstat -in for HP and ipconfig /all for Windows)
- VIP logical interface is bound to the public network interface
Old billing zone 3 information:
en8: flags=1e080863,c0
inet 10.17.248.15 netmask 0xffffff00 broadcast 10.17.248.255
inet 10.17.248.115 netmask 0xffffff00 broadcast 10.17.248.255
inet 10.17.248.212 netmask 0xffffff00 broadcast 10.17.248.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
3. Stop the nodeapps resources (and dependent ASM/DB resources only if required):
$ srvctl stop instance -d orazw3 -n c4ozw3b
$ srvctl stop vip -n c4ozw3b -f
4. Check the VIP resources; in the normal case they should now be OFFLINE.
Verify the VIP is OFFLINE and its logical interface is no longer bound to the public network interface:
$ crs_stat -t   (or $ crsctl stat res -t for 11gR2)
$ ifconfig -a
(netstat -in on HP-UX, ipconfig /all on Windows)
5. Modify the VIP resource (note: must be executed as root):
# srvctl modify nodeapps -n <node_name> -A <new_vip_address>/<netmask>/<interface>
srvctl modify nodeapps -n c4ozw3a -A 10.17.248.115/255.255.255.0/en8
srvctl modify nodeapps -n c4ozw3b -A 10.17.248.116/255.255.255.0/en8
6. Verify the change:
$ srvctl config nodeapps -n <node_name> -a   (10g and 11gR1)
$ srvctl config nodeapps -a                  (11gR2)
$ srvctl config nodeapps -n c4ozw3a -a
-n option has been deprecated.
Network exists: 1/10.17.248.0/255.255.255.0/en8, type static
VIP exists: /c4ozw3a-vip/10.17.248.115/10.17.248.0/255.255.255.0/en8, hosting node c4ozw3a
% srvctl config nodeapps -n c4ozw3b -a
Network exists: 1/10.17.248.0/255.255.255.0/en8, type static
VIP exists: /c4ozw3b-vip/10.17.248.116/10.17.248.0/255.255.255.0/en8, hosting node c4ozw3b
7. Start the VIP resources. On 11gR2, as the Grid Infrastructure owner:
$ srvctl start vip -n <node_name>
$ srvctl start listener -n <node_name>
$ srvctl start instance -d <db_name> -n <node_name>   (optional)
For example:
$ srvctl start vip -n c4ozw3a
$ srvctl start vip -n c4ozw3b
$ srvctl start listener -n c4ozw3a
$ srvctl start listener -n c4ozw3b
$ srvctl start instance -d orazw3 -n c4ozw3a
$ srvctl start instance -d orazw3 -n c4ozw3b
8. Verify the new VIP is ONLINE and bound to the public network interface:
$ crs_stat -t   (or $ crsctl stat res -t for 11gR2)
$ ifconfig -a
(netstat -in on HP-UX, ipconfig /all on Windows)
9. Repeat the same steps on the other node in the cluster (only the node-specific values need to change).
10. Update the VIP-related LOCAL_LISTENER/REMOTE_LISTENER parameters in listener.ora and tnsnames.ora (if needed).
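Steps 3 through 8 above, applied per node, can be sketched as one helper. Assumptions: the database name orazw3 is taken from this case, `change_node_vip` is a hypothetical name, and `modify nodeapps` must run as root:

```shell
# Sketch: per-node VIP change sequence from the steps above.
change_node_vip() {
    node=${1:?node name}                               # e.g. c4ozw3a
    vip_spec=${2:?<new_vip>/<netmask>/<interface>}
    srvctl stop instance -d orazw3 -n "$node"          # stop the dependent DB instance first
    srvctl stop vip -n "$node" -f
    srvctl modify nodeapps -n "$node" -A "$vip_spec"   # must run as root
    srvctl start vip -n "$node"
    srvctl start listener -n "$node"
    srvctl start instance -d orazw3 -n "$node"
}
# e.g. change_node_vip c4ozw3a 10.17.248.115/255.255.255.0/en8
```

Doing one node at a time this way keeps the other node's instance available throughout the change.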
############################################# PRIVATE IP change ############################################
Case II. Changing the private IP only, without changing the network interface, subnet, or netmask.
Because in this zone 3 migration the private IP node names and subnet are unchanged, no cluster-level change is needed; only the OS level changes.
1. Shut down clusterware.
2. Change the private IP at the OS level.
3. Start clusterware.
4. Check the result:
oifcfg getif
############################################## Post-change checks ###################################################
1. Update the VIP-related LOCAL_LISTENER/REMOTE_LISTENER parameters in listener.ora and tnsnames.ora (if needed).
2. Check the status of the GRID components:
crsctl stat res -t
3. Check that the VIP change succeeded:
srvctl config nodeapps -a
4. Check that the VIP is up on the PUBLIC interface:
ifconfig -a
5. Check that the SCAN VIP change succeeded:
srvctl config scan
6. Check the SCAN listener status:
$GRID_HOME/bin/srvctl status scan_listener
$GRID_HOME/bin/srvctl config scan_listener
7. Re-enable the previously disabled scheduled jobs.
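The checks above can also include an automated comparison of /etc/hosts against the expected mappings. A sketch with inline sample data (point it at the real file in practice; `check_hosts` is a hypothetical helper):

```shell
# Verify that expected "ip hostname" pairs are present in an
# /etc/hosts-style file. Loose match: dots in the IP are not escaped.
check_hosts() {
    hosts_file=$1; shift
    rc=0
    for pair in "$@"; do
        ip=${pair%% *}; host=${pair#* }
        grep -q "^${ip}[[:space:]].*${host}" "$hosts_file" \
            || { echo "MISSING: $pair"; rc=1; }
    done
    return $rc
}

# Demo against an inline sample mirroring this case's entries.
sample=$(mktemp)
printf '%s\n' '10.17.248.115 c4ozw3a-vip' \
              '10.17.248.116 c4ozw3b-vip' \
              '10.17.248.212 c4ozw3-scan' > "$sample"
check_hosts "$sample" '10.17.248.115 c4ozw3a-vip' '10.17.248.212 c4ozw3-scan' \
    && echo "hosts OK"
```

A check like this catches the common cutover mistake of updating the SCAN registration while leaving a stale VIP entry in /etc/hosts.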
From the "ITPUB Blog"; link: http://blog.itpub.net/29446986/viewspace-1453591/. If reposting, please cite the source; otherwise legal responsibility may be pursued.