Streams Summary
On the source database side of a Streams replication environment, archived log cleanup requires care: not every archived log can be deleted at will. If you delete an archived log that the capture process still needs to read, the capture process will start normally and show ENABLED status, yet data will no longer replicate.
Two capture-process attributes matter here. REQUIRED_CHECKPOINT_SCN is the lowest SCN the capture process must scan from when it restarts; it identifies which archived logs must be retained for the capture process. The second is the capture parameter _CHECKPOINT_FREQUENCY, which specifies how much redo, in MB, LogMiner mines between checkpoints. Oracle recommends 500, meaning LogMiner takes a checkpoint after processing 500 MB of redo. Each checkpoint advances REQUIRED_CHECKPOINT_SCN, so tuning _CHECKPOINT_FREQUENCY controls how much archived redo has to be kept. The parameter can be changed with the DBMS_CAPTURE_ADM.SET_PARAMETER procedure.
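As a sketch (assuming a capture process named capture_a, the name used in the test below), the checkpoint frequency could be set to the recommended 500 MB like this; note that SET_PARAMETER takes the value as a string:

```sql
-- Sketch: set the LogMiner checkpoint frequency to the recommended 500 MB.
-- 'capture_a' is the capture process name from this test; adjust as needed.
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture_a',
    parameter    => '_CHECKPOINT_FREQUENCY',
    value        => '500');
END;
/
```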
Here is a test case.
Use REQUIRED_CHECKPOINT_SCN to determine which archived logs must be retained:
SQL> conn strmadmin/strmadmin
Connected.
SQL> COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
SQL> COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
SQL> COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
SQL> COLUMN NAME HEADING 'Required|Archived Redo Log|File Name' FORMAT A40
SQL> SELECT r.CONSUMER_NAME,
2 r.SOURCE_DATABASE,
3 r.SEQUENCE#,
4 r.NAME
5 FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
6 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME AND
7 r.NEXT_SCN >= c.REQUIRED_CHECKPOINT_SCN;
Capture Required
Process Source Sequence Archived Redo Log
Name Database Number File Name
--------------- ---------- -------- ----------------------------------------
CAPTURE_A SOURCE 100 /arch/1_100_714317792.dbf
CAPTURE_A SOURCE 101 /arch/1_101_714317792.dbf
CAPTURE_A SOURCE 102 /arch/1_102_714317792.dbf
CAPTURE_A SOURCE 103 /arch/1_103_714317792.dbf
CAPTURE_A SOURCE 104 /arch/1_104_714317792.dbf
The output shows that on restart the capture process needs the logs from sequence 100 onward. Now stop the capture process, rename /arch/1_100_714317792.dbf, then restart the capture process and observe what happens.
SQL> exec dbms_capture_adm.stop_capture('capture_a');
PL/SQL procedure successfully completed.
SQL> host
[oracle@primary arch]$ cd /arch
[oracle@primary arch]$ ls
1_100_714317792.dbf 1_51_714317792.dbf 1_60_714317792.dbf 1_69_714317792.dbf 1_78_714317792.dbf 1_87_714317792.dbf 1_96_714317792.dbf
1_101_714317792.dbf 1_52_714317792.dbf 1_61_714317792.dbf 1_70_714317792.dbf 1_79_714317792.dbf 1_88_714317792.dbf 1_97_714317792.dbf
1_102_714317792.dbf 1_53_714317792.dbf 1_62_714317792.dbf 1_71_714317792.dbf 1_80_714317792.dbf 1_89_714317792.dbf 1_98_714317792.dbf
1_103_714317792.dbf 1_54_714317792.dbf 1_63_714317792.dbf 1_72_714317792.dbf 1_81_714317792.dbf 1_90_714317792.dbf 1_99_714317792.dbf
1_104_714317792.dbf 1_55_714317792.dbf 1_64_714317792.dbf 1_73_714317792.dbf 1_82_714317792.dbf 1_91_714317792.dbf
1_47_714317792.dbf 1_56_714317792.dbf 1_65_714317792.dbf 1_74_714317792.dbf 1_83_714317792.dbf 1_92_714317792.dbf
1_48_714317792.dbf 1_57_714317792.dbf 1_66_714317792.dbf 1_75_714317792.dbf 1_84_714317792.dbf 1_93_714317792.dbf
1_49_714317792.dbf 1_58_714317792.dbf 1_67_714317792.dbf 1_76_714317792.dbf 1_85_714317792.dbf 1_94_714317792.dbf
1_50_714317792.dbf 1_59_714317792.dbf 1_68_714317792.dbf 1_77_714317792.dbf 1_86_714317792.dbf 1_95_714317792.dbf
[oracle@primary arch]$ mv 1_100_714317792.dbf 1_100_714317792.dbfdfdfdfd
[oracle@primary arch]$ ls
1_100_714317792.dbfdfdfdfd 1_52_714317792.dbf 1_62_714317792.dbf 1_72_714317792.dbf 1_82_714317792.dbf 1_92_714317792.dbf
1_101_714317792.dbf 1_53_714317792.dbf 1_63_714317792.dbf 1_73_714317792.dbf 1_83_714317792.dbf 1_93_714317792.dbf
1_102_714317792.dbf 1_54_714317792.dbf 1_64_714317792.dbf 1_74_714317792.dbf 1_84_714317792.dbf 1_94_714317792.dbf
1_103_714317792.dbf 1_55_714317792.dbf 1_65_714317792.dbf 1_75_714317792.dbf 1_85_714317792.dbf 1_95_714317792.dbf
1_104_714317792.dbf 1_56_714317792.dbf 1_66_714317792.dbf 1_76_714317792.dbf 1_86_714317792.dbf 1_96_714317792.dbf
1_47_714317792.dbf 1_57_714317792.dbf 1_67_714317792.dbf 1_77_714317792.dbf 1_87_714317792.dbf 1_97_714317792.dbf
1_48_714317792.dbf 1_58_714317792.dbf 1_68_714317792.dbf 1_78_714317792.dbf 1_88_714317792.dbf 1_98_714317792.dbf
1_49_714317792.dbf 1_59_714317792.dbf 1_69_714317792.dbf 1_79_714317792.dbf 1_89_714317792.dbf 1_99_714317792.dbf
1_50_714317792.dbf 1_60_714317792.dbf 1_70_714317792.dbf 1_80_714317792.dbf 1_90_714317792.dbf
1_51_714317792.dbf 1_61_714317792.dbf 1_71_714317792.dbf 1_81_714317792.dbf 1_91_714317792.dbf
Start the capture process:
[oracle@primary arch]$ sqlplus /nolog
SQL*Plus: Release 10.2.0.3.0 - Production on Wed Mar 31 14:46:39 2010
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
SQL> conn strmadmin/strmadmin
Connected.
SQL> exec dbms_capture_adm.start_capture('capture_a');
PL/SQL procedure successfully completed.
SQL>
After restarting the capture process, the alert log shows:
Streams CAPTURE C001 started with pid=23, OS id=7950
Wed Mar 31 14:46:55 2010
LOGMINER: Parameters summary for session# = 84
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 3000M
The capture process starts, but LogMiner never begins working normally.
SQL> SET LINESIZE 200
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15
COLUMN ERROR_MESSAGE HEADING 'Capture|Process|E_MESSAGE' FORMAT A20
SQL> SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, NEGATIVE_RULE_SET_NAME, STATUS, ERROR_NUMBER, ERROR_MESSAGE
  2  FROM DBA_CAPTURE;
Capture Capture Capture Capture
Process Process Positive Negative Process Process
Name Queue Rule Set Rule Set Status ERROR_NUMBER E_MESSAGE
--------------- --------------- --------------- --------------- --------------- ------------ --------------------
CAPTURE_A Q_CAP_QUEUE RULESET$_60 ENABLED
SQL> COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
SQL> COLUMN STATE HEADING 'State' FORMAT A27
SQL> COLUMN STATE_CHANGED HEADING 'State|Change Time'
SQL> COLUMN CREATE_MESSAGE HEADING 'Last Message|Create Time'
SQL> SELECT CAPTURE_NAME,
2 STATE,
3 TO_CHAR(STATE_CHANGED_TIME, 'HH24:MI:SS MM/DD/YY') STATE_CHANGED,
4 TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_MESSAGE
5 FROM V$STREAMS_CAPTURE;
Capture State Last Message
Name State Change Time Create Time
--------------- --------------------------- ----------------- -----------------
CAPTURE_A DICTIONARY INITIALIZATION 14:49:52 03/31/10
SQL>
After the restart the capture process is ENABLED, but it remains stuck in DICTIONARY INITIALIZATION and cannot synchronize data.
Now rename the archived log file back to its original name:
[oracle@primary arch]$ ls
1_100_714317792.dbfdfdfdfd 1_52_714317792.dbf 1_62_714317792.dbf 1_72_714317792.dbf 1_82_714317792.dbf 1_92_714317792.dbf
1_101_714317792.dbf 1_53_714317792.dbf 1_63_714317792.dbf 1_73_714317792.dbf 1_83_714317792.dbf 1_93_714317792.dbf
1_102_714317792.dbf 1_54_714317792.dbf 1_64_714317792.dbf 1_74_714317792.dbf 1_84_714317792.dbf 1_94_714317792.dbf
1_103_714317792.dbf 1_55_714317792.dbf 1_65_714317792.dbf 1_75_714317792.dbf 1_85_714317792.dbf 1_95_714317792.dbf
1_104_714317792.dbf 1_56_714317792.dbf 1_66_714317792.dbf 1_76_714317792.dbf 1_86_714317792.dbf 1_96_714317792.dbf
1_47_714317792.dbf 1_57_714317792.dbf 1_67_714317792.dbf 1_77_714317792.dbf 1_87_714317792.dbf 1_97_714317792.dbf
1_48_714317792.dbf 1_58_714317792.dbf 1_68_714317792.dbf 1_78_714317792.dbf 1_88_714317792.dbf 1_98_714317792.dbf
1_49_714317792.dbf 1_59_714317792.dbf 1_69_714317792.dbf 1_79_714317792.dbf 1_89_714317792.dbf 1_99_714317792.dbf
1_50_714317792.dbf 1_60_714317792.dbf 1_70_714317792.dbf 1_80_714317792.dbf 1_90_714317792.dbf
1_51_714317792.dbf 1_61_714317792.dbf 1_71_714317792.dbf 1_81_714317792.dbf 1_91_714317792.dbf
[oracle@primary arch]$ mv 1_100_714317792.dbfdfdfdfd 1_100_714317792.dbf
[oracle@primary arch]$
The alert log after the rename:
LOGMINER: Parameters summary for session# = 84
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 3000M
LOGMINER: session# = 84, reader process P000 started with pid=13 OS id=7972
LOGMINER: session# = 84, preparer process P002 started with pid=25 OS id=7976
LOGMINER: session# = 84, builder process P001 started with pid=24 OS id=7974
Wed Mar 31 14:52:41 2010
LOGMINER: Begin mining logfile: /arch/1_100_714317792.dbf
Wed Mar 31 14:52:54 2010
LOGMINER: End mining logfile: /arch/1_100_714317792.dbf
Wed Mar 31 14:52:54 2010
LOGMINER: Begin mining logfile: /arch/1_101_714317792.dbf
The capture process has resumed normal operation. Now check its status again:
SQL> conn strmadmin/strmadmin
Connected.
SQL> SET LINESIZE 200
SQL> COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
SQL> COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
SQL> COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
SQL> COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
SQL> COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15
SQL> COLUMN ERROR_MESSAGE HEADING 'Capture|Process|E_MESSAGE' FORMAT A20
SQL> SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, NEGATIVE_RULE_SET_NAME, STATUS,ERROR_NUMBER,ERROR_MESSAGE
2 FROM DBA_CAPTURE;
Capture Capture Capture Capture
Process Process Positive Negative Process Process
Name Queue Rule Set Rule Set Status ERROR_NUMBER E_MESSAGE
--------------- --------------- --------------- --------------- --------------- ------------ --------------------
CAPTURE_A Q_CAP_QUEUE RULESET$_60 ENABLED
SQL> COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
SQL> COLUMN STATE HEADING 'State' FORMAT A27
SQL> COLUMN STATE_CHANGED HEADING 'State|Change Time'
SQL> COLUMN CREATE_MESSAGE HEADING 'Last Message|Create Time'
SQL> SELECT CAPTURE_NAME,
2 STATE,
3 TO_CHAR(STATE_CHANGED_TIME, 'HH24:MI:SS MM/DD/YY') STATE_CHANGED,
4 TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_MESSAGE
5 FROM V$STREAMS_CAPTURE;
Capture State Last Message
Name State Change Time Create Time
--------------- --------------------------- ----------------- -----------------
CAPTURE_A CAPTURING CHANGES 14:55:05 03/31/10 14:55:03 03/31/10
The capture process is healthy again: its state has changed to CAPTURING CHANGES and data synchronization has resumed.
The lesson for archived log management in a Streams environment: always retain the archived logs the capture process needs for restart, or the capture process will be unable to resume.
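Conversely, the same join can list the registered logs that are safe to purge. A sketch, assuming the PURGEABLE column of DBA_REGISTERED_ARCHIVED_LOG is available in this release; logs whose NEXT_SCN falls below REQUIRED_CHECKPOINT_SCN are no longer needed on restart:

```sql
-- Sketch: archived logs the capture process no longer needs at restart,
-- i.e. candidates for deletion (the complement of the retention query above).
SELECT r.CONSUMER_NAME, r.SEQUENCE#, r.NAME, r.PURGEABLE
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME
   AND r.NEXT_SCN < c.REQUIRED_CHECKPOINT_SCN;
```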
Source: "ITPUB blog", link: http://blog.itpub.net/25462274/viewspace-1975347/. Please credit the source when reposting; otherwise legal liability may be pursued.