[zt] Summary of LogMiner usage and troubleshooting
Building on earlier work, and incorporating the problems I encountered in my own use and how I resolved them, here is an updated version of "Understanding and Using the Oracle 8i Analysis Tool LogMiner":
Understanding and Using the Oracle 8i Analysis Tool LogMiner
Oracle LogMiner is a genuinely useful analysis tool that Oracle has shipped since release 8i. With it you can easily extract the concrete contents of Oracle redo log files (including archived logs). In particular, it can reconstruct all DML statements (INSERT, UPDATE, DELETE, and so on) executed against the database, and it can also produce the SQL statements needed to roll those changes back. This makes it especially suitable for debugging, auditing, or backing out a specific transaction.
LogMiner consists of a set of PL/SQL packages and several dynamic views (part of the Oracle 8i built-in packages). It ships as part of the Oracle database and is completely free with the 8i product. Compared with other Oracle built-in tools, however, it is somewhat awkward to use, mainly because it provides no graphical user interface (GUI). This article describes in detail how to install and use it.
I. Uses of LogMiner
The redo log files hold all the data needed for database recovery; they record every change made to the database, that is, all the DML statements executed against it.
Before Oracle 8i, Oracle provided no tool to help a DBA read and interpret the contents of redo log files. When the system ran into trouble, about all an ordinary DBA could do was package up the log files, send them to Oracle Support, and wait quietly for an answer. From 8i onward, however, Oracle supplies a powerful tool for exactly this: LogMiner.
LogMiner can analyze both online and offline (archived) log files, and it can analyze the redo logs of its own database as well as those of another database.
In summary, the main uses of LogMiner are:
1. Tracking database changes: changes can be tracked offline without affecting the performance of the production system.
2. Backing out database changes: specific changes can be rolled back, reducing the need for point-in-time recovery.
3. Tuning and capacity planning: the data in the log files can be analyzed to identify data growth patterns.
II. Installing LogMiner
To install LogMiner, first run the following two scripts:
1. $ORACLE_HOME/rdbms/admin/dbmslm.sql
2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql
Both scripts must be run as the SYS user. The first creates the DBMS_LOGMNR package, which performs the log analysis; the second creates the DBMS_LOGMNR_D package, which builds the data dictionary file.
An installation example on the Windows platform:
SQL> @c:\oracle\ora81\rdbms\admin\dbmslm.sql
Package created.
Grant succeeded.
SQL> @c:\oracle\ora81\rdbms\admin\dbmslmd.sql
Package created.
Package body created.
No errors.
Grant succeeded.
III. Using LogMiner
The following sections describe in detail how to use LogMiner.
1. Creating the data dictionary file
As mentioned above, LogMiner consists of two new PL/SQL built-in packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (created when LogMiner is started with DBMS_LOGMNR.START_LOGMNR):
1. v$logmnr_contents: the results of the log analysis.
2. v$logmnr_dictionary: information about the dictionary files (LogMiner can work with more than one).
3. v$logmnr_parameters: the parameters LogMiner was started with.
4. v$logmnr_logs: the list of log files registered for analysis.
Before analyzing redo log files with LogMiner, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. The dictionary file is optional, but without it the parts of the reconstructed statements that come from the data dictionary (table names, column names, and so on), as well as the values, appear in hexadecimal form and cannot be read directly. For example, the statement:
INSERT INTO dm_dj_swry (rydm, rymc) VALUES (00005, '張三');
would be rendered by LogMiner as something like:
insert into Object#308(col#1, col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65'));
The purpose of building a data dictionary file is to let LogMiner render references to the internal data dictionary as their actual names rather than internal hexadecimal identifiers. The dictionary file is a text file, created with the DBMS_LOGMNR_D package. If the tables in the database being analyzed change, so that the data dictionary changes, the dictionary file must be rebuilt. Similarly, when analyzing the redo logs of a different database, the dictionary file must be regenerated from the database being analyzed.
The dictionary file stores the mapping between tables and object ID numbers; when it is in use, LogMiner relies on this one-to-one correspondence between table names and object IDs. Note, therefore, that if you create a new table and may later want to analyze logs involving it, you must rebuild the dictionary file so that the new object name and object ID are recorded in it.
First, in the init.ora initialization parameter file, specify the location of the dictionary file by adding the UTL_FILE_DIR parameter, whose value is the server directory where the dictionary file will be placed. For example:
UTL_FILE_DIR = (e:\Oracle\logs)
Restart the database so the new parameter takes effect, then create the dictionary file:
SQL> CONNECT SYS
SQL> EXECUTE dbms_logmnr_d.build(
dictionary_filename => 'v816dict.ora',
dictionary_location => 'e:\oracle\logs');
Note: the dictionary_filename parameter specifies the name of the dictionary file; dictionary_location specifies the directory in which it is placed, which must match the value of the UTL_FILE_DIR initialization parameter.
Also note that the dictionary build step may fail with a subscript-outside-of-limit error:
SQL> EXECUTE dbms_logmnr_d.build(dictionary_filename => 'v817dict.ora',dictionary_location => 'd:\oradict');
BEGIN dbms_logmnr_d.build(dictionary_filename => 'v817dict.ora',dictionary_location => 'd:\oradict')
*
ERROR at line 1:
ORA-06532: Subscript outside of limit
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 1
Workaround: in the dbmslmd.sql script, change
TYPE col_desc_array IS VARRAY(513) OF col_description;
to:
TYPE col_desc_array IS VARRAY(713) OF col_description;
Save the file, then run the script again:
SQL> @c:\oracle\ora81\rdbms\admin\dbmslmd.sql
Package created.
Package body created.
No errors.
Grant succeeded.
Then recompile the DBMS_LOGMNR_D package body:
SQL> alter package DBMS_LOGMNR_D compile body;
Package body altered.
Finally, rerun dbms_logmnr_d.build:
SQL> EXECUTE dbms_logmnr_d.build(dictionary_filename => 'v817dict.ora',dictionary_location =>'d:\oradict');
PL/SQL procedure successfully completed.
The dictionary file is now generated normally, and it is the same size as the file produced by the runs that raised the subscript error.
Note also that some documents claim:
If the specified dictionary file name dict.ora already exists, it must be deleted completely (including from the recycle bin) before running this operation, or the procedure will fail.
Practice shows this claim is unfounded: in an actual run the previously generated dictionary file was not deleted beforehand, and the new dictionary file was still generated correctly.
2. Building the list of log files to analyze
Oracle redo logs come in two kinds, online logs and offline (archived) log files; building the analysis list for each is discussed separately below.
(1) Analyzing online redo log files
A. Create the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:\Oracle\oradata\sxf\redo01.log',
Options=>dbms_logmnr.new);
B. Add further log files to the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:\Oracle\oradata\sxf\redo02.log',
Options=>dbms_logmnr.addfile);
(2) Analyzing offline (archived) log files
A. Create the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'E:\Oracle\oradata\sxf\archive\ARCARC09108.001',
Options=>dbms_logmnr.new);
B. Add further log files to the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'E:\Oracle\oradata\sxf\archive\ARCARC09109.001',
Options=>dbms_logmnr.addfile);
How many log files you place in the list is entirely up to you, but it is best to add only one file at a time, finish analyzing it, and then add the next.
Conversely, a log file can be removed from the list by calling dbms_logmnr.add_logfile with the REMOVEFILE option. The following example removes the file e:\Oracle\oradata\sxf\redo02.log added above.
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:\Oracle\oradata\sxf\redo02.log',
Options=>dbms_logmnr.REMOVEFILE);
The dynamic performance view v$logmnr_logs shows which log files are currently registered for analysis.
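A quick way to inspect that list is to query the view directly. A minimal sketch (column names follow the 8i V$LOGMNR_LOGS definition; verify them on your release):

```sql
-- List the log files currently registered for analysis.
SELECT log_id, filename
  FROM v$logmnr_logs
 ORDER BY log_id;
```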
With the list of log files to analyze built, the analysis itself can begin.
3. Running the log analysis with LogMiner
(1) Without restrictions
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName=>'e:\oracle\logs\v816dict.ora');
(2) With restrictions
By setting different parameters of the DBMS_LOGMNR.START_LOGMNR procedure (their meanings are listed in Table 1), the range of log data to analyze can be narrowed. Setting the start-time and end-time parameters restricts the analysis to a single time window. In the following example, only the logs of September 18, 2001 are analyzed:
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => 'e:\oracle\logs\v816dict.ora',
StartTime => to_date('2001-9-18 00:00:00','YYYY-MM-DD HH24:MI:SS'),
EndTime => to_date('2001-9-18 23:59:59','YYYY-MM-DD HH24:MI:SS'));
Note: whether this procedure succeeds depends on the StartTime and EndTime given being within a valid range. In particular, the end time must be less than or equal to the creation time of the archived log; if it is later, the analysis cannot run. When analyzing several archived logs, those logs should preferably be consecutive.
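To pick a valid window, it helps to first check the time range each archived log actually covers. A sketch against the standard v$archived_log view (assuming the database archives its logs; column availability may vary slightly by release):

```sql
-- Show when each archived log begins and when it was completed,
-- so that StartTime/EndTime can be chosen inside a valid range.
SELECT name, first_time, completion_time
  FROM v$archived_log
 ORDER BY first_time;
```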
The analysis range can also be restricted by a start SCN and an end SCN:
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => 'e:\oracle\logs\v816dict.ora',
StartScn => 20,
EndScn => 50);
Table 1: Parameters of the DBMS_LOGMNR.START_LOGMNR procedure
Parameter     Type            Default     Meaning
StartScn      NUMBER          0           Analyze only the portion of the redo with SCN >= StartScn
EndScn        NUMBER          0           Analyze only the portion of the redo with SCN <= EndScn
StartTime     DATE            1988-01-01  Analyze only the portion of the redo with timestamp >= StartTime
EndTime       DATE            2988-01-01  Analyze only the portion of the redo with timestamp <= EndTime
DictFileName  VARCHAR2        ''          Dictionary file containing a snapshot of the database catalog; with it, the analysis output is readable text rather than internal hexadecimal
Options       BINARY_INTEGER  0           Debug flag; rarely used in practice
If the analysis fails with an "invalid month" error, try the following steps:
alter session set nls_date_language='AMERICAN';
alter session set nls_date_format='DD-MON-YYYY HH:MI:SS';
then run the package: exec dbms_logmnr.start_logmnr(dictfilename=>'');
Be sure to name the dictfilename parameter explicitly: the procedure has five defaulted parameters, and an unnamed value would be bound to the first of them.
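Putting the two workarounds together, a start call that is immune to NLS date settings might look like this (file names taken from the examples above; a sketch, not a verified recipe):

```sql
-- Fix the session date language, then pass explicit date masks so the
-- 'invalid month' error cannot arise from NLS settings.
ALTER SESSION SET nls_date_language = 'AMERICAN';
EXECUTE dbms_logmnr.start_logmnr(
  DictFileName => 'e:\oracle\logs\v816dict.ora',
  StartTime    => to_date('2001-09-18 00:00:00','YYYY-MM-DD HH24:MI:SS'),
  EndTime      => to_date('2001-09-18 23:59:59','YYYY-MM-DD HH24:MI:SS'));
```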
4. Viewing the analysis results (v$logmnr_contents)
At this point the contents of the redo log files have been analyzed. The dynamic performance view v$logmnr_contents holds all the information LogMiner extracted.
SELECT sql_redo FROM v$logmnr_contents;
If you only want to know what a particular user did to a particular table, a simple query suffices. The following returns everything the user DB_ZGXT did to the table SB_DJJL:
SQL> SELECT sql_redo FROM v$logmnr_contents WHERE username='DB_ZGXT' AND seg_name='SB_DJJL';
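Symmetrically, the sql_undo column can be queried to obtain the statements needed to back those same changes out (a sketch using the same sample user and table):

```sql
-- Fetch the rollback statements for one user's changes to one table.
SELECT sql_undo
  FROM v$logmnr_contents
 WHERE username = 'DB_ZGXT'
   AND seg_name = 'SB_DJJL';
```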
One point to stress: the results in v$logmnr_contents exist only for the lifetime of the session that ran the procedure dbms_logmnr.start_logmnr. All LogMiner storage is PGA memory, so no other process can see it, and the results disappear when the process ends.
Finally, the DBMS_LOGMNR.END_LOGMNR procedure ends the log analysis; the PGA memory area is cleared and the results cease to exist.
5. Ending the analysis: execute DBMS_LOGMNR.END_LOGMNR.
IV. Other Notes
LogMiner can be used to analyze redo log files produced by other database instances, not only the redo logs of the instance on which LogMiner itself is installed. When analyzing another database instance, a few points need attention:
1. LogMiner must use the dictionary file produced by the database being analyzed, not one produced by the database running LogMiner; in addition, the character set of the LogMiner database must match that of the analyzed database.
2. The platform of the analyzed database must be the same as that of the database running LogMiner. That is, if the files to be analyzed were produced by Oracle 8i running on a UNIX platform, LogMiner must also run on an Oracle instance on UNIX, not, say, on Microsoft NT. The hardware, however, need not be identical.
3. LogMiner can only analyze logs from Oracle 8 and later products (it can analyze Oracle 8 logs); for earlier releases it is of no help. Also, Oracle 8i can analyze only DML operations; from Oracle 9i onward both DML and DDL operations can be analyzed. In Oracle 9i, the following query shows DDL operations and when they occurred:
SQL>select sql_redo
2 from v$logmnr_contents
3 where sql_redo like '%create%' or sql_redo like '%CREATE%';
LogMiner does not support index-organized tables, LONG and LOB columns, or collection types.
LogMiner also cannot be used in an MTS (Multi-Threaded Server) environment.
V. Conclusion
For a database administrator, LogMiner is an extremely powerful tool and one that is often needed in day-to-day work; with it, a great deal of information about database activity can be obtained. One of its most important uses is recovering a specific database change without restoring the whole database. The tool can also be used to monitor or audit user activity: for example, you can use LogMiner to see who modified which data and what that data looked like before the change. It can analyze redo log files produced by Oracle 8 and any later release, and, notably, it can also analyze log files from other databases. In short, LogMiner is a highly effective tool for DBAs; understanding it deeply and mastering it pays off in every DBA's daily work.
Reference:
The Oracle8i LogMiner Utility
PURPOSE
This paper details the mechanics of what LogMiner does, as well as detailing
the commands and environment it uses.
SCOPE & APPLICATION
For DBAs requiring further information about LogMiner.
The ability to provide a readable interface to the redo logs has been asked
for by customers for a long time. The ALTER SYSTEM DUMP LOGFILE interface
has been around for a long time, though its usefulness outside Support is
limited. There have been a number of third party products, e.g. BMC's PATROL
DB-Logmaster (SQL*Trax as was), which provide some functionality in this
area. With Oracle release 8.1 there is a facility in the Oracle kernel to do
the same. LogMiner allows the DBA to audit changes to data and to perform
analysis on the redo to determine trends, aid in capacity planning,
Point-in-time Recovery etc.
RELATED DOCUMENTS
[NOTE:117580.1] ORA-356, ORA-353, & ORA-334 Errors When Mining Logs with
Different DB_BLOCK_SIZE
Oracle8i - 8.1 LogMiner:
=========================
1. WHAT DOES LOGMINER DO?
=========================
LogMiner can be used against online or archived logs from either the
'current' database or a 'foreign' database. The reason for this is that it
uses an external dictionary file to access meta-data, rather than the
'current' data dictionary.
It is important that this dictionary file is kept in step with the database
which is being analyzed. If the dictionary used is out of step from the redo
then analysis will be considerably more difficult. Building the external
dictionary will be discussed in detail in section 3.
LogMiner scans the log/logs it is interested in, and generates, using the
dictionary file meta-data, a set of SQL statements which would have the same
effect on the database as applying the corresponding redo record.
LogMiner prints out the 'Final' SQL that would have gone against the
database. For example:
Insert into Table x Values ( 5 );
Update Table x set COLUMN=newvalue WHERE ROWID='<>'
Delete from Table x WHERE ROWID='<>' AND COLUMN=value AND COLUMN=VALUE
We do not actually see the SQL that was issued, rather an executable SQL
statement that would have the same EFFECT. Since it is also stored in the
same redo record, we also generate the undo column which would be necessary
to roll this change out.
For SQL which rolls back, no undo SQL is generated, and the rollback flag is
set. An insert followed by a rollback therefore looks like:
REDO          UNDO          ROLLBACK
insert sql    delete sql    0
delete sql                  1
Because it operates against the physical redo records, multirow operations
are not recorded in the same manner. For example, DELETE FROM EMP WHERE DEPTNO=30
might delete 100 rows in the SALES department in a single statement; the
corresponding LogMiner output would show one row of output per row deleted in the
database.
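Since all the rows produced by one transaction share the XID columns, grouping on them makes such a multirow statement easy to spot. A sketch (the EMP name is taken from the example above):

```sql
-- Rows sharing (XIDUSN, XIDSLT, XIDSQN) belong to one transaction;
-- counting them shows how many table rows its statements touched.
SELECT xidusn, xidslt, xidsqn, operation, COUNT(*) AS row_count
  FROM v$logmnr_contents
 WHERE seg_name = 'EMP'
 GROUP BY xidusn, xidslt, xidsqn, operation;
```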
2. WHAT IT DOES NOT DO
======================
1. 'Trace' Application SQL - use SQL_Trace/10046
Since LogMiner only generates low-level SQL, not what was issued, you
cannot use LogMiner to see exactly what was being done based on the SQL.
What you can see, is what user changed what data at what time.
2. 'Replicate' an application
LogMiner does not cover everything. In addition, DDL is not supported:
the underlying inserts into tab$ etc. are shown, but the CREATE TABLE
statement itself is not.
3. Access data dictionary SQL In a visible form.
Especially UPDATE USER$ SET PASSWORD=
Other Known Current Limitations
===============================
LogMiner cannot cope with Objects.
LogMiner cannot cope with Chained/Migrated Rows.
LogMiner produces fairly unreadable output if there is no record of the
table in the dictionary file. See below for output.
The database where the analysis is being performed must have a block size
of at least equal to that of the originating database. See [NOTE:117580.1].
3. FUNCTIONALITY
================
The LogMiner feature is made up of three procedures in the LogMiner
(dbms_logmnr) package, and one in the Dictionary (dbms_logmnr_d).
These are built by the following scripts: (Run by catproc)
$ORACLE_HOME/rdbms/admin/dbmslogmnrd.sql
$ORACLE_HOME/rdbms/admin/dbmslogmnr.sql
$ORACLE_HOME/rdbms/admin/prvtlogmnr.plb
since 8.1.6:
$ORACLE_HOME/rdbms/admin/dbmslmd.sql
$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/prvtlm.plb
1. dbms_logmnr_d.build
This procedure builds the dictionary file used by the main LogMiner
package to resolve object names, and column datatypes. It should be
generated relatively frequently, since otherwise newer objects will not
be recorded.
It is possible to generate a dictionary file from an 8.0 database and
use it to analyze Oracle 8.0 redo logs. In order to do this, run
"dbmslogmnrd.sql" against the 8.0 database, then follow the procedure as
below. All analysis of the logfiles will have to take place while
connected to an 8.1 database, since dbms_logmnr cannot operate against
Oracle 8.0 because it uses trusted callouts.
Any redo relating to tables which are not included in the dictionary
file are dumped RAW. Example: If LogMiner cannot resolve the Table and
column references, then the following is output: (insert statement)
insert into UNKNOWN.objn:XXXX(Col[x],....) VALUES
( HEXTORAW('xxxxxx'), HEXTORAW('xxxxx')......)
PARAMETERS
==========
1. The name of the dictionary file you want to produce.
2. The name of the directory where you want the file produced.
The Directory must be writeable by the server i.e. included in
UTL_FILE_DIR path.
EXAMPLE
=======
BEGIN
dbms_logmnr_d.build(
dictionary_filename=> 'miner_dictionary.dic',
dictionary_location => '/export/home/sme81/aholland/testcases/logminer'
);
END;
/
The dbms_logmnr package actually performs the redo analysis.
2. dbms_logmnr.add_logfile
This procedure registers the logfiles to be analyzed in this session. It
must be called once for each logfile. This populates the fixed table
X$logmnr_logs (v$logmnr_logs) with a row corresponding to the logfile.
Parameters
===========
1. The logfile to be analyzed.
2. Option
DBMS_LOGMNR.NEW (SESSION) - first file to be put into PGA memory;
this initialises the v$logmnr_logs table.
DBMS_LOGMNR.ADDFILE - adds another logfile to the v$logmnr_logs PGA
memory; has the same effect as NEW if there are no rows there presently.
DBMS_LOGMNR.REMOVEFILE - removes a row from v$logmnr_logs.
Example
=======
Include all my online logs for analysis.........
BEGIN
dbms_logmnr.add_logfile(
'/export/home/sme81/aholland/database/files/redo03.log',
DBMS_LOGMNR.NEW );
dbms_logmnr.add_logfile(
'/export/home/sme81/aholland/database/files/redo02.log',
DBMS_LOGMNR.ADDFILE );
dbms_logmnr.add_logfile(
'/export/home/sme81/aholland/database/files/redo01.log',
DBMS_LOGMNR.ADDFILE );
END;
/
A full path should be given; an environment variable is accepted,
but it is NOT expanded in V$LOGMNR_LOGS.
3. dbms_logmnr.start_logmnr;
This package populates V$logmnr_dictionary, v$logmnr_parameters,
and v$logmnr_contents.
Parameters
==========
1. StartScn     Default 0
2. EndScn       Default 0
3. StartTime    Default '01-jan-1988'
4. EndTime      Default '01-jan-2988'
5. DictFileName Default ''
6. Options      Default 0   Debug flag - uninvestigated as yet
A point to note here is that there are comparisons made between the
SCNs, the times entered, and the range of values in the file. If the SCN
range OR the start/end range are not wholly contained in this log, then
the start_logmnr command will fail with the general error:
ORA-01280 Fatal LogMiner Error.
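One way to avoid that error is to check the SCN and time range covered by the registered logs before starting. A sketch against v$logmnr_logs (the low/high columns are as in the 8i view; verify the names on your release):

```sql
-- Verify the requested window lies inside the registered logs;
-- otherwise start_logmnr fails with ORA-01280.
SELECT filename, low_time, high_time, low_scn, next_scn
  FROM v$logmnr_logs;
```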
4. dbms_logmnr.end_logmnr;
This is called with no parameters.
/* THIS IS VERY IMPORTANT FOR SUPPORT */
This procedure MUST be called prior to exiting the session that was
performing the analysis. This is because of the way the PGA is used to
store the dictionary definitions from the dictionary file, and the
V$LOGMNR_CONTENTS output.
If you do not call end_logmnr, you will silently get ORA-00600 [723] ...
on logoff. This OERI is triggered because the PGA is bigger at logoff
than it was at logon, which is considered a space leak. The main problem
from a support perspective is that it is silent, i.e. not signalled back
to the user screen, because by then they have logged off.
The way to spot LogMiner leaks is that the trace file produced by the
OERI 723 will have a PGA heap dumped with many chunks of type 'Freeable'
with a description of "KRVD:alh".
4. OUTPUT
=========
Effectively, the output from LogMiner is the contents of V$logmnr_contents.
The output is only visible during the life of the session which runs
start_logmnr. This is because all the LogMiner memory is PGA memory, so it
is neither visible to other sessions, nor is it persistent. As the session
logs off, either dbms_logmnr.end_logmnr is run to clear out the PGA, or an
OERI 723 is signalled as described above.
Typically users are going to want to output sql_redo based on queries by
timestamp, segment_name or rowid.
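A typical query of that shape might look like the following (the predicates are illustrative; adjust them to the case at hand):

```sql
-- Pull the replayable SQL for one segment within a time window.
SELECT timestamp, username, sql_redo
  FROM v$logmnr_contents
 WHERE seg_name = 'EMP'
   AND timestamp BETWEEN to_date('18-sep-2001','DD-MON-YYYY')
                     AND to_date('19-sep-2001','DD-MON-YYYY');
```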
v$logmnr_contents
Name Null? Type
------------------------------- -------- ----
SCN NUMBER
TIMESTAMP DATE
THREAD# NUMBER
LOG_ID NUMBER
XIDUSN NUMBER
XIDSLT NUMBER
XIDSQN NUMBER
RBASQN NUMBER
RBABLK NUMBER
RBABYTE NUMBER
UBAFIL NUMBER
UBABLK NUMBER
UBAREC NUMBER
UBASQN NUMBER
ABS_FILE# NUMBER
REL_FILE# NUMBER
DATA_BLK# NUMBER
DATA_OBJ# NUMBER
DATA_OBJD# NUMBER
SEG_OWNER VARCHAR2(32)
SEG_NAME VARCHAR2(32)
SEG_TYPE VARCHAR2(32)
TABLE_SPACE VARCHAR2(32)
ROW_ID VARCHAR2(19)
SESSION# NUMBER
SERIAL# NUMBER
USERNAME VARCHAR2(32)
ROLLBACK NUMBER
OPERATION VARCHAR2(32)
SQL_REDO VARCHAR2(4000)
SQL_UNDO VARCHAR2(4000)
RS_ID VARCHAR2(32)
SSN NUMBER
CSF NUMBER
INFO VARCHAR2(32)
STATUS NUMBER
PH1_NAME VARCHAR2(32)
PH1_REDO VARCHAR2(4000)
PH1_UNDO VARCHAR2(4000)
PH2_NAME VARCHAR2(32)
PH2_REDO VARCHAR2(4000)
PH2_UNDO VARCHAR2(4000)
PH3_NAME VARCHAR2(32)
PH3_REDO VARCHAR2(4000)
PH3_UNDO VARCHAR2(4000)
PH4_NAME VARCHAR2(32)
PH4_REDO VARCHAR2(4000)
PH4_UNDO VARCHAR2(4000)
PH5_NAME VARCHAR2(32)
PH5_REDO VARCHAR2(4000)
PH5_UNDO VARCHAR2(4000)
SQL> set heading off
SQL> select scn, username, sql_undo from v$logmnr_contents
where seg_name = 'emp';
12134756 scott insert (...) into emp;
12156488 scott delete from emp where empno = ...
12849455 scott update emp set mgr =
This will return the results of an SQL statement without the column
headings. The columns that you are really going to want to query are the
"sql_undo" and "sql_redo" values because they give the transaction details
and syntax.
5. PLACEHOLDERS
===============
In order to allow users to be able to query directly on specific data
values, there are up to five PLACEHOLDERs included at the end of
v$logmnr_contents. When enabled, a user can query on the specific BEFORE and
AFTER values of a specific field, rather than a %LIKE% query against the
SQL_UNDO/REDO fields. This is implemented via an external file called
"logmnr.opt". (See the Supplied Packages manual entry on dbms_logmnr for
further details.) The file must exist in the same directory as the
dictionary file used, and contains the prototype mappings of the PHx fields
to the fields in the table being analyzed.
Example entry
=============
colmap = SCOTT EMP ( EMPNO, 1, ENAME, 2, SAL, 3 );
In the above example, when a redo record is encountered for the SCOTT.EMP
table, the full statement redo and undo information populates the SQL_REDO
and SQL_UNDO columns respectively; in addition, the PH3_NAME, PH3_REDO and
PH3_UNDO columns will be populated with 'SAL', the new value, and the old
value respectively, which means that the analyst can query in the form:
SELECT * FROM V$LOGMNR_CONTENTS
WHERE SEG_NAME ='EMP'
AND PH3_NAME='SAL'
AND PH3_REDO=1000000;
The returned PH3_UNDO column would return the value prior to the update.
This enables much more efficient queries to be run against V$LOGMNR_CONTENTS
view, and if, for instance, a CTAS was issued to store a physical copy, the
column can be indexed.
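That CTAS suggestion can be sketched as follows (the table and index names are illustrative, not from the source):

```sql
-- Persist the PGA-resident results, then index the placeholder columns
-- so that repeated analysis queries avoid re-scanning the view.
CREATE TABLE logmnr_results AS
  SELECT * FROM v$logmnr_contents;

CREATE INDEX logmnr_results_ph3 ON logmnr_results (ph3_name, ph3_redo);
```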
Search Words:
=============
Log Miner
From the ITPUB blog, link: http://blog.itpub.net/35489/viewspace-591038/. Please credit the source when reposting.