A case of skyrocketing consistent gets

Posted by westzq1984 on 2010-08-18
I have previously run into a case where DELETEs drove up the logical reads of an INSERT; today I found another situation in which consistent gets skyrocket.

 

session 1 issues the (uncommitted) DELETEs; session 2 runs the ROWID queries with autotrace on. The two-column layout of the original post is flattened in the transcript below.

 

SQL> SELECT /*+rowid(a)*/* FROM ctais2.test a WHERE ROWID=dbms_rowid.rowid_create(1,6388,6,13,2);

 

        ID NAME

---------- ------------------------------

         2 aa

 

 

Statistics

----------------------------------------------------------

          0  recursive calls

          0  db block gets

          1  consistent gets

          0  physical reads

          0  redo size

        455  bytes sent via SQL*Net to client

        503  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

          1  rows processed

SQL> select rowid from test where id=10;

 

ROWID

------------------

AAABj0AAGAAAAANAAK

 

SQL> delete from test where id=1;

 

1 row deleted.

 

 

SQL> set autotrace on stat

SQL> select * from test where rowid='AAABj0AAGAAAAANAAK';

 

        ID NAME

---------- ------------------------------

        10 aa

 

 

Statistics

----------------------------------------------------------

          0  recursive calls

          0  db block gets

          3  consistent gets

          0  physical reads

         52  redo size

        455  bytes sent via SQL*Net to client

        503  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

          1  rows processed

SQL> delete from test where id=2;

 

1 row deleted.

 

 

SQL> /

 

        ID NAME

---------- ------------------------------

        10 aa

 

 

Statistics

----------------------------------------------------------

          0  recursive calls

          0  db block gets

          4  consistent gets

          0  physical reads

         52  redo size

        455  bytes sent via SQL*Net to client

        503  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

          1  rows processed

SQL> delete test;

 

998 rows deleted.

 

 

SQL> /

 

        ID NAME

---------- ------------------------------

        10 aa

 

 

Statistics

----------------------------------------------------------

          0  recursive calls

          0  db block gets

        616  consistent gets

          0  physical reads

         52  redo size

        455  bytes sent via SQL*Net to client

        503  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

          1  rows processed

 

SQL> SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) FILE#,

  2         DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) BLOCK#,

  3         COUNT(*)

  4    FROM TEST

  5   GROUP BY DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID),

  6       DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID);

 

     FILE#     BLOCK#   COUNT(*)

---------- ---------- ----------

         6         13        614

         6         14        386

 

The experiment above shows that a statement's consistent gets generally equals the number of undo records that must be applied, plus 2.
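To check the undo-record count behind this arithmetic, the deleting session's undo usage can be read from v$transaction (a sketch; USED_UREC counts undo records and USED_UBLK undo blocks for the current, still-uncommitted transaction):

```sql
-- Run in the session that issued the uncommitted DELETEs.
-- USED_UREC: undo records generated; USED_UBLK: undo blocks used.
SELECT t.used_urec, t.used_ublk
  FROM v$transaction t
  JOIN v$session s ON s.taddr = t.addr
 WHERE s.sid = SYS_CONTEXT('USERENV', 'SID');
```

In this experiment, USED_UREC in session 1 should track the consistent gets seen in session 2, minus 2.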

Dumping the undo (DUMP UNDO) shows that a single undo block holds roughly 30 undo records for this data block.

Increasing the arraysize does not change the consistent gets.

With event 10201 set (Dump Consistent Read Undo Application), the trace shows that Oracle applies the undo records one at a time.

With event 10200 set (Dump Consistent Reads buffer trace), I still could not tell which block Oracle was reading so many times.
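For reference, the two diagnostic events above can be set per session with the standard event syntax (a sketch; both write to the session's trace file, and the exact dump contents vary by version):

```sql
-- 10201: dump each undo record applied during consistent-read rollback
ALTER SESSION SET EVENTS '10201 trace name context forever, level 1';
-- 10200: trace consistent-read buffer access
ALTER SESSION SET EVENTS '10200 trace name context forever, level 1';

-- run the test query here, then switch the events off:
ALTER SESSION SET EVENTS '10201 trace name context off';
ALTER SESSION SET EVENTS '10200 trace name context off';
```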

 

I was quite puzzled about what Oracle was reading so many times. 熊哥 took a look with OraTracer and found that it was mostly reads of undo blocks. I am on Win7 x64, where OraTracer keeps dying on me… frustrating.

 

The book TOP (Troubleshooting Oracle Performance) has a passage on this:

For every SQL statement, the database engine has to guarantee the consistency of the processed data. For that purpose, based on current blocks and undo information, consistent copies of blocks might be created at runtime. To execute such an operation, several logical reads are performed. Therefore, the number of logical reads performed by a SQL statement is strongly dependent on the number of blocks that have to be reconstructed.

 

I had always taken this to mean that consistent reads depend mainly on the number of blocks that must be reconstructed. While digging into undo today, I unexpectedly found that they actually depend heavily on the number of undo records that must be applied.

 

Some time ago I captured a series of anomalous SQL statements on Oracle ERP. The tables they accessed had large batches of rows DELETEd and then re-INSERTed at regular intervals. The SP (Statspack) report showed an average of more than 2,000 logical reads per execution, yet when I tested the statements myself I saw only about 30. I checked v$sql_plan: the execution plans were identical, and the column distributions were not skewed. Since those statements were not yet TOP SQL at the time, I did not investigate further. In hindsight, it was probably this consistent-read effect.
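A quick way to spot this pattern is to compare the average logical reads per execution recorded in v$sql against what the same statement costs in a fresh test run (a sketch; substitute the SQL_ID of the suspect statement for the &sql_id variable):

```sql
-- Average buffer gets per execution for a given statement;
-- a figure far above an isolated test run hints at heavy CR reconstruction.
SELECT sql_id,
       executions,
       buffer_gets,
       ROUND(buffer_gets / NULLIF(executions, 0)) AS avg_gets_per_exec
  FROM v$sql
 WHERE sql_id = '&sql_id';
```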

 

How can this problem be mitigated? First, avoid large transactions: break the work up and commit in stages. In a highly concurrent system, a smaller block size is also worth considering; fewer rows per block means that index-driven statements touching only a few data blocks are less likely to incur very high logical reads.
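The smaller-block-size idea can be sketched as follows (assuming an 8 KB default block size; a matching buffer cache must be configured before a non-standard-block-size tablespace can be created, and the datafile path here is hypothetical):

```sql
-- Reserve a buffer cache for 4 KB blocks (required before the CREATE).
ALTER SYSTEM SET db_4k_cache_size = 32M;

-- Hot, frequently rewritten tables can then be moved into a 4 KB tablespace.
CREATE TABLESPACE hot_small_blk
  DATAFILE '/u01/oradata/hot_small_blk01.dbf' SIZE 100M
  BLOCKSIZE 4K;
```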

 

 

From the ITPUB blog: http://blog.itpub.net/8242091/viewspace-671343/ — please cite the source when reposting; otherwise legal liability may be pursued.
