Analysis of frequent updates to the seq$ table in Oracle
https://blog.csdn.net/zy5757/article/details/77742917
While analyzing an Oracle AWR report, I noticed under SQL ordered by Executions (the section that ranks the top SQL by number of executions within the monitoring window) a statement that was being run extremely frequently, tens of thousands of times in one hour:
update seq$ set increment$=:2, minvalue=:3, maxvalue=:4, cycle#=:5, order$=:6, cache=:7, highwater=:8, audit$=:9, flags=:10 where obj#=:1
So what is this seq$ data dictionary table used for? It stores the metadata of the database's sequence objects (SEQUENCE) and is the table Oracle uses to track how each sequence changes. Let's verify this with an experiment: we enable the 10046 event to trace the session (level 4 turns on SQL_TRACE and also captures bind variable values in the trace file) and trace the creation of a sequence.
The test environment below is Oracle 11g.
SQL> show user
USER is "HR"
SQL> alter session set events '10046 trace name context forever, level 4';
Session altered.
SQL> create sequence my_sequence_test start with 1 increment by 1 maxvalue 999999999 nocache;
Sequence created.
SQL> alter session set events '10046 trace name context off';
Session altered.
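To locate the raw trace file for the current session and format it with tkprof, something like the following can be used (the trace file name DBdb_ora_1735.trc is the one produced in this test environment, and the output file name seq_trace.txt is just illustrative; sys=yes keeps the recursive SYS statements, such as the insert into seq$, in the report):
SQL> select value from v$diag_info where name = 'Default Trace File';
$ tkprof DBdb_ora_1735.trc seq_trace.txt sys=yes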
After converting the trace file into readable form with tkprof, you will notice that creating the sequence inserted a row into the seq$ data dictionary table (in essence, creating a sequence is nothing more than inserting one row into seq$ and one into obj$). The end of the trace contains the following:
********************************************************************************
SQL ID: acd938p9jb374 Plan Hash: 0
insert into seq$(obj#,increment$,minvalue,maxvalue,cycle#,order$,cache,
highwater,audit$,flags)
values
(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 1 1 4 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 1 1 4 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=1 pr=1 pw=0 time=150 us)
********************************************************************************
The tkprof-formatted output file does not show the bind variable values; in the raw trace file DBdb_ora_1735.trc you can see the values bound to each placeholder, as follows:
=====================
PARSING IN CURSOR #139865487614776 len=132 dep=1 uid=0 oct=2 lid=0 tim=1547602293884543 hv=1393921252 ad='723ed1c8' sqlid='acd938p9jb374'
insert into seq$(obj#,increment$,minvalue,maxvalue,cycle#,order$,cache,highwater,audit$,flags)values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10)
END OF STMT
PARSE #139865487614776:c=127,e=361,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1547602293884542
BINDS #139865487614776:
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=00 fl2=0001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=7f34f8c55cb8 bln=22 avl=04 flg=05
value= 90097
Bind#1
oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=723c7770 bln=22 avl=02 flg=09
value=1
Bind#2
oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=723c7782 bln=22 avl=02 flg=09
value=1
Bind#3
oacdty=02 mxl=22(06) mxlc=00 mal=00 scl=00 pre=00
oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=723c7794 bln=22 avl=06 flg=09
value=999999999
Bind#4
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
kxsbbbfp=7f34f8c55c70 bln=22 avl=01 flg=05
value=0
Bind#5
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=7f34f8c55c88 bln=22 avl=01 flg=01
value=0
Bind#6
oacdty=02 mxl=22(01) mxlc=00 mal=00 scl=00 pre=00
oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=723c77a6 bln=22 avl=01 flg=09
value=0
Bind#7
oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
oacflg=10 fl2=0001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=723c77b8 bln=22 avl=02 flg=09
value=1
Bind#8
oacdty=01 mxl=32(32) mxlc=00 mal=00 scl=00 pre=00
oacflg=10 fl2=0001 frm=01 csi=852 siz=32 off=0
kxsbbbfp=723c77ca bln=32 avl=32 flg=09
value="--------------------------------"
Bind#9
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=00 fl2=0001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=7f34f8c55c40 bln=22 avl=02 flg=05
value=8
EXEC #139865487614776:c=497,e=602,p=1,cr=1,cu=4,mis=1,r=1,dep=1,og=4,plh=0,tim=1547602293885236
STAT #139865487614776 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=1 pr=1 pw=0 time=150 us)'
CLOSE #139865487614776:c=0,e=2,dep=1,type=3,tim=1547602293885267
=====================
Using the queries below, you can see that these bind values are simply the attributes of the sequence object (its OBJECT_ID, MINVALUE, MAXVALUE, CACHE, and so on):
SQL> select obj#,increment$,minvalue,maxvalue,cycle#,cache,highwater from seq$ where obj#=90097;
OBJ# INCREMENT$ MINVALUE MAXVALUE CYCLE# CACHE HIGHWATER
---------- ---------- ---------- ---------- ---------- ---------- ----------
90097 1 1 999999999 0 0 1
SQL> select object_type,object_name from dba_objects where object_id=90097;
OBJECT_TYPE OBJECT_NAME
------------------- ------------------------------
SEQUENCE MY_SEQUENCE_TEST
SQL> select * from dba_sequences where sequence_name='MY_SEQUENCE_TEST';
SEQUENCE_OWNER SEQUENCE_NAME MIN_VALUE MAX_VALUE INCREMENT_BY C O CACHE_SIZE LAST_NUMBER
------------------------------ ------------------------------ ---------- ---------- ------------ - - ---------- -----------
HR MY_SEQUENCE_TEST 1 999999999 1 N N 0 1
Next, let's use SQL TRACE to see what is done to seq$ when the sequence is actually used. As shown below, after enabling SQL_TRACE we execute the same statement three times:
SQL> select my_sequence_test.currval, my_sequence_test.nextval from dual;
CURRVAL NEXTVAL
---------- ----------
1 1
SQL> alter session set sql_trace=true;
Session altered.
SQL> select my_sequence_test.currval, my_sequence_test.nextval from dual;
CURRVAL NEXTVAL
---------- ----------
2 2
SQL> select my_sequence_test.currval, my_sequence_test.nextval from dual;
CURRVAL NEXTVAL
---------- ----------
3 3
SQL> select my_sequence_test.currval, my_sequence_test.nextval from dual;
CURRVAL NEXTVAL
---------- ----------
4 4
SQL> alter session set sql_trace=false;
Session altered.
In the trace file (the process of locating and formatting it is the same as above, so the details are omitted here), you will see that seq$ was updated three times as well, each update advancing the HIGHWATER value:
select my_sequence_test.currval, my_sequence_test.nextval
from
dual
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
cache=:7,highwater=:8,audit$=:9,flags=:10
where
obj#=:1
********************************************************************************
SQL ID: d6jrus83jv6p2 Plan Hash: 1070122491
select my_sequence_test.currval, my_sequence_test.nextval
from
dual
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 3 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 6 0.00 0.00 0 0 3 3
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12 0.00 0.00 0 0 3 3
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 84
Number of plan statistics captured: 3
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SEQUENCE MY_SEQUENCE_TEST (cr=1 pr=0 pw=0 time=249 us)
1 1 1 FAST DUAL (cr=0 pr=0 pw=0 time=1 us cost=2 size=0 card=1)
********************************************************************************
SQL ID: 4m7m0t6fjcs5x Plan Hash: 1935744642
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
cache=:7,highwater=:8,audit$=:9,flags=:10
where
obj#=:1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 3 6 3
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.00 0 3 6 3
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
********************************************************************************
Next, let's change the sequence's CACHE value and repeat the same steps. As shown below, this time the trace file shows only one more update to seq$: the number of updates to seq$ depends on the CACHE value, so an appropriate CACHE setting can greatly reduce how often the seq$ dictionary table has to be updated.
SQL> alter sequence my_sequence_test cache 10;
Sequence altered.
SQL> alter session set sql_trace=true;
Session altered.
SQL> select my_sequence_test.currval, my_sequence_test.nextval from dual;
CURRVAL NEXTVAL
---------- ----------
5 5
SQL> select my_sequence_test.currval, my_sequence_test.nextval from dual;
CURRVAL NEXTVAL
---------- ----------
6 6
SQL> select my_sequence_test.currval, my_sequence_test.nextval from dual;
CURRVAL NEXTVAL
---------- ----------
7 7
SQL> alter session set sql_trace=false;
Session altered.
Looking at the trace file again:
********************************************************************************
SQL ID: d6jrus83jv6p2 Plan Hash: 1070122491
select my_sequence_test.currval, my_sequence_test.nextval
from
dual
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 6 0.00 0.00 0 0 0 0
Execute 6 0.00 0.00 0 0 0 0
Fetch 12 0.00 0.00 0 0 4 6
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 24 0.00 0.00 0 0 4 6
Misses in library cache during parse: 2
Optimizer mode: ALL_ROWS
Parsing user id: 84
Number of plan statistics captured: 5
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SEQUENCE MY_SEQUENCE_TEST (cr=1 pr=0 pw=0 time=155 us)
1 1 1 FAST DUAL (cr=0 pr=0 pw=0 time=1 us cost=2 size=0 card=1)
********************************************************************************
SQL ID: 4m7m0t6fjcs5x Plan Hash: 1935744642
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
cache=:7,highwater=:8,audit$=:9,flags=:10
where
obj#=:1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 4 0.00 0.00 0 4 8 4
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 4 8 4
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
********************************************************************************
You can see that the cumulative execution count of the SELECT against my_sequence_test went from 3 to 6 (3 more executions), while the cumulative number of updates to seq$ only went from 3 to 4 (just 1 more execution).
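To see the effect at the dictionary level directly, re-query seq$ after the cached fetches. With CACHE 10, the HIGHWATER column now advances in steps of 10 instead of 1, which is why a single update was enough to cover the three NEXTVAL calls above (90097 is the sequence's obj# from the earlier test):
SQL> select obj#, cache, highwater from seq$ where obj#=90097;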
Next, let's create a table, call the sequence repeatedly in a loop, and then generate an AWR report covering that time period to reproduce the problem seen in production:
SQL> create table test(id number);
Table created.
SQL> begin
2 for row_num in 1 .. 50000 loop
3 insert into test
4 select my_sequence_test.nextval from dual;
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
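To bracket this test window with AWR data, one simple approach (a sketch assuming the Diagnostics Pack license and suitable privileges, not necessarily the exact steps used originally) is to take a manual snapshot immediately before and after the loop and then run the standard report script:
SQL> exec dbms_workload_repository.create_snapshot;
-- run the PL/SQL loop shown above, then take the closing snapshot
SQL> exec dbms_workload_repository.create_snapshot;
SQL> @?/rdbms/admin/awrrpt.sql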
In the resulting AWR report you can see that the INSERT statement was executed 50,000 times, while the update of seq$ was executed 5,003 times, because the sequence's CACHE was set to 10 in the test above. Without a CACHE, calling the sequence 50,000 times would update seq$ 50,000 times as well.
Calling a sequence also incurs some redo overhead. In the test below we first set the sequence back to NOCACHE; each execution then generates a few hundred bytes of redo (676 bytes per call in this run).
SQL> alter sequence my_sequence_test nocache;
Sequence altered.
SQL> set autotrace on;
SQL> select my_sequence_test.nextval from dual;
NEXTVAL
----------
50015
Execution Plan
----------------------------------------------------------
Plan hash value: 1070122491
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | SEQUENCE | MY_SEQUENCE_TEST | | | |
| 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
16 recursive calls
3 db block gets
16 consistent gets
0 physical reads
676 redo size
527 bytes sent via SQL*Net to client
519 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select my_sequence_test.nextval from dual;
NEXTVAL
----------
50016
Execution Plan
----------------------------------------------------------
Plan hash value: 1070122491
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | SEQUENCE | MY_SEQUENCE_TEST | | | |
| 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
1 recursive calls
3 db block gets
1 consistent gets
0 physical reads
676 redo size
527 bytes sent via SQL*Net to client
519 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
For a sequence with a CACHE, redo is generated far less often. As the test below shows, of three executions only the first generated any redo. This naturally depends on the CACHE value: once the cached values are used up and a new batch has to be cached, redo is generated again.
SQL> alter sequence my_sequence_test cache 10;
Sequence altered.
SQL> select my_sequence_test.nextval from dual;
NEXTVAL
----------
50017
Execution Plan
----------------------------------------------------------
Plan hash value: 1070122491
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | SEQUENCE | MY_SEQUENCE_TEST | | | |
| 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
3 recursive calls
3 db block gets
3 consistent gets
0 physical reads
684 redo size
527 bytes sent via SQL*Net to client
519 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select my_sequence_test.nextval from dual;
NEXTVAL
----------
50018
Execution Plan
----------------------------------------------------------
Plan hash value: 1070122491
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | SEQUENCE | MY_SEQUENCE_TEST | | | |
| 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
527 bytes sent via SQL*Net to client
519 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select my_sequence_test.nextval from dual;
NEXTVAL
----------
50019
Execution Plan
----------------------------------------------------------
Plan hash value: 1070122491
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | SEQUENCE | MY_SEQUENCE_TEST | | | |
| 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
527 bytes sent via SQL*Net to client
519 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
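Besides autotrace, the redo generated by the current session can also be checked from the session statistics: query the value of the 'redo size' statistic before and after a NEXTVAL call and take the difference. This is a simple cross-check, not part of the original test:
-- run before and after a NEXTVAL call; the difference is the redo attributable to that call
SQL> select n.name, s.value from v$mystat s, v$statname n where s.statistic# = n.statistic# and n.name = 'redo size';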
One more issue: if a sequence is NOCACHE, concurrent calls to it also produce row lock contention, because concurrent sessions are updating the same row in seq$. Setting a suitable CACHE value therefore has clear benefits: it reduces redo generation and it reduces row lock contention on seq$. The trade-off is that a cached sequence may skip numbers (for example after an instance restart, unused cached values are lost), so the cache size should be chosen according to the actual requirements.
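To find sequences worth reviewing in an existing database, a simple sketch is to list application sequences that are NOCACHE or have a very small cache:
-- 20 is an arbitrary threshold; NOCACHE sequences show cache_size = 0
SQL> select sequence_owner, sequence_name, cache_size from dba_sequences where cache_size < 20 and sequence_owner not in ('SYS','SYSTEM');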
Source: ITPUB blog, http://blog.itpub.net/31397003/viewspace-2561480/