[20170204] UNIFORM SIZE=10M and index insert splits
--//I read the linked article last night; at work I repeated the test to see for myself:
1. Environment:
SCOTT@book> @ &r/ver1
PORT_STRING VERSION BANNER
------------------------------ -------------- --------------------------------------------------------------------------------
x86_64/Linux 2.4.xx 11.2.0.4.0 Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
CREATE TABLESPACE TEA DATAFILE
'/mnt/ramdisk/book/tea01.dbf' SIZE 100M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;
2. Construct the test case, with notes:
drop table mytest purge;
create table mytest(x number) tablespace tea;
create index i_mytest_x on mytest(x) tablespace tea;
insert into mytest select level from dual connect by level <= 200126;
commit;
--//Note: the author's article contains a typo; the 216 mentioned later should be 126. With 200126 rows the insert fills the index leaf blocks exactly.
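--//As a cross-check (a sketch, assuming the data_object_id 89714 queried just below): sys_op_lbid maps each
--//table rowid to the index leaf block holding its entry, so the per-leaf counts should match the nrow values
--//in the treedump that follows.
select sys_op_lbid(89714, 'L', t.rowid) leaf_dba, count(*) keys_in_leaf
  from mytest t
 where x is not null
 group by sys_op_lbid(89714, 'L', t.rowid)
 order by 2;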
SCOTT@book> select object_id,data_object_id from dba_objects where owner=user and object_name='I_MYTEST_X';
OBJECT_ID DATA_OBJECT_ID
---------- --------------
89714 89714
SCOTT@book> alter session set events 'immediate trace name treedump level 89714';
Session altered.
--//Inspect the dump:
----- begin tree dump
branch: 0x1800581 25167233 (0: nrow: 399, level: 1)
leaf: 0x1800582 25167234 (-1: nrow: 540 rrow: 540)
leaf: 0x1800583 25167235 (0: nrow: 533 rrow: 533)
leaf: 0x1800584 25167236 (1: nrow: 533 rrow: 533)
leaf: 0x1800585 25167237 (2: nrow: 533 rrow: 533)
...
leaf: 0x180070d 25167629 (394: nrow: 500 rrow: 500)
leaf: 0x180070e 25167630 (395: nrow: 500 rrow: 500)
leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
----- end tree dump
--//Note: the index is now filled exactly.
delete mytest where x<=199000;
commit;
SCOTT@book> alter session set events 'immediate trace name treedump level 89714';
Session altered.
--//Inspect the dump:
*** 2017-02-04 11:49:46.458
----- begin tree dump
branch: 0x1800581 25167233 (0: nrow: 399, level: 1)
leaf: 0x1800582 25167234 (-1: nrow: 540 rrow: 0)
leaf: 0x1800583 25167235 (0: nrow: 533 rrow: 0)
leaf: 0x1800584 25167236 (1: nrow: 533 rrow: 0)
leaf: 0x1800585 25167237 (2: nrow: 533 rrow: 0)
leaf: 0x1800586 25167238 (3: nrow: 533 rrow: 0)
leaf: 0x1800587 25167239 (4: nrow: 533 rrow: 0)
leaf: 0x1800588 25167240 (5: nrow: 533 rrow: 0)
...
leaf: 0x180070c 25167628 (393: nrow: 500 rrow: 0)
leaf: 0x180070d 25167629 (394: nrow: 500 rrow: 0)
leaf: 0x180070e 25167630 (395: nrow: 500 rrow: 126)
leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
----- end tree dump
--//Note: all the leading key values are now deleted; only the trailing keys remain.
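--//The same numbers are available without a treedump (a sketch; validate structure takes a table lock, and
--//index_stats holds a single row visible only to the current session):
analyze index i_mytest_x validate structure;
-- lf_rows counts all leaf entries still physically present, del_lf_rows those flagged deleted;
-- after this delete, nearly all entries should show as deleted.
select lf_rows, del_lf_rows, lf_blks from index_stats;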
SCOTT@book> insert into mytest with x as (select level i from dual connect by level <= 199000) select i from x where mod(i, 250) = 0;
796 rows created.
SCOTT@book> commit ;
Commit complete.
--//Note: insert some rows back, but not consecutive ones: 250, 500, 750, ..., 198750, 199000. Now look at the index:
SCOTT@book> alter session set events 'immediate trace name treedump level 89714';
Session altered.
*** 2017-02-04 11:50:42.745
----- begin tree dump
branch: 0x1800581 25167233 (0: nrow: 399, level: 1)
leaf: 0x1800582 25167234 (-1: nrow: 2 rrow: 2)
leaf: 0x1800583 25167235 (0: nrow: 2 rrow: 2)
leaf: 0x1800584 25167236 (1: nrow: 2 rrow: 2)
leaf: 0x1800585 25167237 (2: nrow: 2 rrow: 2)
leaf: 0x1800586 25167238 (3: nrow: 2 rrow: 2)
leaf: 0x1800587 25167239 (4: nrow: 2 rrow: 2)
leaf: 0x1800588 25167240 (5: nrow: 2 rrow: 2)
leaf: 0x1800589 25167241 (6: nrow: 3 rrow: 3)
...
leaf: 0x180070b 25167627 (392: nrow: 2 rrow: 2)
leaf: 0x180070c 25167628 (393: nrow: 2 rrow: 2)
leaf: 0x180070d 25167629 (394: nrow: 2 rrow: 2)
leaf: 0x180070e 25167630 (395: nrow: 128 rrow: 128)
leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
----- end tree dump
--//Surprising!! Oracle did not pack the inserted keys into a single block; they are spread out at roughly 2 rows per leaf block. My feeling is that the branch level still records entries for those key ranges, so each insert looks them up and lands back in its old leaf block.
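--//One way to test that hunch (a sketch; the output goes to the session trace file) is to dump the root/branch
--//block (file 6, block 1409 per the treedump above) and inspect the separator keys that still route each key
--//range to its old leaf block:
alter system dump datafile 6 block 1409;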
3. Insert one row:
SCOTT@book> alter system flush buffer_cache;
System altered.
SCOTT@book> alter system flush buffer_cache;
System altered.
--//Note: INSERT INTO MYTEST VALUES (200127); inserts what is now the largest key value.
SCOTT@book> @ &r/10046on 12
old 1: alter session set events '10046 trace name context forever, level &1'
new 1: alter session set events '10046 trace name context forever, level 12'
Session altered.
SCOTT@book> INSERT INTO MYTEST VALUES (200127);
1 row created.
SCOTT@book> @ &r/10046off
Session altered.
SCOTT@book> commit ;
Commit complete.
--//Inspect the 10046 trace:
=====================
PARSING IN CURSOR #139853026757984 len=34 dep=0 uid=83 oct=2 lid=83 tim=1486180291162867 hv=1063204715 ad='634b0b08' sqlid='5zb5uy0zpydvb'
INSERT INTO MYTEST VALUES (200127)
END OF STMT
PARSE #139853026757984:c=2000,e=2215,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1486180291162862
WAIT #139853026757984: nam='Disk file operations I/O' ela= 41 FileOperation=2 fileno=6 filetype=2 obj#=89713 tim=1486180291163153
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=128 blocks=1 obj#=89713 tim=1486180291163204
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=433 blocks=1 obj#=89713 tim=1486180291163289
WAIT #139853026757984: nam='db file sequential read' ela= 20 file#=6 block#=1409 blocks=1 obj#=89714 tim=1486180291163450
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1808 blocks=1 obj#=89714 tim=1486180291163524
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
WAIT #139853026757984: nam='Disk file operations I/O' ela= 30 FileOperation=2 fileno=3 filetype=2 obj#=0 tim=1486180291163799
WAIT #139853026757984: nam='db file sequential read' ela= 22 file#=3 block#=256 blocks=1 obj#=0 tim=1486180291163853
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=3 block#=1788 blocks=1 obj#=0 tim=1486180291163927
WAIT #139853026757984: nam='db file sequential read' ela= 22 file#=6 block#=1408 blocks=1 obj#=89714 tim=1486180291164121
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=3 block#=144 blocks=1 obj#=0 tim=1486180291164240
WAIT #139853026757984: nam='db file sequential read' ela= 14 file#=6 block#=1805 blocks=1 obj#=89714 tim=1486180291164315
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1410 blocks=1 obj#=89714 tim=1486180291164380
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1805 blocks=1 obj#=89714 tim=1486180291164453
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1804 blocks=1 obj#=89714 tim=1486180291164523
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1803 blocks=1 obj#=89714 tim=1486180291164587
WAIT #139853026757984: nam='db file sequential read' ela= 14 file#=6 block#=1802 blocks=1 obj#=89714 tim=1486180291164661
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1801 blocks=1 obj#=89714 tim=1486180291164724
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1800 blocks=1 obj#=89714 tim=1486180291164807
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1799 blocks=1 obj#=89714 tim=1486180291164877
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1798 blocks=1 obj#=89714 tim=1486180291164942
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1797 blocks=1 obj#=89714 tim=1486180291165008
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1796 blocks=1 obj#=89714 tim=1486180291165076
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1795 blocks=1 obj#=89714 tim=1486180291165172
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1794 blocks=1 obj#=89714 tim=1486180291165240
WAIT #139853026757984: nam='db file sequential read' ela= 14 file#=6 block#=1793 blocks=1 obj#=89714 tim=1486180291165309
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1792 blocks=1 obj#=89714 tim=1486180291165386
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1791 blocks=1 obj#=89714 tim=1486180291165453
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1790 blocks=1 obj#=89714 tim=1486180291165514
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1789 blocks=1 obj#=89714 tim=1486180291165578
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1788 blocks=1 obj#=89714 tim=1486180291165647
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1787 blocks=1 obj#=89714 tim=1486180291165712
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1786 blocks=1 obj#=89714 tim=1486180291165792
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1785 blocks=1 obj#=89714 tim=1486180291165853
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1784 blocks=1 obj#=89714 tim=1486180291165919
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1783 blocks=1 obj#=89714 tim=1486180291165986
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1782 blocks=1 obj#=89714 tim=1486180291166050
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1781 blocks=1 obj#=89714 tim=1486180291166112
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1780 blocks=1 obj#=89714 tim=1486180291166174
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1779 blocks=1 obj#=89714 tim=1486180291166233
...
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1423 blocks=1 obj#=89714 tim=1486180291188373
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1422 blocks=1 obj#=89714 tim=1486180291188431
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1421 blocks=1 obj#=89714 tim=1486180291188492
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1420 blocks=1 obj#=89714 tim=1486180291188552
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1419 blocks=1 obj#=89714 tim=1486180291188610
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1418 blocks=1 obj#=89714 tim=1486180291188669
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1417 blocks=1 obj#=89714 tim=1486180291188727
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1416 blocks=1 obj#=89714 tim=1486180291188798
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1415 blocks=1 obj#=89714 tim=1486180291188859
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1414 blocks=1 obj#=89714 tim=1486180291188919
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1413 blocks=1 obj#=89714 tim=1486180291188976
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1412 blocks=1 obj#=89714 tim=1486180291189036
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1411 blocks=1 obj#=89714 tim=1486180291189100
EXEC #139853026757984:c=25995,e=26545,p=405,cr=3,cu=821,mis=0,r=1,dep=0,og=1,plh=0,tim=1486180291189522
STAT #139853026757984 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=3 pr=405 pw=0 time=26450 us)'
WAIT #139853026757984: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=89714 tim=1486180291189722
*** 2017-02-04 11:51:37.369
WAIT #139853026757984: nam='SQL*Net message from client' ela= 6180190 driver id=1650815232 #bytes=1 p3=0 obj#=89714 tim=1486180297369960
CLOSE #139853026757984:c=0,e=10,dep=0,type=0,tim=1486180297370090
=====================
--//Focus only on the obj#=89714 lines, in particular the ones marked with ~.
SCOTT@book> select * from dba_extents where segment_name='I_MYTEST_X';
OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME EXTENT_ID FILE_ID BLOCK_ID BYTES BLOCKS RELATIVE_FNO
------ -------------------- ------------------ --------------- ---------- ---------- ---------- ---------- ---------- ------------
SCOTT I_MYTEST_X INDEX TEA 0 6 1408 10485760 1280 6
SCOTT@book> @ &r/convrdba 6 1409
RDBA16 RDBA
-------------- ----------
1800581 25167233
--//This is the index root block.
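--//The @convrdba script itself is not shown here; the same file#,block# to rdba conversion can be done inline
--//with dbms_utility (a minimal equivalent):
select to_char(dbms_utility.make_data_block_address(6,1409),'fmxxxxxxxx') rdba16,
       dbms_utility.make_data_block_address(6,1409) rdba
  from dual;
--//The rdba packs file# into the top 10 bits and block# into the low 22: 6*4194304 + 1409 = 25167233 = 0x1800581.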
SCOTT@book> @ &r/convrdba 6 1808
RDBA16 RDBA
-------------- ----------
1800710 25167632
--//This converts to the matching index block, i.e. the last leaf, where a 90-10 split occurs. Oracle visits dba=6,1408 (the segment header) to look for a free block, and then the tragedy happens: to find one it scans the entire freelist, block by block.
--//It scans dba=6,1410 first, then dba=6,1805, then walks downward one block at a time, ending at dba=6,1411.
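--//To see the freelist the split walked through, the segment header block can also be dumped (a sketch; block
--//1408 is the segment header per dba_extents above, and the dump lands in the session trace file):
alter system dump datafile 6 block 1408;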
SCOTT@book> @ &r/convrdba 6 1805
RDBA16 RDBA
-------------- ----------
180070d 25167629
--//The key finally lands in dba=6,1809, and with that the insert completes. Oddly, the trace never shows a read of block dba=6,1809.
--//Dump the tree again:
SCOTT@book> alter session set events 'immediate trace name treedump level 89714';
Session altered.
*** 2017-02-04 15:05:53.475
----- begin tree dump
branch: 0x1800581 25167233 (0: nrow: 400, level: 1)
leaf: 0x1800582 25167234 (-1: nrow: 2 rrow: 2)
leaf: 0x1800583 25167235 (0: nrow: 2 rrow: 2)
leaf: 0x1800584 25167236 (1: nrow: 2 rrow: 2)
....
leaf: 0x180070d 25167629 (394: nrow: 2 rrow: 2)
leaf: 0x180070e 25167630 (395: nrow: 128 rrow: 128)
leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
leaf: 0x1800711 25167633 (398: nrow: 1 rrow: 1)
----- end tree dump
4. If we keep inserting:
SCOTT@book> alter system flush buffer_cache;
System altered.
SCOTT@book> alter system flush buffer_cache;
System altered.
SCOTT@book> @ &r/10046on 12
old 1: alter session set events '10046 trace name context forever, level &1'
new 1: alter session set events '10046 trace name context forever, level 12'
Session altered.
SCOTT@book> INSERT INTO MYTEST VALUES (200129);
1 row created.
--//Because this time the index does not split.
SCOTT@book> @ &r/10046off
Session altered.
=====================
PARSING IN CURSOR #140276912372776 len=34 dep=0 uid=83 oct=2 lid=83 tim=1486193188666785 hv=1435680765 ad='7d67c060' sqlid='b0s8xtjat5gzx'
INSERT INTO MYTEST VALUES (200129)
END OF STMT
PARSE #140276912372776:c=1999,e=1353,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1486193188666783
WAIT #140276912372776: nam='Disk file operations I/O' ela= 54 FileOperation=2 fileno=6 filetype=2 obj#=89713 tim=1486193188667043
WAIT #140276912372776: nam='db file sequential read' ela= 14 file#=6 block#=128 blocks=1 obj#=89713 tim=1486193188667099
WAIT #140276912372776: nam='db file sequential read' ela= 11 file#=6 block#=433 blocks=1 obj#=89713 tim=1486193188667173
WAIT #140276912372776: nam='db file sequential read' ela= 11 file#=3 block#=2653 blocks=1 obj#=0 tim=1486193188667249
WAIT #140276912372776: nam='db file sequential read' ela= 11 file#=6 block#=1409 blocks=1 obj#=89714 tim=1486193188667357
WAIT #140276912372776: nam='db file sequential read' ela= 14 file#=6 block#=1809 blocks=1 obj#=89714 tim=1486193188667426
EXEC #140276912372776:c=0,e=588,p=5,cr=1,cu=4,mis=0,r=1,dep=0,og=1,plh=0,tim=1486193188667501
STAT #140276912372776 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=1 pr=5 pw=0 time=552 us)'
WAIT #140276912372776: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=89714 tim=1486193188667641
*** 2017-02-04 15:26:35.242
WAIT #140276912372776: nam='SQL*Net message from client' ela= 6574399 driver id=1650815232 #bytes=1 p3=0 obj#=89714 tim=1486193195242075
CLOSE #140276912372776:c=0,e=12,dep=0,type=0,tim=1486193195242203
PARSE #140276912401472:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=1486193195242284
EXEC #140276912401472:c=0,e=396,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=1486193195242710
5. Additional notes:
If the tablespace is created as:
CREATE TABLESPACE TEA DATAFILE
'/mnt/ramdisk/book/tea01.dbf' SIZE 10M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;
or
CREATE TABLESPACE TEA DATAFILE
'/mnt/ramdisk/book/tea01.dbf' SIZE 10M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;
--//With either tablespace I could not reproduce the problem. The issue seems to lie in EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M; exploring further is beyond my ability.
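--//A quick way to confirm which allocation policy a tablespace actually uses:
select tablespace_name, extent_management, allocation_type, segment_space_management
  from dba_tablespaces
 where tablespace_name = 'TEA';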
6. Test an ASSM tablespace:
CREATE TABLESPACE TEA DATAFILE
'/mnt/ramdisk/book/tea01.dbf' SIZE 100M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;
--//The repeated steps are omitted:
=====================
PARSING IN CURSOR #140388084963712 len=34 dep=0 uid=83 oct=2 lid=83 tim=1486194070117967 hv=1063204715 ad='7b4957a0' sqlid='5zb5uy0zpydvb'
INSERT INTO MYTEST VALUES (200127)
END OF STMT
PARSE #140388084963712:c=2000,e=1233,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1486194070117962
WAIT #140388084963712: nam='db file sequential read' ela= 16 file#=6 block#=149 blocks=1 obj#=89720 tim=1486194070118162
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=148 blocks=1 obj#=89720 tim=1486194070118239
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=130 blocks=1 obj#=89720 tim=1486194070118295
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=286 blocks=1 obj#=89720 tim=1486194070118350
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=3 block#=272 blocks=1 obj#=0 tim=1486194070118422
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=3 block#=14742 blocks=1 obj#=0 tim=1486194070118523
WAIT #140388084963712: nam='db file sequential read' ela= 16 file#=6 block#=1430 blocks=1 obj#=89721 tim=1486194070118613
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1836 blocks=1 obj#=89721 tim=1486194070118681
WAIT #140388084963712: nam='db file sequential read' ela= 14 file#=3 block#=256 blocks=1 obj#=0 tim=1486194070118850
WAIT #140388084963712: nam='db file sequential read' ela= 12 file#=3 block#=2966 blocks=1 obj#=0 tim=1486194070118915
WAIT #140388084963712: nam='db file sequential read' ela= 12 file#=6 block#=1429 blocks=1 obj#=89721 tim=1486194070119060
WAIT #140388084963712: nam='db file sequential read' ela= 9 file#=6 block#=1428 blocks=1 obj#=89721 tim=1486194070119118
WAIT #140388084963712: nam='db file sequential read' ela= 9 file#=6 block#=1413 blocks=1 obj#=89721 tim=1486194070119165
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1754 blocks=1 obj#=89721 tim=1486194070119213
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1758 blocks=1 obj#=89721 tim=1486194070119285
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1762 blocks=1 obj#=89721 tim=1486194070119346
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1766 blocks=1 obj#=89721 tim=1486194070119406
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1770 blocks=1 obj#=89721 tim=1486194070119463
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1774 blocks=1 obj#=89721 tim=1486194070119524
WAIT #140388084963712: nam='db file sequential read' ela= 9 file#=6 block#=1778 blocks=1 obj#=89721 tim=1486194070119581
WAIT #140388084963712: nam='db file sequential read' ela= 13 file#=6 block#=1782 blocks=1 obj#=89721 tim=1486194070119658
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1786 blocks=1 obj#=89721 tim=1486194070119727
WAIT #140388084963712: nam='db file sequential read' ela= 12 file#=6 block#=1790 blocks=1 obj#=89721 tim=1486194070119800
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1731 blocks=1 obj#=89721 tim=1486194070119862
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1735 blocks=1 obj#=89721 tim=1486194070119918
...
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1812 blocks=1 obj#=89721 tim=1486194070124952
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1816 blocks=1 obj#=89721 tim=1486194070125007
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1820 blocks=1 obj#=89721 tim=1486194070125061
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1824 blocks=1 obj#=89721 tim=1486194070125115
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1840 blocks=1 obj#=89721 tim=1486194070125183
EXEC #140388084963712:c=6999,e=7283,p=114,cr=2,cu=242,mis=0,r=1,dep=0,og=1,plh=0,tim=1486194070125353
STAT #140388084963712 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=2 pr=114 pw=0 time=7236 us)'
WAIT #140388084963712: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=89721 tim=1486194070125510
*** 2017-02-04 15:41:14.533
WAIT #140388084963712: nam='SQL*Net message from client' ela= 4407807 driver id=1650815232 #bytes=1 p3=0 obj#=89721 tim=1486194074533350
CLOSE #140388084963712:c=0,e=17,dep=0,type=0,tim=1486194074533515
=====================
--//Much the same behavior appears again. The difference: MSSM scans far more blocks, as the grep counts below show.
--//ASSM
$ grep "obj#=89721" /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_14295.trc | grep block#=1|wc
106 1378 12921
--//MSSM
$ grep "obj#=89714" /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_13231.trc | grep blocks=1 |wc
400 5200 48770
--//Quoted verbatim from the article the author links to:
What happens here is that the last insert causes a so-called 90-10 split, which, unlike the usual 50-50 split, requires
the new row to be inserted into a new empty block. In order to save space, Oracle first tries to reuse one of the blocks
below the high watermark which it assumes free (as they are on the freelist). If you trace it, you'll see that Oracle
ends up examining (and taking them off the freelist) the entire range of blocks cleared by the delete statement one by
one.
I've also tried the partial workaround provided by Saibabu: enabling event 45051, and it worked ("partial" because the
amount of I/O remains anomalously high, but fortunately it's mostly repeated reads of the same blocks, so very little
physical I/O will result from it). Other possible solution (apart from fixing the physical structure of the index by
coalescing or rebuilding it after a large delete, which isn't always convenient on a live system during business hours)
would be to migrate to ASSM, but it's not clear whether this would be much of improvement as ASSM has several somewhat
similar bugs, and I'm not sure how many of them are fixed in recent versions.
--//I did not run this test.
I was unable to confirm whether or not we are hitting this exact bug or something else — some scripts have been
deployed to obtain additional diagnostic information, and if we manage to get a definitive confirmation, I'll post an
update (but due to the very intermittent nature of the problem I have no idea how long this may take).
From the ITPUB blog: http://blog.itpub.net/267265/viewspace-2133066/ (please credit the source when reproducing).