Testing the effect of STOPKEY on index scans
Test idea: build a table with 500,000 rows, where the first 99,999 rows have id = 1 and status = 1, and the 100,000th row also has id = 1 but with status = 2. When the query predicate is id = '1' and status = '2' and rownum = 1 and only id is indexed, Oracle starts at the first index entry with id = 1 and, for every index entry it scans, goes back to the table to check whether that row's status is 2. It repeats this all the way to the 100,000th row before it finds the first matching row and can return a result, which is obviously inefficient. With a composite index on (id, status), however, Oracle only needs to read a single index entry to locate the matching row, and in that case query performance is effectively independent of the index's selectivity.
SQL> desc t
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
ID CHAR(20)
LEV VARCHAR2(10)
BRANCH VARCHAR2(10)
STATUS VARCHAR2(1)
COMM VARCHAR2(100)
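The CREATE TABLE statement itself is not included in the transcript; a minimal sketch that matches the DESCRIBE output above (column names and datatypes come from it, the tablespace clause is an assumption) would be:
create table t (
  id     char(20),
  lev    varchar2(10),
  branch varchar2(10),
  status varchar2(1),
  comm   varchar2(100)
) tablespace users;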
SQL> begin
2 for i in 1 .. 500000 loop
3 if i<100000 then
4 insert into t values('1',mod(100000,5),lpad(mod(i,20),5,'x'),'1','comm');
5 elsif i=100000 then
6 insert into t values('1',mod(100000,5),lpad(mod(i,20),5,'x'),'2','comm');
7 elsif i>100000 and i<=200000 then
8 insert into t values('2',mod(100000,5),lpad(mod(i,20),5,'x'),'2','comm');
9 elsif i>200000 and i<=300000 then
10 insert into t values('3',mod(100000,5),lpad(mod(i,20),5,'x'),'2','comm');
11 elsif i>300000 and i<=400000 then
12 insert into t values('4',mod(100000,5),lpad(mod(i,20),5,'x'),'2','comm');
13 else
14 insert into t values('5',mod(100000,5),lpad(mod(i,20),5,'x'),'2','comm');
15 end if;
16 end loop;
17 end;
18 /
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> select count(*) from t;
COUNT(*)
----------
500000
SQL> select count(*) from(select distinct id,status from t);
COUNT(*)
----------
6
SQL> select count(distinct id) from t;
COUNT(DISTINCTID)
-----------------
5
SQL> create index idx_id on t(id) tablespace users;
Index created.
SQL> create index idx_id_status on t(id,status) tablespace users;
Index created.
SQL> select index_name,column_name,column_position from user_ind_columns where table_name='T';
INDEX_NAME COLUMN_NAME COLUMN_POSITION
------------------------------ ------------------------------ ---------------
IDX_ID ID 1
IDX_ID_STATUS ID 1
IDX_ID_STATUS STATUS 2
SQL> select index_name,distinct_keys from user_indexes where table_name='T';
INDEX_NAME DISTINCT_KEYS
------------------------------ -------------
IDX_ID 5
IDX_ID_STATUS 6
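Note that no statistics-gathering step appears in the transcript, yet the plans below carry cardinality estimates (100K, 80360), so statistics were presumably collected at some point, for example with something along these lines:
begin
  dbms_stats.gather_table_stats(
    ownname => user,    -- current schema (assumed)
    tabname => 'T',
    cascade => true);   -- also gather statistics on the two indexes
end;
/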
---- Test
SQL> select /*+ index(t idx_id) */ id,status,comm from t where id='1' and status='2' and rownum=1;
ID S COMM
-------------------- - ----------------------------------------------------------------------------------------------------
1 2 comm
Execution Plan
----------------------------------------------------------
Plan hash value: 415658150
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 28 | 4 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| T | 2 | 56 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | IDX_ID | 100K| | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM=1)
2 - filter("STATUS"='2')
3 - access("ID"='1')
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
899 consistent gets
0 physical reads
0 redo size
540 bytes sent via SQL*Net to client
400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
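The 10046 trace excerpts below are not shown being enabled in the transcript; a common way to capture them for the current session (level 8 includes wait events) is:
alter session set events '10046 trace name context forever, level 8';
-- ... run the test query ...
alter session set events '10046 trace name context off';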
-- Looking at the 10046 trace, we can clearly see Oracle alternating between index reads and table reads; obj#=55900 is the table and obj#=55901 is the index.
..............
WAIT #1: nam='db file sequential read' ela= 228 file#=4 block#=13227 blocks=1 obj#=55900 tim=1356771811774771
WAIT #1: nam='db file sequential read' ela= 298 file#=4 block#=18829 blocks=1 obj#=55901 tim=1356771811775194
WAIT #1: nam='db file sequential read' ela= 1095 file#=4 block#=13228 blocks=1 obj#=55900 tim=1356771811776411
WAIT #1: nam='db file sequential read' ela= 123 file#=4 block#=18830 blocks=1 obj#=55901 tim=1356771811776828
WAIT #1: nam='db file sequential read' ela= 535 file#=4 block#=13229 blocks=1 obj#=55900 tim=1356771811777482
WAIT #1: nam='db file sequential read' ela= 115 file#=4 block#=13230 blocks=1 obj#=55900 tim=1356771811777755
WAIT #1: nam='db file sequential read' ela= 380 file#=4 block#=18831 blocks=1 obj#=55901 tim=1356771811778220
WAIT #1: nam='db file sequential read' ela= 943 file#=4 block#=13231 blocks=1 obj#=55900 tim=1356771811779412
WAIT #1: nam='db file sequential read' ela= 607 file#=4 block#=18832 blocks=1 obj#=55901 tim=1356771811780184
WAIT #1: nam='db file sequential read' ela= 556 file#=4 block#=13232 blocks=1 obj#=55900 tim=1356771811781034
WAIT #1: nam='db file sequential read' ela= 779 file#=4 block#=18833 blocks=1 obj#=55901 tim=1356771811782041
WAIT #1: nam='db file sequential read' ela= 1215 file#=4 block#=13233 blocks=1 obj#=55900 tim=1356771811783368
WAIT #1: nam='db file sequential read' ela= 522 file#=4 block#=18834 blocks=1 obj#=55901 tim=1356771811784071
WAIT #1: nam='db file sequential read' ela= 283 file#=4 block#=13234 blocks=1 obj#=55900 tim=1356771811784434
WAIT #1: nam='db file sequential read' ela= 400 file#=4 block#=13235 blocks=1 obj#=55900 tim=1356771811784991
WAIT #1: nam='db file sequential read' ela= 252 file#=4 block#=18835 blocks=1 obj#=55901 tim=1356771811785346
WAIT #1: nam='db file sequential read' ela= 323 file#=4 block#=13236 blocks=1 obj#=55900 tim=1356771811785840
WAIT #1: nam='db file sequential read' ela= 313 file#=4 block#=18836 blocks=1 obj#=55901 tim=1356771811786263
WAIT #1: nam='db file sequential read' ela= 149 file#=4 block#=13237 blocks=1 obj#=55900 tim=1356771811786528
WAIT #1: nam='db file sequential read' ela= 256 file#=4 block#=18837 blocks=1 obj#=55901 tim=1356771811786913
WAIT #1: nam='db file sequential read' ela= 277 file#=4 block#=13238 blocks=1 obj#=55900 tim=1356771811787278
WAIT #1: nam='db file sequential read' ela= 317 file#=4 block#=18838 blocks=1 obj#=55901 tim=1356771811787778
WAIT #1: nam='db file sequential read' ela= 272 file#=4 block#=13239 blocks=1 obj#=55900 tim=1356771811788153
..............................
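The obj# values quoted in the trace can be mapped back to segment names with a data-dictionary query such as the following (the object_id values are taken from the trace above):
select object_id, object_name, object_type
  from dba_objects
 where object_id in (55900, 55901, 55902);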
-- When the composite index is used, the number of logical reads drops dramatically.
SQL> select /*+ index(t idx_id_status) */ id,status,comm from t where id='1' and status='2' and rownum=1;
ID S COMM
-------------------- - ----------------------------------------------------------------------------------------------------
1 2 comm
Execution Plan
----------------------------------------------------------
Plan hash value: 3889392193
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 28 | 4 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| T | 2 | 56 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | IDX_ID_STATUS | 80360 | | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM=1)
3 - access("ID"='1' AND "STATUS"='2')
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
540 bytes sent via SQL*Net to client
400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
--- In the 10046 trace file for this query there are only 4 'db file sequential read' waits, which matches the 4 consistent gets reported above.
WAIT #1: nam='db file sequential read' ela= 277989 file#=4 block#=7772 blocks=1 obj#=55902 tim=1356771851449041
WAIT #1: nam='db file sequential read' ela= 21429 file#=4 block#=21569 blocks=1 obj#=55902 tim=1356771851471110
WAIT #1: nam='db file sequential read' ela= 6088 file#=4 block#=21365 blocks=1 obj#=55902 tim=1356771851478010
WAIT #1: nam='db file sequential read' ela= 16579 file#=4 block#=13628 blocks=1 obj#=55900 tim=1356771851495606
Dumping these blocks confirms that the first two reads are index branch blocks, the third is a leaf block, and the fourth is a data block of the table.
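The block dumps themselves are not included in the post; one way to produce them, using the file# and block# values from the trace above, is:
alter system dump datafile 4 block 7772;    -- first index read (presumed root/branch block of IDX_ID_STATUS)
alter system dump datafile 4 block 21569;   -- second index read (presumed branch block)
alter system dump datafile 4 block 21365;   -- third index read (presumed leaf block)
alter system dump datafile 4 block 13628;   -- table block holding the matching row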
SQL> select id,status,dbms_rowid.rowid_relative_fno(rowid),dbms_rowid.rowid_block_number(rowid) from t where id='1' and status='2' and rownum=1;
ID S DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
-------------------- - ------------------------------------ ------------------------------------
1 2 4 13628
The matching row is indeed located in the block with file_id 4 and block_id 13628, which corresponds to the fourth read in the trace.
Conclusion: for queries with a rownum <= n predicate where n is small, building a single composite index on all of the columns in the WHERE clause can improve query performance dramatically.