Handling SELECTs Against a Large Table
The system has a table that receives an insert every 2 seconds, each insert carrying 100-odd rows, which adds up to several million rows per day; the table currently holds about 30 GB of data.
Queries may cover one day of data or a whole month. At this volume the speed is, needless to say, poor: even with an index the queries are unacceptably slow. They also run quite frequently, and the system shows heavy cache buffers chains latch contention.
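Before restructuring the table, it is worth confirming that the waits really are on the cache buffers chains latch. A minimal check, assuming the usual v$ views are accessible, might look like this:
-- cumulative statistics for the cache buffers chains latch
select name, gets, misses, sleeps
  from v$latch
 where name = 'cache buffers chains';
-- sessions currently waiting on any latch
select sid, event, seconds_in_wait
  from v$session_wait
 where event like 'latch%';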
The idea is to convert the table into a range-partitioned table on the time column, one partition per day, and to create the index as a partitioned (local) index.
The experiment:
1. Create the original table:
create table EMS.UNIT_POWER_SECOND_T
(
APPLIEDTIME DATE not null,
METERCODE INTEGER not null,
OBJID INTEGER not null,
DATA NUMBER(18,4)
);
2. Create the index:
create index ems.pk_unit_power_second_t on ems.unit_power_second_t(appliedtime,metercode,objid);
3. Insert simulated data:
declare
  v_d       date := to_date('2010-1-1', 'yyyy-mm-dd');
  metercode number;
  objid     number;
begin
  for i in 1 .. 1000000 loop
    -- advance the timestamp by 2 seconds per row
    v_d := v_d + 2 / 24 / 60 / 60;
    -- random meter/object ids; note that mod() is a function in PL/SQL, not an infix operator
    metercode := abs(mod(dbms_random.random, 9));
    objid     := abs(mod(dbms_random.random, 99));
    insert into EMS.UNIT_POWER_SECOND_T
    values (v_d, metercode, objid, dbms_random.random);
  end loop;
  commit;
end;
/
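A quick sanity check of the generated data (the expected result follows from the loop above: 1,000,000 rows at 2-second intervals, i.e. roughly 23 days starting on 2010-01-01):
select count(*), min(appliedtime), max(appliedtime)
  from ems.unit_power_second_t;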
4. Run the query against the unpartitioned table:
SQL> SELECT *
FROM ems.UNIT_POWER_SECOND_T
WHERE appliedTime >= to_date('2010-1-1','yyyy-mm-dd')
AND appliedTime < to_date('2010-1-2','yyyy-mm-dd')
ORDER BY appliedTime;
43199 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 991247716
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 31888 | 1494K| 255 (1)| 00:00:04 |
| 1 | TABLE ACCESS BY INDEX ROWID| UNIT_POWER_SECOND_T | 31888 | 1494K| 255 (1)| 00:00:04 |
|* 2 | INDEX RANGE SCAN | PK_UNIT_POWER_SECOND_T | 31888 | | 120 (1)| 00:00:02 |
------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("APPLIEDTIME">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"APPLIEDTIME"<TO_DATE(' 2010-01-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
10 recursive calls
1 db block gets
6510 consistent gets
398 physical reads
26228 redo size
1500722 bytes sent via SQL*Net to client
32161 bytes received via SQL*Net from client
2881 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
43199 rows processed
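The Note section shows that dynamic sampling was used, meaning the table had no optimizer statistics. For a fairer comparison between the two plans, statistics could be gathered first; a sketch using the standard DBMS_STATS package:
begin
  dbms_stats.gather_table_stats(
    ownname => 'EMS',
    tabname => 'UNIT_POWER_SECOND_T',
    cascade => true);  -- also gathers statistics on the index
end;
/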
5. Create the partitioned table:
create table EMS.UNIT_POWER_SECOND_T111
(
APPLIEDTIME DATE not null,
METERCODE INTEGER not null,
OBJID INTEGER not null,
DATA NUMBER(18,4)
)
partition by range (appliedtime)(
partition unit_power_second_t100102 values less than (to_date('2010-1-2','yyyy-mm-dd')),
partition unit_power_second_t100103 values less than (to_date('2010-1-3','yyyy-mm-dd')),
partition unit_power_second_t100104 values less than (to_date('2010-1-4','yyyy-mm-dd')),
partition unit_power_second_t100105 values less than (to_date('2010-1-5','yyyy-mm-dd')),
partition unit_power_second_t100106 values less than (to_date('2010-1-6','yyyy-mm-dd')),
partition unit_power_second_t100107 values less than (to_date('2010-1-7','yyyy-mm-dd')),
partition unit_power_second_t100108 values less than (to_date('2010-1-8','yyyy-mm-dd')),
partition unit_power_second_t100109 values less than (to_date('2010-1-9','yyyy-mm-dd')),
partition unit_power_second_t100110 values less than (to_date('2010-1-10','yyyy-mm-dd')),
partition unit_power_second_t100111 values less than (to_date('2010-1-11','yyyy-mm-dd')),
partition unit_power_second_t100112 values less than (to_date('2010-1-12','yyyy-mm-dd')),
partition unit_power_second_t100113 values less than (to_date('2010-1-13','yyyy-mm-dd')),
partition unit_power_second_t100114 values less than (to_date('2010-1-14','yyyy-mm-dd')),
partition unit_power_second_t100115 values less than (to_date('2010-1-15','yyyy-mm-dd')),
partition unit_power_second_t100116 values less than (to_date('2010-1-16','yyyy-mm-dd')),
partition unit_power_second_t100117 values less than (to_date('2010-1-17','yyyy-mm-dd')),
partition unit_power_second_t100118 values less than (to_date('2010-1-18','yyyy-mm-dd')),
partition unit_power_second_t100119 values less than (to_date('2010-1-19','yyyy-mm-dd')),
partition unit_power_second_t100120 values less than (to_date('2010-1-20','yyyy-mm-dd')),
partition unit_power_second_t100121 values less than (to_date('2010-1-21','yyyy-mm-dd')),
partition unit_power_second_t100122 values less than (to_date('2010-1-22','yyyy-mm-dd')),
partition unit_power_second_t100123 values less than (to_date('2010-1-23','yyyy-mm-dd')),
partition unit_power_second_t100124 values less than (to_date('2010-1-24','yyyy-mm-dd')),
partition unit_power_second_t100125 values less than (to_date('2010-1-25','yyyy-mm-dd')));
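The plan above also called for creating the index as a partitioned index. The test below happens to be satisfied by a full scan of a single partition, but if indexed access is still wanted (for example when filtering on metercode as well), a locally partitioned index would be the natural counterpart; a sketch, with an illustrative index name:
create index ems.pk_unit_power_second_t111
  on ems.unit_power_second_t111 (appliedtime, metercode, objid)
  local;
Since the design uses one partition per day, a new partition also has to be added before each new day's data arrives, for example:
alter table ems.unit_power_second_t111
  add partition unit_power_second_t100126
  values less than (to_date('2010-1-26','yyyy-mm-dd'));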
6. Insert the data:
-- a commit is needed after the direct-path (APPEND) insert before the table can be queried in the same session
insert /*+ append */ into ems.unit_power_second_t111 select * from ems.unit_power_second_t;
commit;
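A quick way to confirm the rows landed in the expected daily partitions is a partition-extended table name (the partition name is taken from the DDL above):
select count(*)
  from ems.unit_power_second_t111 partition (unit_power_second_t100102);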
7. Run the same query against the partitioned table:
SQL> SELECT *
FROM ems.UNIT_POWER_SECOND_T111
WHERE appliedTime >= to_date('2010-1-1','yyyy-mm-dd')
AND appliedTime < to_date('2010-1-2','yyyy-mm-dd')
ORDER BY appliedTime;
43199 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3250869502
-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 43199 | 759K| | 290 (2)| 00:00:04 | | |
| 1 | PARTITION RANGE SINGLE| | 43199 | 759K| | 290 (2)| 00:00:04 | 1 | 1 |
| 2 | SORT ORDER BY | | 43199 | 759K| 2728K| 290 (2)| 00:00:04 | | |
|* 3 | TABLE ACCESS FULL | UNIT_POWER_SECOND_T111 | 43199 | 759K| | 40 (3)| 00:00:01 | 1 | 1 |
-------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("APPLIEDTIME">=TO_DATE(' 2010-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
162 consistent gets
159 physical reads
0 redo size
1500722 bytes sent via SQL*Net to client
32161 bytes received via SQL*Net from client
2881 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
43199 rows processed
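Consistent gets drop from 6510 with the index-driven plan to 162 here, because the whole day's rows sit in one small partition that can simply be scanned. The same pruning helps the one-month requirement mentioned at the start: a query like the one below would show the Pstart/Pstop columns spanning the corresponding range of daily partitions instead of a single one (plan output not reproduced here):
SELECT *
  FROM ems.UNIT_POWER_SECOND_T111
 WHERE appliedtime >= to_date('2010-1-1','yyyy-mm-dd')
   AND appliedtime < to_date('2010-1-25','yyyy-mm-dd')
 ORDER BY appliedtime;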