Massive Data Migration: Splitting Large Tables by Rowid

Posted by jeanron100 on 2014-08-12

In an earlier post we discussed splitting very large tables via partitions plus parallelism. That approach spreads the data quite evenly, but it is not perfect.
Suppose the data volume goes up another few levels: say the table is now 1TB, with 10 partitions, the largest of which is 400GB. If we want to export the data as evenly as possible, parallelism alone may no longer do the job.
For instance, if we want each dump file kept to roughly 200MB, the 400GB partition would need a parallel degree of 800 to finish. In day-to-day database maintenance we know the default parallel degree is only 64; even raised several-fold, it will never reach 800.
So when the data volume is extreme and resources are tight, the generated dumps can end up quite large.

Instead, we can use rowid to meet this requirement.
We can specify how many dump files to generate as needed. For example, if the table subscriber is 600MB and we take 200MB as the unit, we need to generate 3 dump files.
To make the split even enough, we have to do a little work with rowid.
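The chunk count itself is just ceiling division. A minimal sketch, using the 600MB/200MB figures from the subscriber example above (the variable names are illustrative):

```shell
#!/bin/sh
# Number of dump files = ceiling(segment size / target chunk size).
# Figures taken from the subscriber example; purely illustrative.
segment_mb=600
chunk_mb=200
chunks=$(( (segment_mb + chunk_mb - 1) / chunk_mb ))
echo "$chunks"   # prints 3
```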
First we set up a parameter file in the following format.
Note that table MEMO is extremely large: at 200MB per unit, its biggest partition (P9_A3000_E5) needs a parallel degree of 800.
Table ICE_AGREEMENT is fairly small and is not partitioned; we use x as a temporary placeholder for the partition name, which makes the two cases easy to tell apart during processing.

MEMO                                 P9_A3000_E0                           156
MEMO                                 P9_A3000_E1                           170
MEMO                                 P9_A3000_E2                           190
MEMO                                 P9_A3000_E3                           200
MEMO                                 P9_A3000_E4                           180
MEMO                                 P9_A3000_E5                           800
MEMO                                 PMAXVALUE_AMAXVALUE_EMAXVALUE         1
ICE_AGREEMENT                        x                                    36
CRIBER_HISTORY                       x                                    11
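Each line of this parameter file maps to one run of the split script shown below. A hedged sketch of the driving loop, assuming the file is saved as tab_parall.lst (CONN and OWNER are placeholders for the connection string and schema owner, not names from the original article):

```shell
#!/bin/sh
# Hedged sketch: build one gen_rowid.sh invocation per parameter-file line.
# Column 1 = table, column 2 = partition (x if none), column 3 = parallel degree.
cat > tab_parall.lst <<'EOF'
MEMO            P9_A3000_E0    156
ICE_AGREEMENT   x              36
EOF
cmds=$(while read tab part deg; do
  [ -n "$tab" ] && echo "ksh gen_rowid.sh \$CONN \$OWNER $tab $part $deg"
done < tab_parall.lst)
echo "$cmds"
```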

The rowid split can be done with the following script.

#### $1 dba conn details
#### $2 table owner
#### $3 table_name
#### $4 subobject_name
#### $5 parallel_no
#### split a non-partitioned table into $5 rowid ranges
function normal_split
{
sqlplus -s $1 <<EOF
set linesize 200
set pages 0
set feedback off
spool list/rowid_range_$3_x.lst
select rownum || ', ' ||' rowid between '||
chr(39)||dbms_rowid.rowid_create( 1, DOI, lo_fno, lo_block, 0 ) ||chr(39)|| ' and  ' ||
chr(39)||dbms_rowid.rowid_create( 1, DOI, hi_fno, hi_block, 1000000 )||chr(39) data
from (
SELECT DISTINCT DOI, grp,
first_value(relative_fno) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) lo_fno,
first_value(block_id ) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) lo_block,
last_value(relative_fno) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) hi_fno,
last_value(block_id+blocks-1) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) hi_block,
SUM(blocks) over (partition BY DOI,grp) sum_blocks,SUBOBJECT_NAME
     FROM(
SELECT   obj.OBJECT_ID,
                 obj.SUBOBJECT_NAME,
                 obj.DATA_OBJECT_ID     as DOI,
                 ext.relative_fno,
         ext.block_id,
         ( SUM(blocks) over () ) SUM,
         (SUM(blocks) over (ORDER BY DATA_OBJECT_ID,relative_fno, block_id)-0.01 ) sum_fno ,
         TRUNC( (SUM(blocks) over (ORDER BY DATA_OBJECT_ID,relative_fno, block_id)-0.01) / (SUM(blocks) over ()/ $5 ) ) grp,
         ext.blocks
FROM     dba_extents ext, dba_objects obj
WHERE    ext.segment_name = UPPER('$3')
AND      ext.owner        = UPPER('$2')
AND      obj.owner       =  ext.owner
AND      obj.object_name     = ext.segment_name
AND      obj.DATA_OBJECT_ID IS NOT NULL
ORDER BY DATA_OBJECT_ID, relative_fno, block_id
) order by  DOI,grp
);
spool off;
EOF
}

#### split a single partition into $5 rowid ranges
function partition_split
{
sqlplus -s $1 <<EOF
set linesize 200
set pages 0
set feedback off
spool list/rowid_range_$3_$4.lst
select rownum || ', ' ||' rowid between '||
chr(39)||dbms_rowid.rowid_create( 1, DOI, lo_fno, lo_block, 0 ) ||chr(39)|| ' and  ' ||
chr(39)||dbms_rowid.rowid_create( 1, DOI, hi_fno, hi_block, 1000000 )||chr(39) data
from (
SELECT DISTINCT DOI, grp,
first_value(relative_fno) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) lo_fno,
first_value(block_id ) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) lo_block,
last_value(relative_fno) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) hi_fno,
last_value(block_id+blocks-1) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) hi_block,
SUM(blocks) over (partition BY DOI,grp) sum_blocks,SUBOBJECT_NAME
     FROM(
SELECT   obj.OBJECT_ID,
                 obj.SUBOBJECT_NAME,
                 obj.DATA_OBJECT_ID     as DOI,
                 ext.relative_fno,
         ext.block_id,
         ( SUM(blocks) over () ) SUM,
         (SUM(blocks) over (ORDER BY DATA_OBJECT_ID,relative_fno, block_id)-0.01 ) sum_fno ,
         TRUNC( (SUM(blocks) over (ORDER BY DATA_OBJECT_ID,relative_fno, block_id)-0.01) / (SUM(blocks) over ()/ $5 ) ) grp,
         ext.blocks
FROM     dba_extents ext, dba_objects obj
WHERE    ext.segment_name = UPPER('$3')
AND      ext.owner        = UPPER('$2')
AND      obj.owner       =  ext.owner
AND      obj.object_name     = ext.segment_name
AND      obj.DATA_OBJECT_ID IS NOT NULL
AND      obj.subobject_name=UPPER('$4')
ORDER BY DATA_OBJECT_ID, relative_fno, block_id
) order by  DOI,grp
);
spool off
EOF
}

sub_partition_name=$4

if [[ $sub_partition_name = 'x' ]]
then
normal_split $1 $2 $3 x $5
else
partition_split $1 $2 $3 $4 $5
fi
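The heart of the SQL above is the grp expression: a running sum of extent blocks, divided by (total blocks / parallel degree) and truncated, assigns each extent to one of N roughly equal buckets. The same logic can be mimicked on toy extent sizes (awk stands in for the SQL here; the numbers are made up for illustration):

```shell
#!/bin/sh
# Eight extents of 8 blocks each, split into N=4 groups:
# grp = TRUNC((running_sum - 0.01) / (total / N)), mirroring the script's SQL.
groups=$(echo "8 8 8 8 8 8 8 8" | awk -v N=4 '{
  total = 0; for (i = 1; i <= NF; i++) total += $i
  run = 0
  for (i = 1; i <= NF; i++) {
    run += $i
    printf "%d%s", int((run - 0.01) / (total / N)), (i < NF ? " " : "\n")
  }
}')
echo "$groups"   # prints 0 0 1 1 2 2 3 3
```

Each group covers an equal share of the blocks, which is why the resulting rowid ranges come out so even.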

The script is fairly long and takes 5 arguments. Since it queries dba_extents and dba_objects, certain privileges are required; an account with DBA privileges will do.
The 2nd argument is the table owner, the 3rd is the table name, the 4th is the partition name (the partition name if the table is partitioned, x if it is not), and the 5th is the desired parallel degree, which can speed things up to some extent.
A quick demonstration: the script can be run as follows. Here we ask for 10 dumps, and this table is not partitioned.

ksh gen_rowid.sh n1/n1 prdowner subscriber_history x 10
1,  where  rowid between 'AAB4VPAAJAAD7qAAAA' and  'AAB4VPAAJAAD/R/EJA'
2,  where  rowid between 'AAB4VPAAJAAD/SAAAA' and  'AAB4VPAAKAABV5/EJA'
3,  where  rowid between 'AAB4VPAAKAABV6AAAA' and  'AAB4VPAALAAE/p/EJA'
4,  where  rowid between 'AAB4VPAALAAE/qAAAA' and  'AAB4VPAAMAAFFh/EJA'
5,  where  rowid between 'AAB4VPAAMAAFFiAAAA' and  'AAB4VPAAyAACuh/EJA'
6,  where  rowid between 'AAB4VPAAyAACuiAAAA' and  'AAB4VPAAzAACe5/EJA'
7,  where  rowid between 'AAB4VPAAzAACe6AAAA' and  'AAB4VPAA1AACZR/EJA'
8,  where  rowid between 'AAB4VPAA1AACZSAAAA' and  'AAB4VPAA2AACWR/EJA'
9,  where  rowid between 'AAB4VPAA2AACWSAAAA' and  'AAB4VPAA4AACP5/EJA'
10,  where  rowid between 'AAB4VPAA4AACQCAAAA' and  'AAB4VPAA5AACHx/EJA'
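Each line of the spooled list can then become the QUERY clause of one per-chunk export. A hedged sketch that turns the list into exp-style parfiles (the file names and parfile contents are illustrative, not from the original article; expdp with a QUERY clause works the same way):

```shell
#!/bin/sh
# Turn each "seq, where rowid between ..." line into an export parfile.
# A single sample line stands in for the spooled list here.
cat > rowid_range_demo.lst <<'EOF'
1,  where  rowid between 'AAB4VPAAJAAD7qAAAA' and  'AAB4VPAAJAAD/R/EJA'
EOF
while IFS=, read -r seq clause; do
  seq=$(echo "$seq" | tr -d ' ')
  cat > "par_${seq}.txt" <<PAR
tables=subscriber_history
query="$clause"
file=subscriber_history_${seq}.dmp
PAR
done < rowid_range_demo.lst
cat par_1.txt
```

Each generated parfile could then be fed to the export tool, e.g. something like exp user/pass parfile=par_1.txt, so the chunks can be exported independently or in parallel.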
Now let's check whether the data is distributed evenly enough.
We can verify it along the following lines, sampling ranges 1, 2, and 10.

SQL> select count(*)from subscriber_history  where  rowid between 'AAB4VPAAJAAD7qAAAA' and  'AAB4VPAAJAAD/R/EJA'
  2  ;

  COUNT(*)
----------
    328759

SQL> select count(*)from  subscriber_history   where  rowid between 'AAB4VPAAJAAD/SAAAA' and  'AAB4VPAAKAABV5/EJA'
  2  /

  COUNT(*)
----------
    318021

SQL> select count(*)from subscriber_history  where  rowid between 'AAB4VPAA4AACQCAAAA' and  'AAB4VPAA5AACHx/EJA';

  COUNT(*)
----------
    332638
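The three sampled counts can be turned into a quick skew figure as a sanity check (counts taken from the output above; integer arithmetic only):

```shell
#!/bin/sh
# Spread between the largest and smallest sampled chunk, as % of the smallest.
lo=318021; hi=332638
spread=$(( (hi - lo) * 100 / lo ))
echo "${spread}%"   # prints 4%
```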

As you can see, the data is quite evenly distributed, which is exactly what we hoped for.

From the ITPUB blog. Link: http://blog.itpub.net/23718752/viewspace-1250185/. Please credit the source when reposting; otherwise legal liability may be pursued.
