A Simple HBase Migration

Posted by xuexiaogang on 2015-08-10

1. Preparation

1.1 HBase is already installed and working

 

1.2 Check the HBase status

 

[root@namenode ~]# hbase shell

15/07/13 10:42:47 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.0.0-cdh5.4.3, rUnknown, Wed Jun 24 19:34:50 PDT 2015

 

hbase(main):001:0> status

3 servers, 0 dead, 1.0000 average load

 

hbase(main):002:0>

Output like the above means HBase is up and usable.
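For a closer look, the status command also accepts an argument in the shell; for example (not part of the original session, output omitted):

hbase(main):002:0> status 'detailed'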

1.3 Create a test table

hbase(main):069:0> create 'member','member_id','address','info'  

0 row(s) in 1.3860 seconds
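Note that in the HBase shell every argument after the table name in create is a column family, so this statement actually creates three families: member_id, address and info. To confirm the layout, the table can be described (a quick check, not from the original session):

hbase(main):070:0> describe 'member'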

1.4 Insert sample data

hbase(main):032:0> put 'member','scutshuxue','info:age','24'

0 row(s) in 0.0790 seconds

 

hbase(main):033:0>

hbase(main):034:0* put'member','scutshuxue','info:birthday','1987-06-17'

0 row(s) in 0.0060 seconds

 

hbase(main):035:0>

hbase(main):036:0* put'member','scutshuxue','info:company','alibaba'

0 row(s) in 0.0220 seconds

 

hbase(main):037:0>

hbase(main):038:0* put'member','scutshuxue','address:contry','china'

0 row(s) in 0.0090 seconds

 

hbase(main):039:0>

hbase(main):040:0* put'member','scutshuxue','address:province','zhejiang'

0 row(s) in 0.0070 seconds

 

hbase(main):041:0>

hbase(main):042:0* put'member','scutshuxue','address:city','hangzhou'

0 row(s) in 0.0070 seconds

 

hbase(main):047:0*

hbase(main):048:0* put'member','xiaofeng','info:birthday','1987-4-17'

0 row(s) in 0.0060 seconds

 

hbase(main):049:0>

hbase(main):050:0* put'member','xiaofeng','info:favorite','movie'

0 row(s) in 0.0060 seconds

 

hbase(main):051:0>

hbase(main):052:0* put'member','xiaofeng','info:company','alibaba'

0 row(s) in 0.0050 seconds

 

hbase(main):053:0>

hbase(main):054:0* put'member','xiaofeng','address:contry','china'

0 row(s) in 0.0070 seconds

 

hbase(main):055:0>

hbase(main):056:0* put'member','xiaofeng','address:province','guangdong'

0 row(s) in 0.0080 seconds

 

hbase(main):057:0>

hbase(main):058:0* put'member','xiaofeng','address:city','jieyang'

0 row(s) in 0.0070 seconds

 

hbase(main):059:0>

hbase(main):060:0* put'member','xiaofeng','address:town','xianqiao'

0 row(s) in 0.0060 seconds
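To read back a single row instead of the whole table, get can be used; for example (prompt numbers are illustrative):

hbase(main):062:0> get 'member','scutshuxue'

hbase(main):063:0> get 'member','scutshuxue','info:age'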

 

1.5 Retrieve the data

hbase(main):061:0> scan 'member'

ROW                              COLUMN+CELL                                                                               

 scutshuxue                      column=address:city, timestamp=1436753702560, value=hangzhou                              

 scutshuxue                      column=address:contry, timestamp=1436753702509, value=china                               

 scutshuxue                      column=address:province, timestamp=1436753702534, value=zhejiang                          

 scutshuxue                      column=info:age, timestamp=1436753702377, value=24                                        

 scutshuxue                      column=info:birthday, timestamp=1436753702430, value=1987-06-17                           

 scutshuxue                      column=info:company, timestamp=1436753702472, value=alibaba                               

 xiaofeng                        column=address:city, timestamp=1436753702760, value=jieyang                               

 xiaofeng                        column=address:contry, timestamp=1436753702703, value=china                               

 xiaofeng                        column=address:province, timestamp=1436753702729, value=guangdong                         

 xiaofeng                        column=address:town, timestamp=1436753702786, value=xianqiao                              

 xiaofeng                        column=info:birthday, timestamp=1436753702612, value=1987-4-17                            

 xiaofeng                        column=info:company, timestamp=1436753702678, value=alibaba                               

 xiaofeng                        column=info:favorite, timestamp=1436753702644, value=movie                                

2 row(s) in 0.0870 seconds
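scan also accepts options to narrow the output, for instance restricting it to one column family or limiting the number of rows returned (illustrative commands, not from the original session):

hbase(main):062:0> scan 'member', {COLUMNS => 'info'}

hbase(main):063:0> scan 'member', {LIMIT => 1}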

1.6 Check the environment before migrating

Log in to the server where HBase runs (typically the NameNode host) and check the existing directories:

[root@namenode ~]# hadoop fs -ls /

Found 3 items

drwxr-xr-x   - hbase hbase               0 2015-07-13 10:02 /hbase

drwxrwxrwt   - hdfs  supergroup          0 2015-07-13 10:12 /tmp

drwxr-xr-x   - hdfs  supergroup          0 2015-07-13 10:12 /user
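Export runs as a MapReduce job, so its output directory must not already exist. It is worth confirming this before exporting; assuming the path used in this article, a check might look like:

[root@namenode ~]# hadoop fs -ls /tmp/member

If the directory is left over from a previous run, remove it first with hadoop fs -rm -r /tmp/member.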

1.7 Export the HBase table

hbase org.apache.hadoop.hbase.mapreduce.Export member /tmp/member

Here member is the table name, and the path that follows is the output directory on the HDFS file system.
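Export also accepts optional positional arguments for the number of cell versions and a start/end timestamp (in epoch milliseconds), which makes time-ranged or incremental exports possible. A hedged sketch with illustrative values:

hbase org.apache.hadoop.hbase.mapreduce.Export member /tmp/member_inc 3 1436745600000 1436832000000

Here 3 is the maximum number of versions kept per cell, and the two large numbers are the start and end of the export time window.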

2. Data recovery

2.1 Remove the original table

hbase(main):063:0> disable 'member'

0 row(s) in 1.3760 seconds

 

hbase(main):065:0> drop 'member'

0 row(s) in 0.8590 seconds

 

hbase(main):066:0> list

TABLE                                                                                                                      

0 row(s) in 0.0100 seconds

As the output shows, the member table no longer exists.
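If the intention were only to empty the table while keeping it around, truncate does the disable, drop and recreate in one step; disabling and dropping by hand, as above, is only needed when the table itself should disappear. For example (not run in the original session):

hbase(main):067:0> truncate 'member'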

 

2.2 Recreate the table structure

Data exported with Export has to be loaded back with Import, but the table structure must be created before the import.

hbase(main):069:0> create 'member','member_id','address','info'   

0 row(s) in 1.3860 seconds

 

Before the restore, the newly created table is empty:

hbase(main):071:0> scan 'member'

ROW                              COLUMN+CELL                                                                               

0 row(s) in 0.0220 seconds

Now run the Import from the OS shell:

[root@namenode ~]# hbase org.apache.hadoop.hbase.mapreduce.Import member /tmp/member

After the restore, all of the data is back:

hbase(main):072:0> scan 'member'

ROW                              COLUMN+CELL                                                                                

 scutshuxue                      column=address:city, timestamp=1436753702560, value=hangzhou                              

 scutshuxue                      column=address:contry, timestamp=1436753702509, value=china                               

 scutshuxue                      column=address:province, timestamp=1436753702534, value=zhejiang                          

 scutshuxue                      column=info:age, timestamp=1436753702377, value=24                                         

 scutshuxue                      column=info:birthday, timestamp=1436753702430, value=1987-06-17                           

 scutshuxue                      column=info:company, timestamp=1436753702472, value=alibaba                               

 xiaofeng                        column=address:city, timestamp=1436753702760, value=jieyang                               

 xiaofeng                        column=address:contry, timestamp=1436753702703, value=china                               

 xiaofeng                        column=address:province, timestamp=1436753702729, value=guangdong                         

 xiaofeng                        column=address:town, timestamp=1436753702786, value=xianqiao                              

 xiaofeng                        column=info:birthday, timestamp=1436753702612, value=1987-4-17                            

 xiaofeng                        column=info:company, timestamp=1436753702678, value=alibaba                               

 xiaofeng                        column=info:favorite, timestamp=1436753702644, value=movie                                

2 row(s) in 0.0570 seconds

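Once the Import MapReduce job has finished, it is also worth checking that the row count matches the source; for this data set it should report 2 rows. For example:

hbase(main):073:0> count 'member'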

 

2.3 For a remote migration, copy the exported files to the host where the restore will run

[root@namenode ~]# hadoop fs -get /tmp/bak /Downloads/new

[root@namenode ~]# cd /Downloads/new/

[root@namenode new]# ll

total 4

drwxr-xr-x 2 root root 4096 Jul 13 11:05 bak

[root@namenode new]# cd bak/

[root@namenode bak]# ll

total 4

-rw-r--r-- 1 root root 771 Jul 13 11:05 part-m-00000

-rw-r--r-- 1 root root   0 Jul 13 11:05 _SUCCESS

 

At the OS level the files can be copied over by any convenient means; then copy them back into HDFS on the target side:

[root@namenode bak]# hadoop fs -copyFromLocal /Downloads/new/bak/  /tmp/new

[root@namenode bak]# hadoop fs -ls /tmp/

Found 6 items

drwxrwxrwx   - hdfs   supergroup          0 2015-07-13 11:11 /tmp/.cloudera_health_monitoring_canary_files

drwxr-xr-x   - root   supergroup          0 2015-07-13 11:03 /tmp/bak

drwx-wx-wx   - hive   supergroup          0 2015-07-13 10:06 /tmp/hive

drwxrwxrwt   - mapred hadoop              0 2015-07-13 10:03 /tmp/logs

drwxr-xr-x   - root   supergroup          0 2015-07-13 10:34 /tmp/member

drwxr-xr-x   - root   supergroup          0 2015-07-13 11:11 /tmp/new
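When the source and target clusters can reach each other over the network, the local get/copyFromLocal round trip can be replaced by a single DistCp between the two HDFS namespaces. A sketch with placeholder NameNode addresses:

hadoop distcp hdfs://source-nn:8020/tmp/member hdfs://target-nn:8020/tmp/member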

 

2.4 For the remaining restore steps on the target cluster, follow section 2 above.
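Putting the whole flow together, a minimal migration script might look like the sketch below. Paths, the table name and the NameNode addresses are taken from this article or invented as placeholders, and the target table must already exist with the same column families:

#!/bin/bash
# 1. Export the table to HDFS on the source cluster
hbase org.apache.hadoop.hbase.mapreduce.Export member /tmp/member

# 2. Copy the export to the target cluster (DistCp, or get/scp/copyFromLocal as in 2.3)
hadoop distcp hdfs://source-nn:8020/tmp/member hdfs://target-nn:8020/tmp/member

# 3. On the target cluster: recreate the table, then run the Import
echo "create 'member','member_id','address','info'" | hbase shell
hbase org.apache.hadoop.hbase.mapreduce.Import member /tmp/member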

Source: ITPUB blog, http://blog.itpub.net/637517/viewspace-1766829/
