Simple HBase Migration
1. Preparation
1.1 HBase is already deployed and running normally
1.2 Check the HBase status
[root@namenode ~]# hbase shell
15/07/13 10:42:47 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.0.0-cdh5.4.3, rUnknown, Wed Jun 24 19:34:50 PDT 2015
hbase(main):001:0> status
3 servers, 0 dead, 1.0000 average load
hbase(main):002:0>
Output like the above means HBase is available.
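The same check can be run non-interactively from bash, which is handy for scripting a pre-migration health check; a minimal sketch:
# Pipe the status command into hbase shell instead of opening an interactive session.
# A healthy cluster reports its live region servers and no dead ones.
echo "status" | hbase shell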
1.3 Create a test table
hbase(main):069:0> create 'member','member_id','address','info'
0 row(s) in 1.3860 seconds
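For a repeatable setup, the same table can also be created from a bash script by piping the DDL into hbase shell; a minimal sketch using the equivalent per-family hash syntax:
# Create 'member' with the same three column families, then print its schema.
hbase shell <<'EOF'
create 'member', {NAME => 'member_id'}, {NAME => 'address'}, {NAME => 'info'}
describe 'member'
EOF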
1.4 Insert sample data
hbase(main):032:0> put'member','scutshuxue','info:age','24'
0 row(s) in 0.0790 seconds
hbase(main):033:0>
hbase(main):034:0* put'member','scutshuxue','info:birthday','1987-06-17'
0 row(s) in 0.0060 seconds
hbase(main):035:0>
hbase(main):036:0* put'member','scutshuxue','info:company','alibaba'
0 row(s) in 0.0220 seconds
hbase(main):037:0>
hbase(main):038:0* put'member','scutshuxue','address:contry','china'
0 row(s) in 0.0090 seconds
hbase(main):039:0>
hbase(main):040:0* put'member','scutshuxue','address:province','zhejiang'
0 row(s) in 0.0070 seconds
hbase(main):041:0>
hbase(main):042:0* put'member','scutshuxue','address:city','hangzhou'
0 row(s) in 0.0070 seconds
hbase(main):047:0*
hbase(main):048:0* put'member','xiaofeng','info:birthday','1987-4-17'
0 row(s) in 0.0060 seconds
hbase(main):049:0>
hbase(main):050:0* put'member','xiaofeng','info:favorite','movie'
0 row(s) in 0.0060 seconds
hbase(main):051:0>
hbase(main):052:0* put'member','xiaofeng','info:company','alibaba'
0 row(s) in 0.0050 seconds
hbase(main):053:0>
hbase(main):054:0* put'member','xiaofeng','address:contry','china'
0 row(s) in 0.0070 seconds
hbase(main):055:0>
hbase(main):056:0* put'member','xiaofeng','address:province','guangdong'
0 row(s) in 0.0080 seconds
hbase(main):057:0>
hbase(main):058:0* put'member','xiaofeng','address:city','jieyang'
0 row(s) in 0.0070 seconds
hbase(main):059:0>
hbase(main):060:0* put'member','xiaofeng','address:town','xianqiao'
0 row(s) in 0.0060 seconds
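When more test rows are needed, the puts can be batched through one shell invocation instead of being typed one by one; a small sketch (the row key testrow and its values are made up for illustration):
# Batch several put commands in a single hbase shell run.
hbase shell <<'EOF'
put 'member', 'testrow', 'info:age', '30'
put 'member', 'testrow', 'address:city', 'shenzhen'
EOF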
1.5 Scan the data
hbase(main):061:0> scan 'member'
ROW COLUMN+CELL
scutshuxue column=address:city, timestamp=1436753702560, value=hangzhou
scutshuxue column=address:contry, timestamp=1436753702509, value=china
scutshuxue column=address:province, timestamp=1436753702534, value=zhejiang
scutshuxue column=info:age, timestamp=1436753702377, value=24
scutshuxue column=info:birthday, timestamp=1436753702430, value=1987-06-17
scutshuxue column=info:company, timestamp=1436753702472, value=alibaba
xiaofeng column=address:city, timestamp=1436753702760, value=jieyang
xiaofeng column=address:contry, timestamp=1436753702703, value=china
xiaofeng column=address:province, timestamp=1436753702729, value=guangdong
xiaofeng column=address:town, timestamp=1436753702786, value=xianqiao
xiaofeng column=info:birthday, timestamp=1436753702612, value=1987-4-17
xiaofeng column=info:company, timestamp=1436753702678, value=alibaba
xiaofeng column=info:favorite, timestamp=1436753702644, value=movie
2 row(s) in 0.0870 seconds
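A full scan is fine for a table this small; for a quicker spot check you can also read a single row or a single column family. A minimal sketch:
# Fetch one row by its key, then scan only the 'info' column family.
hbase shell <<'EOF'
get 'member', 'scutshuxue'
scan 'member', {COLUMNS => 'info'}
EOF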
1.6 Check the environment before the data migration
Log in to the server where HBase runs, which is usually the NameNode host, and check the directories that already exist on HDFS:
[root@namenode ~]# hadoop fs -ls /
Found 3 items
drwxr-xr-x - hbase hbase 0 2015-07-13 10:02 /hbase
drwxrwxrwt - hdfs supergroup 0 2015-07-13 10:12 /tmp
drwxr-xr-x - hdfs supergroup 0 2015-07-13 10:12 /user
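Export runs as a MapReduce job, so its output directory must not already exist; it is worth checking the target path before launching the job. A minimal sketch:
# Warn if /tmp/member already exists; Export will fail rather than overwrite it.
hadoop fs -test -e /tmp/member && echo "/tmp/member already exists, remove it or choose another path"
# To delete an old export (destructive, double-check the path first):
# hadoop fs -rm -r /tmp/member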
1.7 Export the HBase table
hbase org.apache.hadoop.hbase.mapreduce.Export member /tmp/member
Here member is the table name and /tmp/member is the output directory on the HDFS file system.
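Export also takes optional trailing arguments for the number of cell versions and a start/end timestamp, which allows something close to an incremental export; a sketch (the version count and the epoch-millisecond timestamps below are placeholders):
# Usage: Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
# Export at most 1 version of each cell written inside the placeholder time window.
hbase org.apache.hadoop.hbase.mapreduce.Export member /tmp/member_incr 1 1436745600000 1436832000000
# Confirm the job produced output files:
hadoop fs -ls /tmp/member_incr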
2. Restoring the data
2.1 Remove the original table
hbase(main):063:0> disable 'member'
0 row(s) in 1.3760 seconds
hbase(main):065:0> drop 'member'
0 row(s) in 0.8590 seconds
hbase(main):066:0> list
TABLE
0 row(s) in 0.0100 seconds
As the list output shows, the member table no longer exists.
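If the goal is only to empty the table rather than remove it for good, the truncate command performs the disable/drop/recreate cycle in one step; a minimal sketch:
# truncate disables the table, drops it and recreates it with the same schema.
hbase shell <<'EOF'
truncate 'member'
EOF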
2.2 Recreate the table structure
Data exported with Export must be loaded back with Import, but the table structure has to be created before the import runs.
hbase(main):069:0> create 'member','member_id','address','info'
0 row(s) in 1.3860 seconds
Before the restore, the freshly created table is empty:
hbase(main):071:0> scan 'member'
ROW COLUMN+CELL
0 row(s) in 0.0220 seconds
Run the Import job from the OS shell:
[root@namenode ~]# hbase org.apache.hadoop.hbase.mapreduce.Import member /tmp/member
After the restore, the data is back:
hbase(main):072:0> scan 'member'
ROW COLUMN+CELL
scutshuxue column=address:city, timestamp=1436753702560, value=hangzhou
scutshuxue column=address:contry, timestamp=1436753702509, value=china
scutshuxue column=address:province, timestamp=1436753702534, value=zhejiang
scutshuxue column=info:age, timestamp=1436753702377, value=24
scutshuxue column=info:birthday, timestamp=1436753702430, value=1987-06-17
scutshuxue column=info:company, timestamp=1436753702472, value=alibaba
xiaofeng column=address:city, timestamp=1436753702760, value=jieyang
xiaofeng column=address:contry, timestamp=1436753702703, value=china
xiaofeng column=address:province, timestamp=1436753702729, value=guangdong
xiaofeng column=address:town, timestamp=1436753702786, value=xianqiao
xiaofeng column=info:birthday, timestamp=1436753702612, value=1987-4-17
xiaofeng column=info:company, timestamp=1436753702678, value=alibaba
xiaofeng column=info:favorite, timestamp=1436753702644, value=movie
2 row(s) in 0.0570 seconds
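The whole recreate-and-import sequence can also be scripted end to end, which makes it easy to repeat on another cluster; a minimal sketch assuming the export still sits under /tmp/member:
# Recreate the schema, replay the exported SequenceFiles, then count rows as a quick check.
hbase shell <<'EOF'
create 'member','member_id','address','info'
EOF
hbase org.apache.hadoop.hbase.mapreduce.Import member /tmp/member
echo "count 'member'" | hbase shell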
2.3 If the restore is on a different cluster, copy the exported files to the host where the restore will run
(In the example below the export was written to /tmp/bak; substitute your own export directory.)
[root@namenode ~]# hadoop fs -get /tmp/bak /Downloads/new
[root@namenode ~]# cd /Downloads/new/
[root@namenode new]# ll
total 4
drwxr-xr-x 2 root root 4096 Jul 13 11:05 bak
[root@namenode new]# cd bak/
[root@namenode bak]# ll
total 4
-rw-r--r-- 1 root root 771 Jul 13 11:05 part-m-00000
-rw-r--r-- 1 root root 0 Jul 13 11:05 _SUCCESS
At the OS level, copy the files to the target machine by whatever means is convenient, then put them back onto HDFS:
[root@namenode bak]# hadoop fs -copyFromLocal /Downloads/new/bak/ /tmp/new
[root@namenode bak]# hadoop fs -ls /tmp/
Found 6 items
drwxrwxrwx - hdfs supergroup 0 2015-07-13 11:11 /tmp/.cloudera_health_monitoring_canary_files
drwxr-xr-x - root supergroup 0 2015-07-13 11:03 /tmp/bak
drwx-wx-wx - hive supergroup 0 2015-07-13 10:06 /tmp/hive
drwxrwxrwt - mapred hadoop 0 2015-07-13 10:03 /tmp/logs
drwxr-xr-x - root supergroup 0 2015-07-13 10:34 /tmp/member
drwxr-xr-x - root supergroup 0 2015-07-13 11:11 /tmp/new
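When the two clusters can reach each other over the network, distcp can copy the export directly from HDFS to HDFS and skip the local download/upload round trip; a sketch (the NameNode addresses are placeholders for your own clusters):
# Copy the exported directory straight between the two HDFS instances.
hadoop distcp hdfs://source-nn:8020/tmp/member hdfs://target-nn:8020/tmp/member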
2.4 For the remaining restore steps, follow section 2 above.
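On the target cluster that usually means recreating the table and pointing Import at the uploaded directory; a minimal sketch assuming the part-m-* files landed directly under /tmp/new:
# Recreate the schema on the target cluster and import from the directory uploaded in step 2.3.
hbase shell <<'EOF'
create 'member','member_id','address','info'
EOF
hbase org.apache.hadoop.hbase.mapreduce.Import member /tmp/new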