小丸子 Learns Hadoop Series: HBase Backup and Recovery

Posted by wxjzqym on 2016-07-25
1. Cold backup of HBase with distcp
--View the source data
[hdpusr01@hadoop1 bin]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016


hbase(main):001:0> list
TABLE                                                                                                                                                                  
member                                                                                                                                                                 
1 row(s) in 0.2860 seconds

=> ["member"]
hbase(main):002:0> scan 'member'
ROW                                        COLUMN+CELL                                                                                                                 
 rowkey-1                                  column=common:city, timestamp=1469089923121, value=beijing                                                                  
 rowkey-1                                  column=person:age, timestamp=1469089899438, value=20                                                                        
 rowkey-2                                  column=common:country, timestamp=1469090319844, value=china                                                                 
 rowkey-2                                  column=person:sex, timestamp=1469090247393, value=man                                                                       
2 row(s) in 0.0940 seconds
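
For readers reproducing the walkthrough, the member table could have been created with commands like the following; this is only a sketch inferred from the scan output above (column families common and person, and the values shown), not the author's original session:

--Sketch: how the member table above could have been created (inferred from the scan output)
hbase> create 'member','common','person'
hbase> put 'member','rowkey-1','common:city','beijing'
hbase> put 'member','rowkey-1','person:age','20'
hbase> put 'member','rowkey-2','common:country','china'
hbase> put 'member','rowkey-2','person:sex','man'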


--Stop HBase
[hdpusr01@hadoop1 bin]$ stop-hbase.sh 
stopping hbase................


--View the distcp help
[hdpusr01@hadoop1 bin]$ hadoop distcp
usage: distcp OPTIONS [source_path...] <target_path>
              OPTIONS
 -append                Reuse existing data in target files and append new
                        data to them if possible
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth <arg>       Specify bandwidth per map in MB
 -delete                Delete from target, files missing in source
 -f <arg>               List of files that need to be copied
 -filelimit <arg>       (Deprecated!) Limit number of files copied to <= n
 -i                     Ignore failures during copy
 -log <arg>             Folder on DFS where distcp execution logs are
                        saved
 -m <arg>               Max number of concurrent maps to use for copy
 -mapredSslConf <arg>   Configuration for ssl config file, to use with
                        hftps://
 -overwrite             Choose to overwrite target files unconditionally,
                        even if they exist.
 -p <arg>               preserve status (rbugpcaxt)(replication,
                        block-size, user, group, permission,
                        checksum-type, ACL, XATTR, timestamps). If -p is
                        specified with no <arg>, then preserves
                        replication, block size, user, group, permission,
                        checksum type and timestamps. raw.* xattrs are
                        preserved when both the source and destination
                        paths are in the /.reserved/raw hierarchy (HDFS
                        only). raw.* xattrpreservation is independent of
                        the -p flag. Refer to the DistCp documentation for
                        more details.
 -sizelimit <arg>       (Deprecated!) Limit number of files copied to <= n
                        bytes
 -skipcrccheck          Whether to skip CRC checks between source and
                        target paths.
 -strategy <arg>        Copy strategy to use. Default is dividing work
                        based on file sizes
 -tmp <arg>             Intermediate work path to be used for atomic
                        commit
 -update                Update target, copying only missingfiles or
                        directories
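
For reference, the options above can be combined to refresh an existing cold backup instead of recopying everything. This is only a sketch, assuming a previous copy already exists at the target path; the -m and -bandwidth values are arbitrary examples, not tuned recommendations:

--Sketch: incrementally refresh an existing cold backup (example values only)
$ hadoop distcp -update -delete -m 10 -bandwidth 50 /hbase /hbasebak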
                        
                        
--Back up HBase
[hdpusr01@hadoop1 bin]$ hdfs dfs -ls /
Found 3 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 02:39 /hbase
drwx-wx-wx   - hdpusr01 supergroup          0 2016-05-23 12:25 /tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-06-25 03:12 /user


[hdpusr01@hadoop1 ~]$ hadoop distcp /hbase /hbasebak
16/07/25 03:07:58 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[/hbase], targetPath=/hbasebak, targetPathExists=false, preserveRawXattrs=false}
16/07/25 03:07:58 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/25 03:08:00 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
16/07/25 03:08:00 INFO Configuration.deprecation: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor
16/07/25 03:08:02 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/25 03:08:03 INFO mapreduce.JobSubmitter: number of splits:13
16/07/25 03:08:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1469176015126_0002
16/07/25 03:08:05 INFO impl.YarnClientImpl: Submitted application application_1469176015126_0002
16/07/25 03:08:05 INFO mapreduce.Job: The url to track the job:
16/07/25 03:08:05 INFO tools.DistCp: DistCp job-id: job_1469176015126_0002
16/07/25 03:08:05 INFO mapreduce.Job: Running job: job_1469176015126_0002
16/07/25 03:08:10 INFO mapreduce.Job: Job job_1469176015126_0002 running in uber mode : false
16/07/25 03:08:10 INFO mapreduce.Job:  map 0% reduce 0%
16/07/25 03:08:25 INFO mapreduce.Job:  map 4% reduce 0%
16/07/25 03:08:26 INFO mapreduce.Job:  map 10% reduce 0%
16/07/25 03:08:28 INFO mapreduce.Job:  map 14% reduce 0%
16/07/25 03:09:02 INFO mapreduce.Job:  map 15% reduce 0%
16/07/25 03:09:19 INFO mapreduce.Job:  map 16% reduce 0%
16/07/25 03:09:20 INFO mapreduce.Job:  map 38% reduce 0%
16/07/25 03:09:21 INFO mapreduce.Job:  map 46% reduce 0%
16/07/25 03:09:34 INFO mapreduce.Job:  map 54% reduce 0%
16/07/25 03:09:35 INFO mapreduce.Job:  map 92% reduce 0%
16/07/25 03:09:38 INFO mapreduce.Job:  map 100% reduce 0%
16/07/25 03:09:39 INFO mapreduce.Job: Job job_1469176015126_0002 completed successfully
16/07/25 03:09:39 INFO mapreduce.Job: Counters: 33
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=1404432
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=57852
                HDFS: Number of bytes written=41057
                HDFS: Number of read operations=378
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=99
...


[hdpusr01@hadoop1 ~]$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 02:39 /hbase
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:09 /hbasebak
drwx-wx-wx   - hdpusr01 supergroup          0 2016-05-23 12:25 /tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-06-25 03:12 /user
[hdpusr01@hadoop1 ~]$ hdfs dfs -ls /hbasebak
Found 8 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/.tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/WALs
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/archive
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/corrupt
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/data
-rw-r--r--   1 hdpusr01 supergroup         42 2016-07-25 03:09 /hbasebak/hbase.id
-rw-r--r--   1 hdpusr01 supergroup          7 2016-07-25 03:09 /hbasebak/hbase.version
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:09 /hbasebak/oldWALs
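
A quick sanity check after the copy is to compare the space used by the source and the backup; this only compares sizes and does not validate file contents:

--Quick check: compare the space used by the source and the backup
$ hdfs dfs -du -s -h /hbase /hbasebak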


--Start HBase
[hdpusr01@hadoop1 bin]$ ./start-hbase.sh 
starting master, logging to /home/hdpusr01/hbase-1.0.3/bin/../logs/hbase-hdpusr01-master-hadoop1.out
starting regionserver, logging to /home/hdpusr01/hbase-1.0.3/bin/../logs/hbase-hdpusr01-1-regionserver-hadoop1.out


--Drop the table in HBase


hbase(main):001:0> list 
TABLE                                                                                                                                                                  
emp
1 row(s) in 0.1850 seconds


=> ["emp"]
hbase(main):002:0> drop 'emp'


ERROR: Table emp is enabled. Disable it first.


Here is some help for this command:
Drop the named table. Table must first be disabled:
  hbase> drop 't1'
  hbase> drop 'ns1:t1'


hbase(main):003:0> disable 'emp'
0 row(s) in 1.1880 seconds

hbase(main):004:0> drop 'emp'
0 row(s) in 4.2010 seconds

hbase(main):005:0> list
TABLE                                                                                                                                                                  
0 row(s) in 0.0060 seconds


=> []


--Stop HBase
[hdpusr01@hadoop1 bin]$ ./stop-hbase.sh 
stopping hbase..................


[hdpusr01@hadoop1 bin]$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:11 /hbase
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:09 /hbasebak
drwx-wx-wx   - hdpusr01 supergroup          0 2016-05-23 12:25 /tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-06-25 03:12 /user
[hdpusr01@hadoop1 bin]$ hdfs dfs -mv /hbase /hbase.old
[hdpusr01@hadoop1 bin]$ hdfs dfs -mv /hbasebak /hbase 
[hdpusr01@hadoop1 bin]$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:09 /hbase
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:11 /hbase.old
drwx-wx-wx   - hdpusr01 supergroup          0 2016-05-23 12:25 /tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-06-25 03:12 /user


--Besides the method above, the data can also be restored with distcp's -overwrite option
[hdpusr01@hadoop1 bin]$ hadoop distcp -overwrite /hbasebak /hbase
16/07/25 03:42:48 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[/hbasebak], targetPath=/hbase, targetPathExists=true, preserveRawXattrs=false}
16/07/25 03:42:48 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/25 03:42:51 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
16/07/25 03:42:51 INFO Configuration.deprecation: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor
16/07/25 03:42:52 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/25 03:43:14 INFO mapreduce.JobSubmitter: number of splits:12
16/07/25 03:43:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1469176015126_0003
16/07/25 03:43:23 INFO impl.YarnClientImpl: Submitted application application_1469176015126_0003
16/07/25 03:43:23 INFO mapreduce.Job: The url to track the job:
16/07/25 03:43:23 INFO tools.DistCp: DistCp job-id: job_1469176015126_0003
16/07/25 03:43:23 INFO mapreduce.Job: Running job: job_1469176015126_0003
16/07/25 03:43:29 INFO mapreduce.Job: Job job_1469176015126_0003 running in uber mode : false
16/07/25 03:43:29 INFO mapreduce.Job:  map 0% reduce 0%
...
16/07/25 03:45:27 INFO mapreduce.Job:  map 82% reduce 0%
16/07/25 03:45:32 INFO mapreduce.Job:  map 92% reduce 0%
16/07/25 03:45:39 INFO mapreduce.Job:  map 100% reduce 0%
16/07/25 03:45:47 INFO mapreduce.Job: Job job_1469176015126_0003 completed successfully
16/07/25 03:45:47 INFO mapreduce.Job: Counters: 33
...


--Start HBase
[hdpusr01@hadoop1 bin]$ ./start-hbase.sh 
starting master, logging to /home/hdpusr01/hbase-1.0.3/bin/../logs/hbase-hdpusr01-master-hadoop1.out
starting regionserver, logging to /home/hdpusr01/hbase-1.0.3/bin/../logs/hbase-hdpusr01-1-regionserver-hadoop1.out


--Verify the data
[hdpusr01@hadoop1 bin]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016


hbase(main):001:0> list
TABLE                                                                                                                                                                  
emp                                                                                                                                                                    
1 row(s) in 0.2090 seconds


=> ["emp"]
hbase(main):002:0> scan 'emp'
ROW                                        COLUMN+CELL                                                                                                                 
 6379                                      column=info:salary, timestamp=1469428164465, value=10000                                                                    
 7822                                      column=info:ename, timestamp=1469428143019, value=tanlei                                                                    
 8899                                      column=info:job, timestamp=1469428186434, value=IT Engineer                                                                 
3 row(s) in 0.1220 seconds

Note: after restoring with distcp's -overwrite option, the data in HBase is displayed correctly, but the metadata for emp under the /hbase/table znode in ZooKeeper is gone; after restoring directly with hdfs dfs -mv, the metadata in ZooKeeper is still present.
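
The ZooKeeper side of that note can be checked with the ZooKeeper client bundled with HBase. A minimal sketch, assuming the default znode parent /hbase (adjust if zookeeper.znode.parent is set differently):

--Sketch: list the table znodes kept in ZooKeeper (assumes the default /hbase parent znode)
$ hbase zkcli ls /hbase/table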




2. Hot backup of HBase with CopyTable
--Create a new table
hbase(main):004:0> create 'emp2','info'
0 row(s) in 0.7300 seconds


--View the CopyTable help
[hdpusr01@hadoop1 ~]$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --help
Usage: CopyTable [general options] [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] <tablename>


Options:
 rs.class     hbase.regionserver.class of the peer cluster specify if different from current cluster
 rs.impl      hbase.regionserver.impl of the peer cluster
 startrow     the start row
 stoprow      the stop row
 starttime    beginning of the time range (unixtime in millis) without endtime means from starttime to forever
 endtime      end of the time range.  Ignored if no starttime specified.
 versions     number of cell versions to copy
 new.name     new table's name
 peer.adr     Address of the peer cluster given in the format hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
 families     comma-separated list of families to copy
              To copy from cf1 to cf2, give sourceCfName:destCfName. 
              To keep the same name, just give "cfName"
 all.cells    also copy delete markers and deleted cells
 bulkload     Write input into HFiles and bulk load to the destination table


Args:
 tablename    Name of the table to copy


Examples:
 To copy 'TestTable' to a cluster that uses replication for a 1 hour window:
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --starttime=1265875194289 --endtime=1265878794289 --peer.adr=server1,server2,server3:2181:/hbase --families=myOldCf:myNewCf,cf2,cf3 TestTable 
For performance consider the following general option:
  It is recommended that you set the following to >=100. A higher value uses more memory but
  decreases the round trip time to the server and may increase performance.
    -Dhbase.client.scanner.caching=100
  The following should always be set to false, to prevent writing data twice, which may produce 
  inaccurate results.
    -Dmapreduce.map.speculative=false


--Copy the data to the new table with CopyTable
[hdpusr01@hadoop1 ~]$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=emp2 emp
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-07-25 07:35:24,955 INFO  [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2016-07-25 07:35:25,065 INFO  [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2016-07-25 07:35:25,244 INFO  [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2016-07-25 07:35:30,765 INFO  [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x50965003 connecting to ZooKeeper ensemble=hadoop1:29181
2016-07-25 07:35:30,770 INFO  [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-07-25 07:35:30,770 INFO  [main] zookeeper.ZooKeeper: Client environment:host.name=hadoop1
2016-07-25 07:35:30,770 INFO  [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_79
2016-07-25 07:35:33,334 INFO  [main] impl.YarnClientImpl: Submitted application application_1469176015126_0006
2016-07-25 07:35:33,360 INFO  [main] mapreduce.Job: The url to track the job:
2016-07-25 07:35:33,360 INFO  [main] mapreduce.Job: Running job: job_1469176015126_0006
2016-07-25 07:35:39,436 INFO  [main] mapreduce.Job: Job job_1469176015126_0006 running in uber mode : false
2016-07-25 07:35:39,437 INFO  [main] mapreduce.Job:  map 0% reduce 0%
2016-07-25 07:35:44,584 INFO  [main] mapreduce.Job:  map 100% reduce 0%
2016-07-25 07:35:58,657 INFO  [main] mapreduce.Job: Job job_1469176015126_0006 completed successfully
...


--Verify the data
hbase(main):005:0> scan 'emp2'
ROW                                        COLUMN+CELL                                                                                                                 
 6379                                      column=info:salary, timestamp=1469428164465, value=10000                                                                    
 7822                                      column=info:ename, timestamp=1469428143019, value=tanlei                                                                    
 8899                                      column=info:job, timestamp=1469428186434, value=IT Engineer                                                                 
3 row(s) in 0.0170 seconds
Note: CopyTable can copy data within the same cluster or between different clusters.
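
For the cross-cluster case, the target cluster is addressed with --peer.adr in the quorum:client-port:znode-parent format shown in the help above, and the scanner-caching and speculative-execution settings recommended there can be passed as -D options. A minimal sketch with placeholder ZooKeeper hosts (zk1,zk2,zk3 and port 2181 are assumptions, not values from this environment):

--Sketch: copy table emp to a peer cluster (placeholder quorum addresses)
$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable -Dhbase.client.scanner.caching=100 -Dmapreduce.map.speculative=false --peer.adr=zk1,zk2,zk3:2181:/hbase emp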



3. Hot backup of HBase with Export
--Back up the table data
[hdpusr01@hadoop1 ~]$ hbase org.apache.hadoop.hbase.mapreduce.Export emp2 /tmp/emp2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-07-25 09:32:57,375 INFO  [main] mapreduce.Export: versions=1, starttime=0, endtime=9223372036854775807, keepDeletedCells=false
2016-07-25 09:32:58,241 INFO  [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2016-07-25 09:32:59,440 INFO  [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39e4ff0c connecting to ZooKeeper ensemble=hadoop1:29181
2016-07-25 09:32:59,446 INFO  [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-07-25 09:32:59,446 INFO  [main] zookeeper.ZooKeeper: Client environment:host.name=hadoop1
2016-07-25 09:32:59,446 INFO  [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_79
...
2016-07-25 09:33:00,795 INFO  [main] mapreduce.Job: The url to track the job:
2016-07-25 09:33:00,796 INFO  [main] mapreduce.Job: Running job: job_1469176015126_0007
2016-07-25 09:33:06,968 INFO  [main] mapreduce.Job: Job job_1469176015126_0007 running in uber mode : false
2016-07-25 09:33:06,970 INFO  [main] mapreduce.Job:  map 0% reduce 0%
2016-07-25 09:33:13,028 INFO  [main] mapreduce.Job:  map 100% reduce 0%
2016-07-25 09:33:13,036 INFO  [main] mapreduce.Job: Job job_1469176015126_0007 completed successfully
...
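
Export also takes optional positional arguments for the number of cell versions and a start/end timestamp range, which makes time-windowed (incremental) dumps possible. A minimal sketch with placeholder timestamps and output path:

--Sketch: export up to 3 versions of cells written within a time window (placeholder values)
$ hbase org.apache.hadoop.hbase.mapreduce.Export emp2 /tmp/emp2_window 3 1469000000000 1469500000000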


--Create a new table
hbase(main):010:0> create 'emp3','info'
0 row(s) in 0.1560 seconds

=> Hbase::Table - emp3


--Import the data into the new table
[hdpusr01@hadoop1 ~]$ hbase  org.apache.hadoop.hbase.mapreduce.Import emp3 /tmp/emp2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-07-25 09:46:30,373 INFO  [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2016-07-25 09:46:31,954 INFO  [main] input.FileInputFormat: Total input paths to process : 1
2016-07-25 09:46:32,019 INFO  [main] mapreduce.JobSubmitter: number of splits:1
2016-07-25 09:46:32,593 INFO  [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1469176015126_0008
2016-07-25 09:46:32,821 INFO  [main] impl.YarnClientImpl: Submitted application application_1469176015126_0008
2016-07-25 09:46:32,851 INFO  [main] mapreduce.Job: The url to track the job:
...
2016-07-25 09:46:38,947 INFO  [main] mapreduce.Job: Job job_1469176015126_0008 running in uber mode : false
2016-07-25 09:46:38,948 INFO  [main] mapreduce.Job:  map 0% reduce 0%
2016-07-25 09:46:44,018 INFO  [main] mapreduce.Job:  map 100% reduce 0%
2016-07-25 09:46:45,030 INFO  [main] mapreduce.Job: Job job_1469176015126_0008 completed successfully
...


--Verify the data
hbase(main):012:0> scan 'emp3'
ROW                                        COLUMN+CELL                                                                                                                 
 6379                                      column=info:salary, timestamp=1469428164465, value=10000                                                                    
 7822                                      column=info:ename, timestamp=1469428143019, value=tanlei                                                                    
 8899                                      column=info:job, timestamp=1469428186434, value=IT Engineer                                                                 
3 row(s) in 0.0230 seconds
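
Since Export writes plain SequenceFiles to HDFS, the dump can also be shipped to another cluster (for example with distcp) and loaded there with Import, provided the target table already exists. A minimal sketch with a placeholder NameNode address (hdfs://nn2:8020 is an assumption):

--Sketch: copy the export to another cluster and import it there (placeholder NameNode address)
$ hadoop distcp /tmp/emp2 hdfs://nn2:8020/tmp/emp2
$ hbase org.apache.hadoop.hbase.mapreduce.Import emp3 /tmp/emp2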


This concludes the introduction to these methods for hot and cold backup of HBase; there are other techniques that can also provide hot backup of HBase, but they are not covered here.

From the "ITPUB Blog"; original link: http://blog.itpub.net/20801486/viewspace-2122530/. Please credit the source when republishing; otherwise legal liability may be pursued.
