HBase: A Region Stuck in the RIT State (Region Chain Hole)

Posted by weixin_34378969 on 2018-03-19

Quick note:

The HBase web UI showed that one region had been stuck in the following state for a long time:

app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3. state=PENDING_OPEN, ts=Wed Mar 14 21:22:10 CST 2018 (396447s ago), server=yq-hadoop184132,60020,1520836279511

Regions in Transition: so a RIT had appeared.
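Besides the web UI, the RIT can also be confirmed from ZooKeeper. A minimal check, assuming the default zookeeper.znode.parent of /hbase (adjust the path if the cluster overrides it):

# list the znodes of regions currently in transition; the encoded region name
# 3eb41df715cdd0f9a2b0ce6550b586b3 should show up here while the region is stuck
hbase zkcli ls /hbase/region-in-transition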
Next, we ran hbase hbck to check the table's consistency:

ERROR: Region { meta => app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3., hdfs => hdfs://yq-hadoop19:8020/hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3, deployed => , replicaId => 0 } not deployed on any region server.
18/03/19 11:40:08 INFO util.HBaseFsck: Handling overlap merges in parallel. set hbasefsck.overlap.merge.parallel to false to run serially.
ERROR: There is a hole in the region chain between 02 and 03.  You need to create a new .regioninfo and region dir in hdfs to plug the hole.
ERROR: Found inconsistency in table app_user_isnew

The check shows a hole in the table's region chain; the region we located is listed below (columns: region name, RegionServer, STARTKEY, ENDKEY):

| app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3. | yq-hadoop184140:60020 | 02 | 03 |
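On a large cluster a full hbck pass can take a while. As a side note, hbck accepts table names as positional arguments and a -details flag, so the check can be scoped to just the affected table; a sketch:

# re-run the consistency check only for app_user_isnew, with a per-region report
sudo -u hbase hbase hbck -details app_user_isnew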

We then checked the HBase metadata (hbase:meta); the entry for this region does exist:

hbase(main):053:0> get 'hbase:meta','app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3.'
COLUMN                                                    CELL                                                                                                                                                                  
 info:regioninfo                                          timestamp=1517807391090, value={ENCODED => 3eb41df715cdd0f9a2b0ce6550b586b3, NAME => 'app_user_isnew,02,1517807389209.3eb41df715cdd0f9a2b0ce6550b586b3.', STARTKEY => 
                                                          '02', ENDKEY => '03'}                                                                                                                                                 
 info:seqnumDuringOpen                                    timestamp=1520449077356, value=\x00\x00\x00\x00\x00\x00\x00\x13                                                                                                       
 info:server                                              timestamp=1520449077356, value=yq-hadoop184140:60020                                                                                                                  
 info:serverstartcode                                     timestamp=1520449077356, value=1520214106665                                                                                                                          
4 row(s) in 0.0070 seconds
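To see the whole region chain of the table in hbase:meta rather than a single row, all of its regions can be scanned by row prefix. A minimal sketch, assuming a shell version that supports ROWPREFIXFILTER:

# each region's STARTKEY should match the previous region's ENDKEY;
# a gap between '02' and '03' here would mean the hole is in meta itself,
# not only in the assignment
echo "scan 'hbase:meta', {ROWPREFIXFILTER => 'app_user_isnew,', COLUMNS => ['info:regioninfo']}" | hbase shell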

We also checked the HDFS directory; the .regioninfo file exists as well:

$ hdfs dfs -ls /hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3
Found 3 items
-rw-r--r--   3 hbase hbase         53 2018-02-05 13:09 /hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3/.regioninfo
drwxr-xr-x   - hbase hbase          0 2018-02-05 13:09 /hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3/f1
drwxr-xr-x   - hbase hbase          0 2018-03-08 02:57 /hbase/data/default/app_user_isnew/3eb41df715cdd0f9a2b0ce6550b586b3/recovered.edits

Since the region is present both in hbase:meta and in the HDFS directory, the problem is most likely with its assignment, so we ran the following command:

hbase hbck -fixAssignments
This command repairs regions that are unassigned, wrongly assigned, or assigned to more than one RegionServer.
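For reference, -fixAssignments only re-assigns regions; if the hole had also existed in hbase:meta or on HDFS, hbck offers further repair options. A hedged sketch of the usual escalation, only needed when the check still reports the hole after fixing assignments:

# rebuild missing or stale meta rows from the region directories found on HDFS
sudo -u hbase hbase hbck -fixMeta app_user_isnew
# create empty region directories on HDFS to plug holes in the region chain
sudo -u hbase hbase hbck -fixHdfsHoles app_user_isnew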

In our case, the output of running -fixAssignments was:

# hbase hbck -fixAssignments
18/03/19 14:14:19 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
HBaseFsck command line options: -fixAssignments
18/03/19 14:14:19 WARN util.HBaseFsck: Got AccessDeniedException when preCheckPermission 
org.apache.hadoop.hbase.security.AccessDeniedException: Permission denied: action=WRITE path=hdfs://yq-hadoop19:8020/hbase/.hbase-snapshot user=hdfs
    at org.apache.hadoop.hbase.util.FSUtils.checkAccess(FSUtils.java:1797)
    at org.apache.hadoop.hbase.util.HBaseFsck.preCheckPermission(HBaseFsck.java:1929)
    at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4731)
    at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:4559)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:4547)
Current user hdfs does not have write perms to hdfs://yq-hadoop19:8020/hbase/.hbase-snapshot. Please rerun hbck as hdfs user hbase

The error shows that the command should be run as the hbase user, so we reran it accordingly:

sudo -u hbase hbase hbck -fixAssignments

After running it as the hbase user, the result was:
0 inconsistencies detected.
Status: OK
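To verify, the check can be re-run against just this table, and a read routed to the previously stuck region serves as a smoke test; the RIT entry should also disappear from the Master web UI:

sudo -u hbase hbase hbck app_user_isnew            # should again report 0 inconsistencies
echo "get 'app_user_isnew', '02'" | hbase shell    # routes a read to the re-opened region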

Solved!

