Two big Hadoop errors: "Bad connection to FS. command aborted. exception" and "Shutting down NameNode at hadoop"
1. The first problem: the error message reads

Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:8888 failed on connection exception: java.net.ConnectException: Connection refused: no further information
At first I suspected the FS service had simply not started, but stopping and restarting it several times did not help. After asking someone more experienced, I was advised to reformat the NameNode, which fixed it.
The format command is as follows (run from Hadoop's bin directory):
$ ./hadoop namenode -format
Once it succeeds, restart Hadoop.
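Before reformatting (which destroys the existing HDFS metadata), it is worth confirming whether the NameNode process is actually up and its RPC port reachable. A minimal sketch, assuming the RPC address is localhost:9000 as in the retry log below; adjust the host and port to your own fs.default.name:

```shell
# Quick health check before resorting to a reformat.
# Assumption: NameNode RPC at localhost:9000 (matches the log in this post).
NN_HOST=localhost
NN_PORT=9000

# Is a NameNode JVM running at all? (jps ships with the JDK)
jps 2>/dev/null | grep -w NameNode || echo "no NameNode process found"

# Is anything listening on the RPC port? (/dev/tcp is a bash feature)
if (exec 3<>"/dev/tcp/${NN_HOST}/${NN_PORT}") 2>/dev/null; then
    echo "${NN_HOST}:${NN_PORT} is reachable"
else
    echo "${NN_HOST}:${NN_PORT} refused -- NameNode is down or bound elsewhere"
fi
```

If the process is running but the port is refused, the problem is usually the configured address rather than corrupt metadata.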
2. If the error persists, delete the files manually. A simple command still fails with endless retries:
$ bin/hadoop fs -ls /
11/08/18 17:02:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/08/18 17:02:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/08/18 17:02:39 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/08/18 17:02:41 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/08/18 17:02:43 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/08/18 17:02:45 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/08/18 17:02:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/08/18 17:02:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/08/18 17:02:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
11/08/18 17:02:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
Bad connection to FS. command aborted.
The error shown is "Bad connection to FS. command aborted."
Delete all the DFS data on your DataNode, then reformat the NameNode. That is: first delete everything under the tmp directory (under the D: drive in my case), then repeat the steps from point 1 above.
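When the client keeps retrying localhost:9000 as in the log above, it also pays to check that the address the client dials matches what the NameNode was actually started with. A quick look at the config; the path below assumes the usual Hadoop 1.x layout (/usr/java/hadoop from the format log later in this post), so adjust it to your install:

```shell
# Show the filesystem URI the client will dial (Hadoop 1.x config layout).
# Assumption: conf dir under /usr/java/hadoop -- change to your HADOOP_HOME.
HADOOP_CONF=${HADOOP_CONF:-/usr/java/hadoop/conf}
grep -A 2 "fs.default.name" "$HADOOP_CONF/core-site.xml" 2>/dev/null \
    || echo "core-site.xml not found under $HADOOP_CONF"
# The host:port printed here must match the NameNode's bind address,
# otherwise every call ends in "Connection refused".
```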
Why the Hadoop format failed
[root@hadoop home]# hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
14/01/24 11:41:59 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop/192.168.174.174
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.1.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
Re-format filesystem in /usr/java/hadoop/tmp/dfs/name ? (Y or N) y
Format aborted in /usr/java/hadoop/tmp/dfs/name
14/01/24 11:42:04 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.174.174
************************************************************/
Then, after starting Hadoop, http://hdp0:5007 would not load. Note the lowercase "y" typed at the "(Y or N)" prompt above: Hadoop 1.x accepts only an uppercase "Y" there, so the format was silently aborted and the stale metadata stayed in place. Deleting the entire /usr/java/hadoop/tmp/dfs directory and then formatting again succeeded:
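The whole recovery can be scripted. This is only a sketch: the paths follow the log above (/usr/java/hadoop), the wipe destroys all HDFS data, and stop-all.sh/start-all.sh are the Hadoop 1.x control scripts:

```shell
#!/bin/sh
# Wipe-and-reformat sketch for Hadoop 1.x -- DESTROYS ALL HDFS DATA.
# Assumption: install under /usr/java/hadoop as in the log above.
HADOOP_HOME=${HADOOP_HOME:-/usr/java/hadoop}
DFS_DIR=${DFS_DIR:-$HADOOP_HOME/tmp/dfs}

if [ -x "$HADOOP_HOME/bin/hadoop" ]; then
    "$HADOOP_HOME/bin/stop-all.sh"     # stop all daemons first
    rm -rf "$DFS_DIR"                  # remove stale name/ and data/ dirs
    # Uppercase Y: a lowercase y makes Hadoop 1.x abort the format
    echo Y | "$HADOOP_HOME/bin/hadoop" namenode -format
    "$HADOOP_HOME/bin/start-all.sh"
else
    echo "hadoop not found under $HADOOP_HOME; set HADOOP_HOME and retry"
fi
```

After a fresh format there is no "(Y or N)" prompt at all, so the piped Y only matters when an old name directory still exists.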
[root@hadoop dfs]# hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
14/01/24 11:44:59 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop/192.168.174.174
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.1.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
14/01/24 11:44:59 INFO util.GSet: VM type = 32-bit
14/01/24 11:44:59 INFO util.GSet: 2% max memory = 19.33375 MB
14/01/24 11:44:59 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/01/24 11:44:59 INFO util.GSet: recommended=4194304, actual=4194304
14/01/24 11:45:00 INFO namenode.FSNamesystem: fsOwner=root
14/01/24 11:45:00 INFO namenode.FSNamesystem: supergroup=supergroup
14/01/24 11:45:00 INFO namenode.FSNamesystem: isPermissionEnabled=false
14/01/24 11:45:00 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/01/24 11:45:00 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/01/24 11:45:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/01/24 11:45:00 INFO common.Storage: Image file of size 110 saved in 0 seconds.
14/01/24 11:45:00 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/java/hadoop/tmp/dfs/name/current/edits
14/01/24 11:45:00 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/java/hadoop/tmp/dfs/name/current/edits
14/01/24 11:45:01 INFO common.Storage: Storage directory /usr/java/hadoop/tmp/dfs/name has been successfully formatted.
14/01/24 11:45:01 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.174.174
************************************************************/

For comparison, here is a successful format on an older Hadoop 0.20.2 node (hdp0):

/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hdp0/192.168.221.100
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
11/04/12 15:33:30 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare
11/04/12 15:33:30 INFO namenode.FSNamesystem: supergroup=supergroup
11/04/12 15:33:30 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/04/12 15:33:31 INFO common.Storage: Image file of size 96 saved in 0 seconds.
11/04/12 15:33:31 INFO common.Storage: Storage directory /home/hadoop/dfs/name has been successfully formatted.
11/04/12 15:33:31 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hdp0/192.168.221.100
************************************************************/