Two common errors when setting up Hadoop: "Bad connection to FS. command aborted. exception" and "Shutting down NameNode at hadoop"
1. The problem:
The error message reads:
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:8888 failed on connection exception: java.net.ConnectException: Connection refused: no further information
At first I suspected the FS service had simply not started, but stopping and restarting it several times made no difference. After asking someone more experienced, I was advised to reformat the NameNode, and that fixed it.
The format command is as follows (run from Hadoop's bin directory):
$ ./hadoop namenode -format
After the format succeeds, restart Hadoop and the error should be gone. A quick way to verify is sketched below.
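As a sanity check before (or after) reformatting, it helps to confirm whether the NameNode process is actually up and listening. A minimal sketch, assuming a Hadoop 1.x single-node setup where fs.default.name points at localhost:9000 (the port seen in the retry log below):

# Is the NameNode daemon running at all? (jps ships with the JDK)
$ jps | grep -i NameNode
# Is anything listening on the NameNode RPC port? 9000 is an assumption;
# use whatever fs.default.name specifies in conf/core-site.xml.
$ netstat -an | grep 9000
# If both look fine, a simple HDFS command should work without connection errors.
$ bin/hadoop fs -ls /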
2. If the error still persists, delete the filesystem data manually. The symptom looks like this:
$ bin/hadoop fs -ls /
11/08/18 17:02:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/08/18 17:02:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/08/18 17:02:39 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/08/18 17:02:41 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/08/18 17:02:43 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/08/18 17:02:45 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/08/18 17:02:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/08/18 17:02:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/08/18 17:02:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
11/08/18 17:02:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
Bad connection to FS. command aborted.
For the "Bad connection to FS. command aborted." error, delete all of the DFS data on your DataNode and then reformat the NameNode. That is, first delete everything under the tmp directory on your D: drive, then repeat the procedure from step 1 above; a sketch of the full sequence follows.
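A minimal sketch of that clean-and-reformat sequence on a Hadoop 1.x single-node install; the data path is an assumption, so substitute whatever hadoop.tmp.dir (or dfs.name.dir / dfs.data.dir) points to in your configuration:

# Stop all daemons first so nothing is holding the old data.
$ bin/stop-all.sh
# Remove the old NameNode/DataNode data. The path is only an example;
# on the Windows/Cygwin setup above it would be the tmp directory on D:.
$ rm -rf /usr/java/hadoop/tmp/dfs
# Reformat the NameNode (answer an uppercase Y if prompted) and restart.
$ bin/hadoop namenode -format
$ bin/start-all.sh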
Why the Hadoop format failed
[root@hadoop home]# hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
14/01/24 11:41:59 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop/192.168.174.174
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.1.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
Re-format filesystem in /usr/java/hadoop/tmp/dfs/name ? (Y or N) y
Format aborted in /usr/java/hadoop/tmp/dfs/name
14/01/24 11:42:04 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.174.174
************************************************************/
Hadoop was then started, but the NameNode web page at http://hdp0:5007 would not display.
The fix was to delete the entire /usr/java/hadoop/tmp/dfs directory and then format again, which succeeded. (The earlier format was most likely aborted because the confirmation prompt in Hadoop 1.x is case-sensitive: the lowercase "y" typed above is not accepted as "Y"; once the directory has been deleted, no prompt appears at all.)
[root@hadoop dfs]# hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
14/01/24 11:44:59 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop/192.168.174.174
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.1.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
14/01/24 11:44:59 INFO util.GSet: VM type = 32-bit
14/01/24 11:44:59 INFO util.GSet: 2% max memory = 19.33375 MB
14/01/24 11:44:59 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/01/24 11:44:59 INFO util.GSet: recommended=4194304, actual=4194304
14/01/24 11:45:00 INFO namenode.FSNamesystem: fsOwner=root
14/01/24 11:45:00 INFO namenode.FSNamesystem: supergroup=supergroup
14/01/24 11:45:00 INFO namenode.FSNamesystem: isPermissionEnabled=false
14/01/24 11:45:00 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/01/24 11:45:00 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/01/24 11:45:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/01/24 11:45:00 INFO common.Storage: Image file of size 110 saved in 0 seconds.
14/01/24 11:45:00 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/java/hadoop/tmp/dfs/name/current/edits
14/01/24 11:45:00 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/java/hadoop/tmp/dfs/name/current/edits
14/01/24 11:45:01 INFO common.Storage: Storage directory /usr/java/hadoop/tmp/dfs/name has been successfully formatted.
14/01/24 11:45:01 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.174.174
************************************************************/

A successful format on another install (Hadoop 0.20.2 on host hdp0) produces similar output:

/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hdp0/192.168.221.100
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
11/04/12 15:33:30 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare
11/04/12 15:33:30 INFO namenode.FSNamesystem: supergroup=supergroup
11/04/12 15:33:30 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/04/12 15:33:31 INFO common.Storage: Image file of size 96 saved in 0 seconds.
11/04/12 15:33:31 INFO common.Storage: Storage directory /home/hadoop/dfs/name has been successfully formatted.
11/04/12 15:33:31 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hdp0/192.168.221.100
************************************************************/
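The directory that has to be wiped differs per install (a tmp directory on D: in the first example, /usr/java/hadoop/tmp/dfs and /home/hadoop/dfs in the logs above), so it is worth confirming the actual paths from the configuration before deleting anything. A minimal sketch for a Hadoop 1.x layout, assuming the usual conf/core-site.xml and conf/hdfs-site.xml files:

# hadoop.tmp.dir is the default parent of dfs/name and dfs/data;
# dfs.name.dir and dfs.data.dir override it when set.
$ grep -A 1 "hadoop.tmp.dir" conf/core-site.xml
$ grep -A 1 -E "dfs.name.dir|dfs.data.dir" conf/hdfs-site.xml
# fs.default.name is the NameNode address clients try to reach
# (localhost:9000 in the retry log near the top of this article).
$ grep -A 1 "fs.default.name" conf/core-site.xml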