The Hadoop DataNode fails to start.
The log shows the following:
2014-12-22 12:08:27,264 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50075
2014-12-22 12:08:27,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = root
2014-12-22 12:08:27,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2014-12-22 12:08:32,865 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-12-22 12:08:32,889 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-12-22 12:08:32,931 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-12-22 12:08:32,945 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-12-22 12:08:32,968 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2014-12-22 12:08:32,992 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020 starting to offer service
2014-12-22 12:08:33,001 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-12-22 12:08:33,003 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-12-22 12:08:33,536 INFO org.apache.hadoop.hdfs.server.common.Storage: DataNode version: -56 and NameNode layout version: -60
2014-12-22 12:08:33,699 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 17247@henry-ThinkPad-T400
2014-12-22 12:08:33,706 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020. Exiting.
java.io.IOException: Incompatible clusterIDs in /home/hadoop/tmp/dfs/data: namenode clusterID = CID-19f887ba-2e8d-4c7e-ae01-e38a30581693; datanode clusterID = CID-14aac0b3-3c32-45db-adb8-b5fc494eaa3d
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
at java.lang.Thread.run(Thread.java:662)
2014-12-22 12:08:33,716 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020
2014-12-22 12:08:33,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2014-12-22 12:08:35,718 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-12-22 12:08:35,720 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-12-22 12:08:35,722 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at henry-ThinkPad-T400/127.0.0.1
************************************************************/
The FATAL entry and the IOException that follows pinpoint the problem:
the DataNode's clusterID does not match the NameNode's clusterID.
Solution:
Following the path in the log, cd /home/hadoop/tmp/dfs.
There you will see two directories, data and name.
Copy the clusterID from the VERSION file under name/current into the VERSION file under data/current, overwriting the old clusterID so the two values are identical.
Then restart Hadoop. Once it is up, run jps to check the processes:
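The steps above can be scripted. Here is a minimal sketch; it is demonstrated against mock VERSION files so it runs anywhere — on the real node, point DFS_DIR at /home/hadoop/tmp/dfs (the path from the log) instead of the mktemp directory:

```shell
#!/bin/sh
# Mock setup for demonstration; on a real node use:
#   DFS_DIR=/home/hadoop/tmp/dfs
DFS_DIR=$(mktemp -d)
mkdir -p "$DFS_DIR/name/current" "$DFS_DIR/data/current"
echo 'clusterID=CID-19f887ba-2e8d-4c7e-ae01-e38a30581693' > "$DFS_DIR/name/current/VERSION"
echo 'clusterID=CID-14aac0b3-3c32-45db-adb8-b5fc494eaa3d' > "$DFS_DIR/data/current/VERSION"

# Extract the NameNode's clusterID and overwrite the DataNode's copy in place.
CID=$(grep '^clusterID=' "$DFS_DIR/name/current/VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=$CID/" "$DFS_DIR/data/current/VERSION"

# Verify the two files now agree before restarting the DataNode.
NN_CID=$(grep '^clusterID=' "$DFS_DIR/name/current/VERSION")
DN_CID=$(grep '^clusterID=' "$DFS_DIR/data/current/VERSION")
[ "$NN_CID" = "$DN_CID" ] && echo "clusterIDs match"
```

Stop the cluster before editing the file, since the DataNode holds a lock on in_use.lock while running.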
20131 SecondaryNameNode
20449 NodeManager
19776 NameNode
21123 Jps
19918 DataNode
20305 ResourceManager
Cause of the problem: after DFS was formatted the first time, Hadoop was started and used, and then the format command (hdfs namenode -format) was run again. Reformatting regenerates the NameNode's clusterID, while the DataNode's clusterID stays unchanged, so the two no longer match.
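For reference, a DataNode's VERSION file looks roughly like the fragment below (the clusterID and layoutVersion are taken from the log above; the other values are illustrative placeholders). Only the clusterID line needs to be brought in line with the NameNode's:

```
#Mon Dec 22 12:08:33 CST 2014
storageID=DS-example
clusterID=CID-14aac0b3-3c32-45db-adb8-b5fc494eaa3d
cTime=0
datanodeUuid=example-uuid
storageType=DATA_NODE
layoutVersion=-56
```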
Copyright notice: this is an original article by the blogger; do not reproduce without the blogger's permission.