Hadoop error: could only be replicated to 0 nodes, instead of 1
1. The error
[root@hadoop test]# hadoop jar hadoop.jar com.hadoop.hdfs.CopyToHDFS
14/01/26 10:20:00 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/test/01/hello.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
14/01/26 10:20:00 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
14/01/26 10:20:00 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/test/01/hello.txt" - Aborting...
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/test/01/hello.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
14/01/26 10:20:00 ERROR hdfs.DFSClient: Failed to close file /user/hadoop/test/01/hello.txt
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/test/01/hello.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    ...ient.java:2989)
2. Root cause and solution
Cause:
Formatting Hadoop more than once leaves the version information (the namespaceID) on the NameNode out of sync with the DataNodes; bringing them back to a consistent state resolves the problem.
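The "version information" here is the namespaceID recorded in the VERSION file under each storage directory, so comparing the NameNode's copy against a DataNode's shows the mismatch directly. A minimal check, with hypothetical paths (substitute your actual dfs.name.dir and dfs.data.dir values):

# On the NameNode (path is an assumption -- use your dfs.name.dir):
cat /hadoop/name/current/VERSION
# On a DataNode (use your dfs.data.dir):
cat /hadoop/logdata/current/VERSION
# The two namespaceID lines must match; if they differ, writes fail as above.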
Solution:
1. Stop all services first: stop-all.sh
2. Format the NameNode: hadoop namenode -format
3. Restart all services: start-all.sh
4. Normal operations now work again (see the consolidated sketch below).
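Putting the steps together, the whole reset is three commands. A minimal sketch, assuming a Hadoop 1.x layout with the bin/ scripts on the PATH; note that reformatting the NameNode discards the existing HDFS namespace, so this only suits clusters whose data is disposable:

# 1. Stop every Hadoop daemon
stop-all.sh
# 2. Reformat the NameNode (destroys existing HDFS metadata; confirm when prompted)
hadoop namenode -format
# 3. Bring all daemons back up
start-all.sh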
Note:
The format step itself can easily fail. The reasons for that failure and how to resolve it are not covered in detail here; if you run into it, see this post: http://blog.csdn.net/yangkai_hudong/article/details/18731395
3. Other related solutions found online
1. Symptom: while Flume is writing files to Hadoop HDFS, flume.log reports "could only be replicated to 0 nodes, instead of 1":
2012-12-18 13:47:24,673 WARN hdfs.BucketWriter: Caught IOException writing to HDFSWriter (java.io.IOException: File /logdata/20121218/bj4aweb04/8001_4A_ACA/8001_4A_ACA.1355799411582.tmp could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
2. Check the status of the related processes: the DataNode did not start properly
[hadoop@dtydb6 hadoop]$ jps
7427 Jps
7253 TaskTracker
6576 NameNode
6925 SecondaryNameNode
7079 JobTracker
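DataNode is conspicuously absent from the jps output. The NameNode's own view of the cluster confirms this; a quick check, assuming a Hadoop 1.x installation:

# Ask the NameNode for its cluster summary; a DataNode count of 0 here
# matches the "replicated to 0 nodes" error above.
hadoop dfsadmin -report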
3. Check the DataNode log: Incompatible namespaceIDs
java.io.IOException: Incompatible namespaceIDs in /hadoop/logdata: namenode namespaceID = 13513664; datanode namespaceID = 525507667
4. The error message pins the problem down to mismatched namespaceIDs.
As in the reference above, the cause is that formatting Hadoop multiple times left the version information inconsistent; making it consistent again fixes the problem.
The fix is simple, with two options:
1) Delete all the DataNode data directories and rebuild them (a hassle, but your call).
2) Log in to each DataNode and change the namespaceID in {dfs.data.dir}/current/VERSION to the NameNode's current value, as in the sed sketch after the VERSION listing below.
[hadoop@dtydb6 current]$ cat VERSION
#Fri Dec 14 09:37:22 CST 2012
namespaceID=525507667
storageID=DS-120876865-10.4.124.236-50010-1354772633249
cTime=0
storageType=DATA_NODE
layoutVersion=-32
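For option 2 the edit is a one-liner per DataNode, run while the DataNode is stopped. A sketch assuming dfs.data.dir is /hadoop/logdata as in the log above, using the NameNode namespaceID reported there:

# Overwrite the DataNode's namespaceID with the NameNode's value (13513664,
# taken from the Incompatible-namespaceIDs message), then restart the DataNode.
sed -i 's/^namespaceID=.*/namespaceID=13513664/' /hadoop/logdata/current/VERSION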
5. Restart Hadoop; the DataNode now starts successfully:
[hadoop@dtydb6 current]$ jps
8770 JobTracker
8436 DataNode
8266 NameNode
8614 SecondaryNameNode
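With the DataNode registered again, writes should succeed. A quick sanity check, reusing the path from the error at the top of this post (file contents are arbitrary):

# Upload a small file and read it back to confirm HDFS accepts writes again.
# (Remove any leftover half-written file first: hadoop fs -rm <path>)
echo hello > hello.txt
hadoop fs -put hello.txt /user/hadoop/test/01/hello.txt
hadoop fs -cat /user/hadoop/test/01/hello.txt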