Hadoop 2.7 in Practice v1.0: Changing the Replication Setting dfs.replication After Adding a DataNode
1. Check the current replication setting: dfs.replication is 3, meaning each file is stored as 3 replicas
a. By inspecting the hdfs-site.xml file
b. By checking the replication count of existing HDFS files
c. By running hadoop fsck /, which conveniently shows that Average block replication is still 3; this value can be changed dynamically at runtime.
The Default replication factor, by contrast, can only be changed by restarting the entire Hadoop cluster (i.e. set it to 4 in hdfs-site.xml, then restart the cluster for it to take effect — not something you would do on a production cluster).
What actually governs the system, however, is the Average block replication value, so it is not strictly necessary to change the Default replication factor.
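The relationship between the two fsck metrics can be illustrated with a small sketch (plain Python for illustration, not a Hadoop API): the default replication factor is only a write-time fallback, while the average is computed from the actual replica count of every block, so `setrep` moves the average without touching the default.

```python
# Sketch: how fsck's "Average block replication" relates to per-block replica counts.
# The block list below is made up for illustration; a real cluster reports this via fsck.
def average_block_replication(replicas_per_block):
    """Mean number of replicas across all blocks, as fsck reports it."""
    return sum(replicas_per_block) / len(replicas_per_block)

DEFAULT_REPLICATION = 3   # dfs.replication from hdfs-site.xml (write-time default only)

blocks = [4] * 11         # after `hdfs dfs -setrep -w 4 /`, all 11 blocks hold 4 replicas

print(average_block_replication(blocks))   # 4.0 even though the default is still 3
```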
2. Change the HDFS file replication factor
3. Test
4. Modify the hdfs-site.xml parameter on the NameNode
5. Test again
## Try it first without restarting
## [Verified]: there is no need to restart the cluster or the NameNode. The replication count is read from the in-memory state set by the earlier dynamic command (hdfs dfs -setrep -w 4 -R /),
not from the hdfs-site.xml configuration file, which confirms the statement above:
What actually governs the system is the Average block replication value, so it is not strictly necessary to change the Default replication factor.
Summary of commands:
hdfs fsck /
hdfs dfs -setrep -w 4 -R /
a. By inspecting the hdfs-site.xml file
[root@sht-sgmhadoopnn-01 ~]# cd /hadoop/hadoop-2.7.2/etc/hadoop
[root@sht-sgmhadoopnn-01 hadoop]# more hdfs-site.xml
<property>
        <name>dfs.replication</name>
        <value>3</value>
</property>
b. By checking the replication count of existing HDFS files
[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -ls /testdir
Found 7 items
-rw-r--r--   3 root supergroup   37322672 2016-03-05 17:59 /testdir/012_HDFS.avi
-rw-r--r--   3 root supergroup  224001146 2016-03-05 18:01 /testdir/016_Hadoop.avi
-rw-r--r--   3 root supergroup  176633760 2016-03-05 19:11 /testdir/022.avi
-rw-r--r--   3 root supergroup         30 2016-02-28 22:42 /testdir/1.log
-rw-r--r--   3 root supergroup        196 2016-02-28 22:23 /testdir/full_backup.log
-rw-r--r--   3 root supergroup  142039186 2016-03-05 17:55 /testdir/oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm
-rw-r--r--   3 root supergroup         44 2016-02-28 19:40 /testdir/test.log
[root@sht-sgmhadoopnn-01 hadoop]#
### The 3 immediately after the -rw-r--r-- permissions is the number of replicas the file has in HDFS
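That replication column can also be extracted programmatically; a rough sketch (the sample line is copied from the listing above — directories show `-` in that column, so they are skipped):

```python
def replication_of(ls_line):
    """Extract the replication column (2nd field) of an `hdfs dfs -ls` output line.
    Directories show '-' there, so return None for them."""
    fields = ls_line.split()
    return int(fields[1]) if fields[1].isdigit() else None

line = "-rw-r--r--   3 root supergroup   37322672 2016-03-05 17:59 /testdir/012_HDFS.avi"
print(replication_of(line))   # 3
```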
c. hadoop fsck / conveniently shows the Average block replication value, which can be changed dynamically at runtime.
The Default replication factor, by contrast, can only be changed by restarting the entire Hadoop cluster (set it to 4 in hdfs-site.xml, then restart for it to take effect — not practical on a production cluster).
What actually governs the system is the Average block replication value, so it is not strictly necessary to change the Default replication factor.
[root@sht-sgmhadoopnn-01 hadoop]# hdfs fsck /
16/03/06 17:15:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://sht-sgmhadoopnn-01:50070/fsck?ugi=root&path=%2F
FSCK started by root (auth:SIMPLE) from /172.16.101.55 for path / at Sun Mar 06 17:15:29 CST 2016
............Status: HEALTHY
 Total size:    580151839 B
 Total dirs:    15
 Total files:   12
 Total symlinks:        0
 Total blocks (validated):      11 (avg. block size 52741076 B)
 Minimally replicated blocks:   11 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          4
 Number of racks:               1
FSCK ended at Sun Mar 06 17:15:29 CST 2016 in 9 milliseconds

The filesystem under path '/' is HEALTHY
You have mail in /var/spool/mail/root
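When scripting this check, the two replication figures can be pulled out of the fsck report with a couple of regular expressions; a minimal sketch (the sample text mirrors the report above):

```python
import re

def fsck_metrics(fsck_output):
    """Extract 'Default replication factor' and 'Average block replication'
    from the text report produced by `hdfs fsck /`."""
    default = int(re.search(r"Default replication factor:\s+(\d+)", fsck_output).group(1))
    average = float(re.search(r"Average block replication:\s+([\d.]+)", fsck_output).group(1))
    return default, average

sample = " Default replication factor:    3\n Average block replication:     3.0\n"
print(fsck_metrics(sample))   # (3, 3.0)
```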
2. Change the HDFS file replication factor
[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -help
-setrep [-R] [-w] <rep> <path> ... :
  Set the replication level of a file. If <path> is a directory then the command
  recursively changes the replication factor of all files under the directory tree
  rooted at <path>.

  -w  It requests that the command waits for the replication to complete. This
      can potentially take a very long time.
  -R  It is accepted for backwards compatibility. It has no effect.

[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -setrep -w 4 -R /
setrep: `-R': No such file or directory
Replication 4 set: /out1/_SUCCESS
Replication 4 set: /out1/part-r-00000
Replication 4 set: /testdir/012_HDFS.avi
Replication 4 set: /testdir/016_Hadoop.avi
Replication 4 set: /testdir/022.avi
Replication 4 set: /testdir/1.log
Replication 4 set: /testdir/full_backup.log
Replication 4 set: /testdir/oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm
Replication 4 set: /testdir/test.log
Replication 4 set: /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1456590271264_0002-1456659654297-root-word+count-1456659679606-1-1-SUCCEEDED-root.root-1456659662730.jhist
Replication 4 set: /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1456590271264_0002.summary
Replication 4 set: /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1456590271264_0002_conf.xml
Waiting for /out1/_SUCCESS ... done
Waiting for /out1/part-r-00000 .... done
Waiting for /testdir/012_HDFS.avi ... done
Waiting for /testdir/016_Hadoop.avi ... done
Waiting for /testdir/022.avi ... done
Waiting for /testdir/1.log ... done
Waiting for /testdir/full_backup.log ... done
Waiting for /testdir/oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm ... done
Waiting for /testdir/test.log ... done
Waiting for /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1456590271264_0002-1456659654297-root-word+count-1456659679606-1-1-SUCCEEDED-root.root-1456659662730.jhist ... done
Waiting for /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1456590271264_0002.summary ... done
Waiting for /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1456590271264_0002_conf.xml ... done
[root@sht-sgmhadoopnn-01 hadoop]#
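Before waiting on `setrep -w` across a large tree, it is worth estimating how much extra disk the change will consume; with the 580151839 B total size reported by fsck above, going from 3 to 4 replicas costs roughly one more full copy of the data (a back-of-the-envelope sketch, ignoring block padding):

```python
def extra_storage_bytes(total_size, old_rep, new_rep):
    """Additional raw HDFS storage consumed when every file's replication
    changes from old_rep to new_rep (rough estimate: one copy per extra replica)."""
    return total_size * (new_rep - old_rep)

TOTAL = 580_151_839   # "Total size" from the fsck report above
print(extra_storage_bytes(TOTAL, 3, 4))   # 580151839 -> about 553 MiB more raw storage
```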
## Check the replication again: Average block replication is now 4
[root@sht-sgmhadoopnn-01 hadoop]# hdfs fsck /
16/03/06 17:25:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://sht-sgmhadoopnn-01:50070/fsck?ugi=root&path=%2F
FSCK started by root (auth:SIMPLE) from /172.16.101.55 for path / at Sun Mar 06 17:25:51 CST 2016
............Status: HEALTHY
 Total size:    580151839 B
 Total dirs:    15
 Total files:   12
 Total symlinks:        0
 Total blocks (validated):      11 (avg. block size 52741076 B)
 Minimally replicated blocks:   11 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     4.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          4
 Number of racks:               1
FSCK ended at Sun Mar 06 17:25:51 CST 2016 in 6 milliseconds

The filesystem under path '/' is HEALTHY
3. Test
[root@sht-sgmhadoopnn-01 hadoop]# vi /tmp/wjp.log
hello,i am
hadoop
hdfs
mapreduce
yarn
hive
zookeeper

[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -put /tmp/wjp.log /testdir
[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -ls /testdir
Found 8 items
-rw-r--r--   4 root supergroup   37322672 2016-03-05 17:59 /testdir/012_HDFS.avi
-rw-r--r--   4 root supergroup  224001146 2016-03-05 18:01 /testdir/016_Hadoop.avi
-rw-r--r--   4 root supergroup  176633760 2016-03-05 19:11 /testdir/022.avi
-rw-r--r--   4 root supergroup         30 2016-02-28 22:42 /testdir/1.log
-rw-r--r--   4 root supergroup        196 2016-02-28 22:23 /testdir/full_backup.log
-rw-r--r--   4 root supergroup  142039186 2016-03-05 17:55 /testdir/oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm
-rw-r--r--   4 root supergroup         44 2016-02-28 19:40 /testdir/test.log
-rw-r--r--   3 root supergroup         62 2016-03-06 17:30 /testdir/wjp.log
[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -rm /testdir/wjp.log
16/03/06 17:31:47 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://mycluster/testdir/wjp.log' to trash at: hdfs://mycluster/user/root/.Trash/Current
[root@sht-sgmhadoopnn-01 hadoop]#
### The newly put test file wjp.log still has a replication count of 3, so delete it first, then go modify the hdfs-site.xml parameter on the NameNode
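The reason wjp.log came in with 3 replicas is that dfs.replication is applied by the *client* at file-creation time: an explicit per-file value wins, otherwise the client's configured default. A hypothetical helper to make that rule explicit (not a Hadoop API, just the decision logic):

```python
def effective_replication(client_conf, requested=None):
    """Replication applied when a file is created: an explicit per-file value
    wins; otherwise the client's dfs.replication (HDFS ships 3 as the default).
    Illustrative sketch only, not a Hadoop API."""
    return requested if requested is not None else client_conf.get("dfs.replication", 3)

conf = {"dfs.replication": 3}   # the client still reads 3 from its hdfs-site.xml
print(effective_replication(conf))       # 3 -> exactly why wjp.log got 3 replicas
print(effective_replication(conf, 4))    # 4 when set explicitly, e.g. -D dfs.replication=4
```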
4. Modify the hdfs-site.xml parameter on the NameNode
[root@sht-sgmhadoopnn-01 hadoop]# vi hdfs-site.xml
<property>
        <name>dfs.replication</name>
        <value>4</value>
</property>
[root@sht-sgmhadoopnn-01 hadoop]# scp hdfs-site.xml root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2/etc/hadoop
### If the cluster is configured with NameNode HA, the file must be kept in sync on the other, standby NameNode as well; there is no need to push it to the DataNodes
5. Test again
## Try it first without restarting
[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -put /tmp/wjp.log /testdir
16/03/06 17:36:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
You have mail in /var/spool/mail/root
[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -ls /testdir
16/03/06 17:36:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 8 items
-rw-r--r--   4 root supergroup   37322672 2016-03-05 17:59 /testdir/012_HDFS.avi
-rw-r--r--   4 root supergroup  224001146 2016-03-05 18:01 /testdir/016_Hadoop.avi
-rw-r--r--   4 root supergroup  176633760 2016-03-05 19:11 /testdir/022.avi
-rw-r--r--   4 root supergroup         30 2016-02-28 22:42 /testdir/1.log
-rw-r--r--   4 root supergroup        196 2016-02-28 22:23 /testdir/full_backup.log
-rw-r--r--   4 root supergroup  142039186 2016-03-05 17:55 /testdir/oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm
-rw-r--r--   4 root supergroup         44 2016-02-28 19:40 /testdir/test.log
-rw-r--r--   4 root supergroup         62 2016-03-06 17:36 /testdir/wjp.log

[root@sht-sgmhadoopnn-01 hadoop]# hdfs fsck /
16/03/06 21:49:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://sht-sgmhadoopnn-01:50070/fsck?ugi=root&path=%2F
FSCK started by root (auth:SIMPLE) from /172.16.101.55 for path / at Sun Mar 06 21:49:12 CST 2016
...............Status: HEALTHY
 Total size:    580152025 B
 Total dirs:    17
 Total files:   15
 Total symlinks:        0
 Total blocks (validated):      14 (avg. block size 41439430 B)
 Minimally replicated blocks:   14 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     4.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          4
 Number of racks:               1
FSCK ended at Sun Mar 06 21:49:12 CST 2016 in 8 milliseconds
## [Verified]: there is no need to restart the cluster or the NameNode. The replication count is read from the in-memory state set by the earlier dynamic command, not from the hdfs-site.xml configuration file, which confirms the statement above:
What actually governs the system is the Average block replication value, so it is not strictly necessary to change the Default replication factor.
Summary of commands:
hdfs fsck /
hdfs dfs -setrep -w 4 /
### Note: the -R in the transcript above (hdfs dfs -setrep -w 4 -R /) was parsed as a path, hence the "`-R': No such file or directory" message; -setrep recurses into directories automatically, so -R can simply be dropped.
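For automation, the two summary commands can be built as argument lists and handed to `subprocess.run`; a minimal sketch (invocation against a live cluster left to the caller):

```python
def fsck_cmd(path="/"):
    """Argument list for `hdfs fsck <path>`."""
    return ["hdfs", "fsck", path]

def setrep_cmd(rep, path="/", wait=True):
    """Argument list for `hdfs dfs -setrep [-w] <rep> <path>`.
    -R is omitted on purpose: directories are always handled recursively,
    and placing -R after the number makes setrep treat it as a path."""
    cmd = ["hdfs", "dfs", "-setrep"]
    if wait:
        cmd.append("-w")
    cmd += [str(rep), path]
    return cmd

print(" ".join(setrep_cmd(4)))   # hdfs dfs -setrep -w 4 /
```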
From the ITPUB blog: http://blog.itpub.net/30089851/viewspace-2047825/ (please credit the source when reproducing).