Accessing HDFS through the API
Operating on HDFS through the API
Today's main topics
Obtaining the file system
Uploading files to HDFS
Downloading files from HDFS
Creating directories on HDFS
Deleting directories on HDFS
Renaming files on HDFS
Viewing file details on HDFS
Positioned file reads
Studying the FileSystem class
1. Obtaining the file system

```java
// Obtain the file system
@Test
public void initHDFS() throws Exception {
    // 1. Obtain the file system
    Configuration configuration = new Configuration();
    FileSystem fileSystem = FileSystem.get(configuration);
    // 2. Print the file system to the console
    System.out.println(fileSystem.toString());
}
```
2. Uploading a file to HDFS (testing configuration priority)

```java
@Test
public void putFileToHdfs() throws Exception {
    Configuration conf = new Configuration();
    conf.set("dfs.replication", "2");    // values set in code take the highest priority
    conf.set("fs.defaultFS", "hdfs://10.9.190.111:9000");
    FileSystem fileSystem = FileSystem.get(conf);
    // Upload the file
    fileSystem.copyFromLocalFile(new Path("hdfs.txt"), new Path("/user/anna/hdfs/test.txt"));
    // Release resources
    fileSystem.close();
}
```

Configuration priority: (1) values set in client code > (2) user-defined configuration files on the classpath > (3) the server-side defaults.
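The same layering idea can be sketched without a Hadoop cluster using plain `java.util.Properties`; the key and values below are illustrative stand-ins, not real Hadoop defaults:

```java
import java.util.Properties;

public class ConfigPriorityDemo {
    // Resolve a key through three layers: code overrides > classpath config > server defaults.
    static String resolve(String key, Properties code, Properties classpath, Properties defaults) {
        if (code.containsKey(key)) return code.getProperty(key);
        if (classpath.containsKey(key)) return classpath.getProperty(key);
        return defaults.getProperty(key);
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("dfs.replication", "3");   // server default
        Properties classpath = new Properties();
        classpath.setProperty("dfs.replication", "2");  // e.g. hdfs-site.xml on the classpath
        Properties code = new Properties();
        code.setProperty("dfs.replication", "1");       // conf.set(...) in client code

        System.out.println(resolve("dfs.replication", code, classpath, defaults)); // prints 1
    }
}
```

Removing a layer (e.g. never calling `conf.set`) simply lets the next layer win, which is exactly the behavior the test above demonstrates against a cluster.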
3. Downloading a file from HDFS

public void copyToLocalFile(boolean delSrc, Path src, Path dst, boolean useRawLocalFileSystem) throws IOException
- delSrc: whether to delete the source
- src: source path (on HDFS)
- dst: destination path (local)
- useRawLocalFileSystem: whether to use RawLocalFileSystem as the local file system or not

```java
@Test
public void testCopyToLocalFile() throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.111:9000");
    FileSystem fileSystem = FileSystem.get(conf);
    // Download the file
    fileSystem.copyToLocalFile(false, new Path("/user/anna/hdfs/test.txt"), new Path("test.txt"), true);
    // Release resources
    fileSystem.close();
}
```
4. Creating a directory on HDFS

```java
@Test
public void testMakedir() throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.111:9000");
    FileSystem fileSystem = FileSystem.get(conf);
    // Create the directory
    fileSystem.mkdirs(new Path("/user/anna/test/hahaha"));
    // Release resources
    fileSystem.close();
}
```
5. Deleting a directory on HDFS

```java
@Test
public void testDelete() throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.111:9000");
    FileSystem fileSystem = FileSystem.get(conf);
    // Delete the directory; true means recursive deletion
    fileSystem.delete(new Path("/user/anna/test/hahaha"), true);
    // Release resources
    fileSystem.close();
}
```
6. Renaming a file on HDFS

```java
@Test
public void testRename() throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.111:9000");
    FileSystem fileSystem = FileSystem.get(conf);
    // Rename the file
    fileSystem.rename(new Path("/user/anna/test/copy.txt"), new Path("/user/anna/test/copyRename.txt"));
    // Release resources
    fileSystem.close();
}
```
7. Viewing file details on HDFS

Available methods

1. public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException, IOException
   - returns a FileStatus array
2. public FileStatus[] listStatus(Path f, PathFilter filter) throws FileNotFoundException, IOException
3. public FileStatus[] listStatus(Path[] files, PathFilter filter) throws FileNotFoundException, IOException
   - PathFilter is an interface with a single method, accept; it essentially filters files.
   - "Enumerate all files found in the list of directories passed in, calling listStatus(path, filter) on each one."

Note: these methods return files sorted in alphabetical order.
Code: FileStatus[] listStatus(Path f)

```java
// Using FileStatus[] listStatus(Path f)
try {
    // Establish the connection to HDFS
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the FileSystem
    FileSystem fileSystem = FileSystem.get(conf);
    // listStatus retrieves the entries under /test
    FileStatus[] fileStatuses = fileSystem.listStatus(new Path("/test"));
    // Iterate over the files in the directory
    for (FileStatus fileStatus : fileStatuses) {
        System.out.println(fileStatus.getPath() + "   "
                + new Date(fileStatus.getAccessTime()) + "   "
                + fileStatus.getBlockSize() + "   "
                + fileStatus.getPermission());
    }
} catch (Exception e) {
    e.printStackTrace();
}
/* Output under JDK 1.8:
----------------------------------------------------------------------------
hdfs://10.9.190.90:9000/test/hadoop-2.7.3.tar.gz   2012-07-26   134217728   rw-r--r--
hdfs://10.9.190.90:9000/test/hello.txt             2012-07-26   134217728   rw-r--r--
hdfs://10.9.190.90:9000/test/test2                 1970-01-01   0           rwxr-xr-x
---------------------------------------------------------------------------- */
```
Code: FileStatus[] listStatus(Path f, PathFilter filter)

```java
try {
    // Establish the connection to HDFS
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the FileSystem
    FileSystem fileSystem = FileSystem.get(conf);
    // List files in the directory whose names end in .md
    FileStatus[] statuses = fileSystem.listStatus(new Path("/test/test2"), new PathFilter() {
        @Override
        public boolean accept(Path path) {
            return path.toString().endsWith(".md");
        }
    });
    // Print the file information
    for (FileStatus status : statuses) {
        System.out.println("Path : " + status.getPath()
                + "  Permission : " + status.getPermission()
                + "  Replication : " + status.getReplication());
    }
} catch (Exception e) {
    e.printStackTrace();
}
```
8. Positioned file reads
9. Studying the FileSystem class
Studying the FileSystem class
-
Today's main topics
Studying the FileSystem class against the official documentation
-
Methods in FileSystem

* boolean exists(Path p)
* boolean isDirectory(Path p)
* boolean isFile(Path p)
* FileStatus getFileStatus(Path p)
* Path getHomeDirectory()
* FileStatus[] listStatus(Path path, PathFilter filter)
  FileStatus[] listStatus(Path path)
  FileStatus[] listStatus(Path[] paths, PathFilter filter)
  FileStatus[] listStatus(Path[] paths)
* RemoteIterator<LocatedFileStatus> listLocatedStatus(Path path, PathFilter filter)
  RemoteIterator<LocatedFileStatus> listLocatedStatus(Path path)
  RemoteIterator<LocatedFileStatus> listFiles(Path path, boolean recursive)
* BlockLocation[] getFileBlockLocations(FileStatus f, int s, int l)
  BlockLocation[] getFileBlockLocations(Path p, int s, int l)
* long getDefaultBlockSize()
  long getDefaultBlockSize(Path p)
  long getBlockSize(Path p)
* boolean mkdirs(Path p, FsPermission permission)
* FSDataOutputStream create(Path, ...)
  FSDataOutputStream append(Path p, int bufferSize, Progressable progress)
  FSDataInputStream open(Path f, int bufferSize)
* boolean delete(Path p, boolean recursive)
* boolean rename(Path src, Path d)
* void concat(Path p, Path[] sources)
* boolean truncate(Path p, long newLength)
* interface RemoteIterator: boolean hasNext(), E next()
* interface StreamCapabilities: boolean hasCapability(String capability)
Preparation
Start the Hadoop cluster with start-dfs.sh
-
Accessing the HDFS file system from Eclipse
Import the required jar packages
-
Establishing the connection to HDFS and obtaining a FileSystem object

The second approach may throw InterruptedException. Moreover, per the specification: "the static FileSystem get(URI uri, Configuration conf,String user) method MAY return a pre-existing instance of a filesystem client class—a class that may also be in use in other threads. The implementations of FileSystem shipped with Apache Hadoop do not make any attempt to synchronize access to the working directory field." In other words, get() may return a FileSystem instance that is already in use by other threads, and access to its working directory is not synchronized, so prefer the first approach for creating the FileSystem object.
-
First approach

* public static FileSystem get(Configuration conf) throws IOException

```java
// Establish the connection to HDFS
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");  // the NameNode's IP address; port 9000
// Obtain the FileSystem
FileSystem fileSystem = FileSystem.get(conf);
```
-
Second approach

* public static FileSystem get(URI uri, Configuration conf, String user) throws IOException, InterruptedException

```java
URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
FileSystem fileSystem = FileSystem.get(new URI("hdfs://10.9.190.90:9000"), new Configuration(), "root");
// The working directory becomes /user/root accordingly
```
Comparing the two approaches
Introduction to org.apache.hadoop.fs.FileSystem
"The abstract FileSystem class is the original class to access Hadoop filesystems; non-abstract subclasses exist for all Hadoop-supported filesystems." (The abstract base class FileSystem defines the operations on Hadoop file systems.)
-
"All operations that take a Path to this interface MUST support relative paths. In such a case, they must be resolved relative to the working directory defined by setWorkingDirectory()." (Relative paths are resolved against the working directory set by setWorkingDirectory().)
getWorkingDirectory() in FileSystem returns the current working directory.
-
Code

```java
// Connect to the HDFS file system
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
// Obtain the file system object
FileSystem fileSystem = FileSystem.get(conf);
// Get the current working directory
System.out.println("========= current working directory =========");
System.out.println(fileSystem.getWorkingDirectory());
// Set a new working directory; Path plays the same role for HDFS as java.io.File does locally
fileSystem.setWorkingDirectory(new Path("hdfs://10.9.190.90:9000/user/anna"));
System.out.println("========= working directory after the change =========");
System.out.println(fileSystem.getWorkingDirectory());
```
-
Output

```
========= current working directory =========
hdfs://10.9.190.90:9000/user/root
========= working directory after the change =========
hdfs://10.9.190.90:9000/user/anna
```
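The resolution of relative paths against a working directory can be sketched locally with java.nio (the directory names here are illustrative; this is not the Hadoop implementation):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class WorkingDirDemo {
    // Resolve a possibly-relative path against a working directory,
    // mirroring how FileSystem resolves relative Paths via setWorkingDirectory().
    static Path resolve(Path workingDir, String p) {
        Path path = Paths.get(p);
        return path.isAbsolute() ? path : workingDir.resolve(path).normalize();
    }

    public static void main(String[] args) {
        Path wd = Paths.get("/user/root");
        System.out.println(resolve(wd, "test/data.txt")); // /user/root/test/data.txt
        System.out.println(resolve(wd, "/tmp/x.txt"));    // /tmp/x.txt
    }
}
```

Changing the working directory only changes how later relative paths resolve; absolute paths are unaffected, which matches the output above.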
FileSystem methods: predicates
-
Background
import org.apache.hadoop.fs.Path; similar to java.io.File, it represents a path in HDFS.
Methods

* public boolean exists(Path f) throws IOException (does the path exist)
* public boolean isDirectory(Path f) throws IOException (is the path a directory)
* public boolean isFile(Path f) throws IOException (is the path a file)
Exercise

```java
try {
    // Connect to the HDFS file system
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the file system object
    FileSystem fileSystem = FileSystem.get(conf);
    // Does the file exist?
    System.out.println(fileSystem.exists(new Path("/test")));      // true
    // Is it a directory?
    System.out.println(fileSystem.isDirectory(new Path("/test"))); // true
    // Is it a file?
    System.out.println(fileSystem.isFile(new Path("/test")));      // false
} catch (Exception e) {
    e.printStackTrace();
}
```
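These predicates map one-to-one onto java.nio.file.Files for a local file system, which makes a cluster-free sketch easy to run (the temp directory stands in for an HDFS path):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PredicateDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("test");  // stand-in for an HDFS directory
        System.out.println(Files.exists(dir));         // true
        System.out.println(Files.isDirectory(dir));    // true
        System.out.println(Files.isRegularFile(dir));  // false
    }
}
```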
FileSystem methods: retrieval - file information

Methods

* public Path getHomeDirectory()
  "Return the current user's home directory in this FileSystem. The default implementation returns '/user/$USER/'." (Returns the current user's home directory.)
* public abstract FileStatus getFileStatus(Path f) throws IOException
  "Return a file status object that represents the path." (The return type is FileStatus.)
Exercise

```java
try {
    // Connect to the HDFS file system
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    FileSystem fileSystem = FileSystem.get(conf);
    // The current user's home directory
    System.out.println("======== current user's home directory ========");
    Path path = fileSystem.getHomeDirectory();
    System.out.println(path);
    // The file status object
    System.out.println("============ file information ================");
    FileStatus status = fileSystem.getFileStatus(new Path("/eclipse"));
    System.out.println("Path : " + status.getPath());
    System.out.println("isFile ? " + status.isFile());
    System.out.println("Block size : " + status.getBlockSize());
    System.out.println("Permissions : " + status.getPermission());
    System.out.println("Replication : " + status.getReplication());
    System.out.println("isSymlink : " + status.isSymlink());
} catch (Exception e) {
    e.printStackTrace();
}
/* Output under JDK 1.8:
------------------------------------------------
======== current user's home directory ========
hdfs://10.9.190.90:9000/user/anna
============ file information ================
Path : hdfs://10.9.190.90:9000/eclipse
isFile ? true
Block size : 134217728
Permissions : rw-r--r--
Replication : 3
isSymlink : false
------------------------------------------------ */
```
Common methods of FileStatus
public Path getPath()
public boolean isFile()
public boolean isSymlink()
public long getBlockSize()
public short getReplication()
public FsPermission getPermission()
FileSystem methods: retrieval - directory listing (1)

Methods

* public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException, IOException (returns a FileStatus array)
* public FileStatus[] listStatus(Path f, PathFilter filter) throws FileNotFoundException, IOException
* public FileStatus[] listStatus(Path[] files, PathFilter filter) throws FileNotFoundException, IOException

PathFilter is an interface with a single method, accept; it essentially filters files. "Enumerate all files found in the list of directories passed in, calling listStatus(path, filter) on each one."
Note: these methods return files sorted in alphabetical order.
Exercise 1: using FileStatus[] listStatus(Path f)

```java
// Using FileStatus[] listStatus(Path f)
try {
    // Establish the connection to HDFS
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the FileSystem
    FileSystem fileSystem = FileSystem.get(conf);
    // listStatus retrieves the entries under /test
    FileStatus[] fileStatuses = fileSystem.listStatus(new Path("/test"));
    // Iterate over the files in the directory
    for (FileStatus fileStatus : fileStatuses) {
        System.out.println(fileStatus.getPath() + "   "
                + new Date(fileStatus.getAccessTime()) + "   "
                + fileStatus.getBlockSize() + "   "
                + fileStatus.getPermission());
    }
} catch (Exception e) {
    e.printStackTrace();
}
/* Output under JDK 1.8:
----------------------------------------------------------------------------
hdfs://10.9.190.90:9000/test/hadoop-2.7.3.tar.gz   2012-07-26   134217728   rw-r--r--
hdfs://10.9.190.90:9000/test/hello.txt             2012-07-26   134217728   rw-r--r--
hdfs://10.9.190.90:9000/test/test2                 1970-01-01   0           rwxr-xr-x
---------------------------------------------------------------------------- */
```
Exercise 2: using FileStatus[] listStatus(Path f, PathFilter filter)
Requirement: list information about the files under /test/test2 whose names end in .md
-
Code:

```java
try {
    // Establish the connection to HDFS
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the FileSystem
    FileSystem fileSystem = FileSystem.get(conf);
    // List files in the directory whose names end in .md
    FileStatus[] statuses = fileSystem.listStatus(new Path("/test/test2"), new PathFilter() {
        @Override
        public boolean accept(Path path) {
            return path.toString().endsWith(".md");
        }
    });
    // Print the file information
    for (FileStatus status : statuses) {
        System.out.println("Path : " + status.getPath()
                + "  Permission : " + status.getPermission()
                + "  Replication : " + status.getReplication());
    }
} catch (Exception e) {
    e.printStackTrace();
}
```
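The accept-based filtering pattern is the same one java.io uses with FilenameFilter; a runnable local sketch (the file names are illustrative):

```java
import java.io.File;
import java.io.FilenameFilter;
import java.io.IOException;
import java.nio.file.Files;

public class FilterDemo {
    // Same shape as PathFilter.accept(Path): keep only names ending in .md
    static File[] markdownFiles(File dir) {
        return dir.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(File d, String name) {
                return name.endsWith(".md");
            }
        });
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("test2").toFile();
        new File(dir, "Map.md").createNewFile();
        new File(dir, "haha.txt").createNewFile();
        for (File f : markdownFiles(dir)) {
            System.out.println(f.getName()); // Map.md
        }
    }
}
```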
Caveat
"By the time the listStatus() operation returns to the caller, there is no guarantee that the information contained in the response is current. The details MAY be out of date, including the contents of any directory, the attributes of any files, and the existence of the path supplied." (The listing is only a snapshot; it may already be stale by the time it is returned.)
FileSystem methods: retrieval - directory listing (2)

Methods

public RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f) throws FileNotFoundException, IOException
protected RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f, PathFilter filter) throws FileNotFoundException, IOException

Note: the filtered overload is protected (accessible from the same class, from the same package, and from subclasses in other packages).
Note: LocatedFileStatus is a subclass of FileStatus.
Usage

```java
try {
    // Establish the connection to HDFS
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the FileSystem
    FileSystem fileSystem = FileSystem.get(conf);
    // List the entries under /test/test2
    RemoteIterator<LocatedFileStatus> iterator = fileSystem.listLocatedStatus(new Path("/test/test2"));
    while (iterator.hasNext()) {
        LocatedFileStatus status = iterator.next();
        System.out.println("Path : " + status.getPath()
                + "  Permission : " + status.getPermission()
                + "  Replication : " + status.getReplication());
    }
} catch (Exception e) {
    e.printStackTrace();
}
/* Output under JDK 1.8:
---------------------------------------------------------------------------------------------
Path : hdfs://10.9.190.90:9000/test/test2/Map.md    Permission : rw-r--r--  Replication : 3
Path : hdfs://10.9.190.90:9000/test/test2/biji.md   Permission : rw-r--r--  Replication : 3
Path : hdfs://10.9.190.90:9000/test/test2/haha.txt  Permission : rw-r--r--  Replication : 3
--------------------------------------------------------------------------------------------- */
```
Differences from listStatus(Path p):
- listStatus returns a FileStatus[] array, which is traversed with a for-each loop.
- listLocatedStatus(Path p) returns a RemoteIterator<LocatedFileStatus>, which is traversed through the iterator.
- Note, however, that listLocatedStatus() is internally implemented on top of listStatus(Path p).
FileSystem methods: retrieval - directory listing (3)

Methods

Recursively lists the contents of a directory, including the contents of its subdirectories:

public RemoteIterator<LocatedFileStatus> listFiles(Path f, boolean recursive) throws FileNotFoundException, IOException
Usage

```java
try {
    // Establish the connection to HDFS
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the FileSystem
    FileSystem fileSystem = FileSystem.get(conf);
    // Recursively list all files under /test
    RemoteIterator<LocatedFileStatus> iterator = fileSystem.listFiles(new Path("/test"), true);
    while (iterator.hasNext()) {
        LocatedFileStatus status = iterator.next();
        System.out.println("Path : " + status.getPath()
                + "  Permission : " + status.getPermission()
                + "  Replication : " + status.getReplication());
    }
} catch (Exception e) {
    e.printStackTrace();
}
/* Output under JDK 1.8:
---------------------------------------------------------------------------------------------------
Path : hdfs://10.9.190.90:9000/test/hadoop-2.7.3.tar.gz  Permission : rw-r--r--  Replication : 3
Path : hdfs://10.9.190.90:9000/test/hello.txt            Permission : rw-r--r--  Replication : 3
Path : hdfs://10.9.190.90:9000/test/test2/Map.md         Permission : rw-r--r--  Replication : 3
Path : hdfs://10.9.190.90:9000/test/test2/biji.md        Permission : rw-r--r--  Replication : 3
Path : hdfs://10.9.190.90:9000/test/test2/haha.txt       Permission : rw-r--r--  Replication : 3
--------------------------------------------------------------------------------------------------- */
```
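Recursive listing has a direct local analogue in Files.walk; a cluster-free sketch (the directory layout mirrors the /test example above but is created on the fly):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class RecursiveListDemo {
    // Files.walk plays the role of listFiles(path, recursive=true):
    // it yields entries from the whole subtree, not just the top level.
    static List<Path> allFiles(Path root) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths.filter(Files::isRegularFile).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("test");          // stand-in for /test
        Path sub = Files.createDirectory(root.resolve("test2"));
        Files.createFile(root.resolve("hello.txt"));
        Files.createFile(sub.resolve("Map.md"));
        for (Path p : allFiles(root)) {
            System.out.println(root.relativize(p));
        }
    }
}
```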
FileSystem methods: retrieval - file block locations

Methods
public BlockLocation[] getFileBlockLocations(Path p, long start, long len) throws IOException
public BlockLocation[] getFileBlockLocations(FileStatus file, long start, long len) throws IOException
Usage

```java
// Show where the blocks of /test/hadoop are stored
try {
    // Establish the connection to HDFS
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the FileSystem
    FileSystem fileSystem = FileSystem.get(conf);
    FileStatus status = fileSystem.getFileStatus(new Path("/test/hadoop"));
    BlockLocation[] locations = fileSystem.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation location : locations) {
        // getHosts() and getNames() return String[]; wrap them in Arrays.toString,
        // otherwise only the array references are printed
        System.out.println("host : " + Arrays.toString(location.getHosts())
                + "  name : " + Arrays.toString(location.getNames())
                + "  length : " + location.getLength());
    }
} catch (Exception e) {
    e.printStackTrace();
}
/* The original run under JDK 1.8 printed the raw array references:
------------------------------------------------------------------------------
host : [Ljava.lang.String;@18ece7f4  name : [Ljava.lang.String;@3cce57c7  length : 134217728
host : [Ljava.lang.String;@1cf56a1c  name : [Ljava.lang.String;@33f676f6  length : 79874467
------------------------------------------------------------------------------
With Arrays.toString the actual host names and block addresses are shown instead. */
```
FileSystem methods: retrieval - obtaining an output stream to a file

Methods

"Create an FSDataOutputStream at the indicated Path with write-progress reporting. Files are overwritten by default."
overwrite: if a file with this name already exists, then if true the file will be overwritten, and if false an exception will be thrown.

public FSDataOutputStream create(Path f) throws IOException
public FSDataOutputStream create(Path f, boolean overwrite) throws IOException
public FSDataOutputStream create(Path f, Progressable progress) throws IOException
public FSDataOutputStream create(Path f, boolean overwrite, int bufferSize) throws IOException
public FSDataOutputStream create(Path f, boolean overwrite, int bufferSize, Progressable progress) throws IOException
public FSDataOutputStream append(Path p, int bufferSize, Progressable progress) throws IOException
Usage: upload the local file E:/hzy.jpg to /1.jpg on HDFS

```java
public static void main(String[] args) {
    BufferedInputStream in = null;
    FSDataOutputStream out = null;
    try {
        // Establish the connection to HDFS
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
        // Obtain the FileSystem
        FileSystem fileSystem = FileSystem.get(conf);
        // Input stream for the local file
        File file = new File("E:/hzy.jpg");
        in = new BufferedInputStream(new FileInputStream(file));
        // Output stream to /1.jpg on HDFS, with a progress callback
        out = fileSystem.create(new Path("/1.jpg"), new Progressable() {
            @Override
            public void progress() {
                // Called periodically as data is flushed; Hadoop does not specify how
                // often, so counting these calls cannot give a byte-accurate percentage.
                // It only signals that the upload is still making progress.
                System.out.println("still uploading ...");
            }
        });
        // Copy (IOUtils.copyBytes(in, out, conf) would do the same)
        int b;
        while ((b = in.read()) != -1) {
            out.write(b);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (in != null) {
            try { in.close(); } catch (IOException e) { e.printStackTrace(); }
        }
        if (out != null) {
            try { out.close(); } catch (IOException e) { e.printStackTrace(); }
        }
    }
}
```
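A byte-accurate progress percentage has to be computed from the bytes we copy ourselves, and with floating-point division; a cluster-free sketch (ByteArrayInputStream stands in for the local file):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ProgressCopyDemo {
    // Copy in chunks and report percentage from bytes actually copied.
    static long copyWithProgress(InputStream in, OutputStream out, long totalBytes) throws IOException {
        byte[] buf = new byte[4096];
        long copied = 0;
        int len;
        while ((len = in.read(buf)) != -1) {
            out.write(buf, 0, len);
            copied += len;
            // Floating-point division is essential here: the integer expression
            // (copied / totalBytes) * 100 would stay 0 until the very last chunk.
            System.out.printf("progress: %.1f%%%n", copied * 100.0 / totalBytes);
        }
        return copied;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10000];
        long copied = copyWithProgress(new ByteArrayInputStream(data), new ByteArrayOutputStream(), data.length);
        System.out.println("copied " + copied + " bytes");
    }
}
```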
FileSystem methods: retrieval - obtaining an input stream - reading a file

Methods
public FSDataInputStream open(Path f) throws IOException
public abstract FSDataInputStream open(Path f, int bufferSize) throws IOException
Usage: copy /1.jpg on HDFS to the local file E:/hzyCopy.jpg

```java
try {
    // Establish the connection to HDFS
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://10.9.190.90:9000");
    // Obtain the FileSystem
    FileSystem fileSystem = FileSystem.get(conf);
    // Input stream for the HDFS file
    FSDataInputStream in = fileSystem.open(new Path("/1.jpg"));
    // Local output stream
    BufferedOutputStream out = new BufferedOutputStream(
            new FileOutputStream(new File("E:/hzyCopy.jpg")));
    int len = 0;
    byte[] bArr = new byte[1024 * 3];
    while ((len = in.read(bArr)) != -1) {
        out.write(bArr, 0, len);
    }
    in.close();
    out.close();
} catch (Exception e) {
    e.printStackTrace();
}
```
FileSystem methods: creation
public boolean mkdirs(Path f) throws IOException
FileSystem methods: deletion
public abstract boolean delete(Path f,boolean recursive) throws IOException
Note: deletion involves thread-synchronization issues.
FileSystem methods: renaming
public abstract boolean rename(Path src,Path dst)throws IOException
Other FileSystem methods
-
public void concat(Path trg, Path[] psrcs) throws IOException
"Concat existing files together."
public boolean truncate(Path f, long newLength) throws IOException
interface RemoteIterator
-
Definition

```java
public interface RemoteIterator<E> {
    boolean hasNext() throws IOException;
    E next() throws IOException;
}
```
The primary use of RemoteIterator in the filesystem APIs is to list files on (possibly remote) filesystems.
Usage

```java
// listLocatedStatus(Path f)
public RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f)
        throws FileNotFoundException, IOException
// listLocatedStatus(Path f, PathFilter filter)
protected RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f, PathFilter filter)
        throws FileNotFoundException, IOException
// listStatusIterator(Path p)
public RemoteIterator<FileStatus> listStatusIterator(Path p)
        throws FileNotFoundException, IOException
// listFiles(Path f, boolean recursive)
public RemoteIterator<LocatedFileStatus> listFiles(Path f, boolean recursive)
        throws FileNotFoundException, IOException
```
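A minimal in-memory implementation makes the contract concrete; the interface below copies the definition above, while the `over` adapter is a hypothetical helper for illustration, not part of Hadoop:

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class RemoteIteratorDemo {
    // The interface as defined above: hasNext()/next() may perform remote I/O,
    // hence the checked IOException that java.util.Iterator lacks.
    interface RemoteIterator<E> {
        boolean hasNext() throws IOException;
        E next() throws IOException;
    }

    // Hypothetical adapter backing a RemoteIterator with an in-memory list.
    static <E> RemoteIterator<E> over(List<E> items) {
        Iterator<E> it = items.iterator();
        return new RemoteIterator<E>() {
            public boolean hasNext() { return it.hasNext(); }
            public E next() {
                if (!it.hasNext()) throw new NoSuchElementException();
                return it.next();
            }
        };
    }

    public static void main(String[] args) throws IOException {
        RemoteIterator<String> iterator = over(List.of("/test/hello.txt", "/test/test2/Map.md"));
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
    }
}
```

The while-hasNext loop is exactly the traversal shape used with listLocatedStatus and listFiles above.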
interface StreamCapabilities
-
Methods

```java
public interface StreamCapabilities {
    boolean hasCapability(String capability);
}
```
-
Usage
This interface does not exist in Hadoop 2.7.3; it was introduced in 2.9.1.
Author: 須臾之北
Source: ITPUB blog, http://blog.itpub.net/36/viewspace-2816924/. Please credit the source when reposting.