12 Spark on YARN
Spark can be deployed in several ways; the main options are Local, Standalone, Mesos, and YARN.
(1) If you only need to run Spark on a single machine, e.g. for learning, the Local mode is sufficient.
(2) For a real cluster you can use Standalone, Mesos, or YARN. Standalone is Spark's built-in cluster manager, while Mesos and YARN are external resource-scheduling frameworks. In production, YARN is usually chosen because it integrates easily with an existing Hadoop system and makes cluster management and resource sharing simpler. Depending on where the Driver runs, Spark on YARN is further divided into yarn-cluster (Spark on YARN-Cluster) and yarn-client (Spark on YARN-Client) mode; the sketch after this list shows the --master value for each deployment option.
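As a rough illustration only (my_app.py and the host names are placeholders, not taken from this article), the same application can be pointed at each cluster manager just by changing --master:
spark-submit --master local[4] my_app.py                    # Local: 4 threads on one machine
spark-submit --master spark://master:7077 my_app.py         # Standalone, Spark's built-in manager
spark-submit --master mesos://master:5050 my_app.py         # Mesos
spark-submit --master yarn --deploy-mode cluster my_app.py  # YARN, yarn-cluster mode
spark-submit --master yarn --deploy-mode client my_app.py   # YARN, yarn-client mode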
YARN is Hadoop's next-generation resource manager; it provides operating-system-level scheduling for the frameworks that run on top of it.
The main components of the YARN architecture (the CLI commands after this list show how to inspect them on a running cluster):
ResourceManager(RM)
NodeManager(NM)
ApplicationMaster(AM)
Container
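On a live cluster these components can be observed with the standard YARN command-line tools, assuming the Hadoop binaries are on the PATH:
yarn node -list                           # NodeManagers registered with the ResourceManager
yarn application -list                    # running applications and their ApplicationMasters
yarn application -status <applicationId>  # detailed report for a single application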
Spark on YARN deployment modes
To deploy Spark on YARN, make sure HADOOP_CONF_DIR or YARN_CONF_DIR (which can be set in spark-env.sh) points to the directory on the client that contains the Hadoop cluster configuration. Spark uses these files to connect to the YARN ResourceManager and to write data to HDFS. The configuration files in this directory are distributed to the YARN cluster so that all of the containers used by the application work with the same configuration.
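In a typical setup this just means adding something like the following to spark-env.sh (the path is a placeholder; point it at wherever your cluster's Hadoop configuration actually lives):
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf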
Depending on where the Spark Driver runs, an application launched on YARN uses one of two deploy modes: yarn-cluster or yarn-client.
(1) In yarn-cluster mode, the Spark Driver runs inside the ApplicationMaster process managed by YARN, so the client can exit once the application has started. The ResourceManager's address is read from the Hadoop configuration, so in YARN mode the --master command-line argument can simply be set to yarn-client or yarn-cluster.
(2) In yarn-client mode, the Spark Driver runs in the client process, and the ApplicationMaster is only used to request resources from YARN.
The difference between yarn-cluster and yarn-client mode
yarn-cluster is the mode commonly used in production, while yarn-client is better suited to interactive use and debugging, i.e. when you want to see the application's output quickly.
First, a key concept: the ApplicationMaster. In YARN, every application instance has an ApplicationMaster process, which runs in the first container started for that application. It negotiates with the ResourceManager to request resources and, once they are granted, tells the NodeManagers to start containers on its behalf.
The difference between yarn-cluster and yarn-client mode really comes down to what the ApplicationMaster process does:
(1) In yarn-cluster mode, the Driver runs inside the AM (ApplicationMaster), which requests resources from YARN and monitors the job's progress. Once the job has been submitted, the client can be closed and the job keeps running on YARN. For the same reason, yarn-cluster mode is not suitable for interactive jobs.
(2) In yarn-client mode, the ApplicationMaster only requests executors from YARN; the client talks to the containers it has been granted to schedule their work, which means the client cannot go away. The Driver, which contains the DAGScheduler and the TaskScheduler, runs on the client, so the client must stay alive until the whole application has finished.
[Figure: yarn-cluster mode]
[Figure: yarn-client mode]
For example:
spark-submit --master yarn \
  --deploy-mode cluster \
  --num-executors 25 \
  --executor-cores 2 \
  --driver-memory 4g \
  --executor-memory 4g \
  --conf spark.broadcast.compress=true \
  spark_data_analysis_cluster.py > /app/log/.out 2>&1
This command starts a YARN client process, which launches the default ApplicationMaster; the submitted application then runs as a child thread of the ApplicationMaster. The client periodically polls the ApplicationMaster for status updates and displays them in the terminal, and it exits once the application has finished.
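Because the driver itself ran on the cluster in this mode, its stdout/stderr end up in the YARN container logs rather than in the client terminal. Once the application has finished (and provided log aggregation is enabled on the cluster), they can be pulled back with the standard YARN CLI:
yarn logs -applicationId <applicationId>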
In yarn-client mode, the ApplicationMaster is only responsible for requesting the resources the executors need from the ResourceManager. When Spark runs on YARN, spark-shell and pyspark must use yarn-client mode. To start a Spark application in yarn-client mode, just pass yarn-client to the --master command-line argument, for example:
pyspark --master yarn-client \
  --executor-memory 1g \
  --driver-memory 1g \
  --num-executors 4 \
  --executor-cores 2
--num-executors: total number of YARN containers allocated to the application;
--driver-memory: maximum heap size allocated to the Driver;
--executor-memory: maximum heap size allocated to each executor;
--executor-cores: maximum number of processor cores allocated to each executor.
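The same settings can also be fixed in conf/spark-defaults.conf instead of being repeated on every launch; a minimal sketch using the values above (these are standard Spark configuration keys):
spark.executor.instances   4
spark.executor.cores       2
spark.executor.memory      1g
spark.driver.memory        1g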
Part of the output of spark-submit --help is shown below:
[root@server106 ~]# spark-submit --help
Usage: spark-submit [options] <app jar | python file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.
  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.
  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.
  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
  --proxy-user NAME           User to impersonate when submitting the application.
  --help, -h                  Show this help message and exit
  --verbose, -v               Print additional debug output
  --version,                  Print the version of current Spark

 Spark on YARN:
  --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                              or all available cores on the worker in standalone mode)
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
  --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                              secure HDFS.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above. This keytab will be copied to
                              the node running the Application Master via the Secure
                              Distributed Cache, for renewing the login tickets and the
                              delegation tokens periodically.