Spark 2.1.0: standalone mode configuration, and packaging a jar for submission via spark-submit
Configure spark-env.sh:

```bash
export JAVA_HOME=/apps/jdk1.8.0_181
export SPARK_MASTER_HOST=bigdata00
export SPARK_MASTER_PORT=7077
```

List the worker nodes in the slaves file:

```
bigdata01
bigdata02
bigdata03
```

Start the Spark shell:

```bash
./spark-shell --master spark://bigdata00:7077 --executor-memory 512M
```

Run a word count in the Spark shell:

```scala
scala> sc.textFile("hdfs://bigdata00:9000/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
```

Result:

```
res3: Array[(String, Int)] = Array((this,1), (is,4), (girl,3), (love,1), (will,1), (day,1), (boreing,1), (my,1), (miss,2), (test,2), (forget,1), (spark,2), (soon,1), (most,1), (that,1), (a,2), (afternonn,1), (i,3), (might,1), (of,1), (today,2), (good,1), (for,1), (beautiful,1), (time,1), (and,1), (the,5))
```
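For readers new to the RDD API, the chained call above can be split into named stages; here is a sketch of the same pipeline, runnable in the Spark shell (where `sc` is the SparkContext the shell provides):

```scala
// Same word count as above, broken into named stages.
val lines  = sc.textFile("hdfs://bigdata00:9000/words") // one record per line
val words  = lines.flatMap(_.split(" "))                // split each line into words
val pairs  = words.map((_, 1))                          // pair each word with a count of 1
val counts = pairs.reduceByKey(_ + _)                   // sum the counts per word
counts.collect().foreach(println)                       // bring the results to the driver
```

`reduceByKey` shuffles the pairs so that all counts for a given word land in the same partition before being summed, which is what makes the final totals correct across the cluster.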
The main class:

```scala
package hgs.sparkwc

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val context = new SparkContext(conf)
    context.textFile(args(0), 1)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2)
      .saveAsTextFile(args(1))
    context.stop()
  }
}
```

The pom.xml:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>hgs</groupId>
  <artifactId>sparkwc</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>
  <name>sparkwc</name>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>2.11.8</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.6.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.6</version>
        <configuration>
          <archive>
            <manifest>
              <!-- the main class executed when this jar is run -->
              <mainClass>hgs.sparkwc.WordCount</mainClass>
            </manifest>
          </archive>
          <descriptorRefs>
            <!-- must be written exactly like this -->
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
            <configuration>
              <args>
                <!-- <arg>-make:transitive</arg> -->
                <arg>-dependencyfile</arg>
                <arg>${project.build.directory}/.scala_dependencies</arg>
              </args>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.18.1</version>
        <configuration>
          <useFile>false</useFile>
          <disableXmlReport>true</disableXmlReport>
          <!-- If you have classpath issues such as NoClassDefFoundError, ... -->
          <!-- <useManifestOnlyJar>false</useManifestOnlyJar> -->
          <includes>
            <include>**/*Test.*</include>
            <include>**/*Suite.*</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```
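Before building and shipping the jar, the same logic can be smoke-tested in local mode without a cluster. A minimal sketch follows; the `WordCountLocal` object name, the `local[*]` master, and the file paths are illustrative assumptions, not part of the original post:

```scala
package hgs.sparkwc

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical local-mode runner for quick testing; local[*] runs Spark
// in-process on all available cores, so no standalone cluster is needed.
object WordCountLocal {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCountLocal").setMaster("local[*]")
    val sc = new SparkContext(conf)
    sc.textFile("file:///tmp/words.txt")       // assumed local input file
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2)                            // ascending by count, as in WordCount above
      .saveAsTextFile("file:///tmp/wordsout")  // assumed local output directory
    sc.stop()
  }
}
```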
Running the build with `assembly:assembly` produced the following error:

```
scalac error: bad option: '-make:transitive'
```

The cause is the `<arg>-make:transitive</arg>` entry in the scala-maven-plugin configuration; commenting out that line fixes it. The answers found online: delete `<arg>-make:transitive</arg>`, or add this dependency:

```xml
<dependency>
  <groupId>org.specs2</groupId>
  <artifactId>specs2-junit_${scala.compat.version}</artifactId>
  <version>2.4.16</version>
  <scope>test</scope>
</dependency>
```

Finally, submit the job on the server:

```bash
./spark-submit --master spark://bigdata00:7077 --executor-memory 512M --total-executor-cores 3 /home/sparkwc.jar hdfs://bigdata00:9000/words hdfs://bigdata00:9000/wordsout2
```
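Once the job finishes, `saveAsTextFile` leaves the result as `part-*` files under the output directory, so it can be checked with `hdfs dfs -cat` or by reading the directory back in the Spark shell. A sketch, assuming the `wordsout2` path from the submit command above:

```scala
// Runnable in spark-shell, where `sc` is the provided SparkContext.
// textFile on a directory reads every part file written by saveAsTextFile.
sc.textFile("hdfs://bigdata00:9000/wordsout2").collect().foreach(println)
```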
From the ITPUB blog, link: http://blog.itpub.net/31506529/viewspace-2215620/. Please credit the source when reposting; otherwise legal action may be pursued.