spark2-submit --class bi.tag.TSimilarTagsTable --master yarn-client --executor-memory 6G --num-executors 5 --executor-cores 2 /var/lib/hadoop-hdfs/seijing/ble/tag/spark-sql/pf-spark-master/pi/target/pi-1.0.1-SNAPSHOT.jar
spark2-submit --class resume.mlib.RcoAID \
--master yarn \
--deploy-mode client \
--num-executors 4 \
--executor-memory 10G \
--executor-cores 3 \
--driver-memory 10g \
--conf "spark.executor.extraJavaOptions='-Xss512m'" \
--driver-java-options "-Xss512m" \
/var/lib/hadoop-hdfs/als_ecommend/reserver-1.0-SNAPSHOT.jar $1 $2 >> /var/lib/hadoop-hdfs/als_ecommend/logs/log_spark_out_`date +\%Y\%m\%d`.log
$1 and $2 are arguments passed in from the layer above, i.e. by whatever invokes this script, for example:
/bin/bash /root/combine.sh aa bb
Here aa and bb are the arguments being passed in.
The resulting log files look like this:
-rw-r--r-- 1 root root 2375 Feb 27 15:25 log_spark_out_20200227.log
-rw-r--r-- 1 root root 712272 Feb 28 17:03 log_spark_out_20200228.log
-rw-r--r-- 1 root root 2375 Mar 9 15:36 log_spark_out_20200309.log
-rw-r--r-- 1 root root 712463 Mar 10 20:24 log_spark_out_20200310.log
-rw-r--r-- 1 root root 10578 Mar 12 18:51 log_spark_out_20200312.log
-rw-r--r-- 1 root root 468018 Mar 13 10:06 log_spark_out_20200313.log
-rw-r--r-- 1 root root 712602 Mar 19 18:26 log_spark_out_20200319.log
Only output from print statements and from calls such as DataFrame show() is stored in the log file.
Log lines written through a logger are visible on the console while the task runs, but they are not stored in the log file: the >> redirection above only captures stdout, while Spark's default log4j console appender typically writes to stderr.
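If logger output should also land in the file, one option (a sketch, not part of the original script) is to redirect stderr as well by appending 2>&1 to the last line of the script:

/var/lib/hadoop-hdfs/als_ecommend/reserver-1.0-SNAPSHOT.jar $1 $2 >> /var/lib/hadoop-hdfs/als_ecommend/logs/log_spark_out_`date +\%Y\%m\%d`.log 2>&1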
2. Detailed description of spark-submit parameters
| Parameter | Description |
| --- | --- |
| --master | Where to submit the job, i.e. the master address, e.g. spark://host:port, yarn, local |
| --deploy-mode | Launch the driver locally (client) or on the cluster (cluster); default is client |
| --class | The application's main class (Java/Scala applications only) |
| --name | The application name |
| --jars | Comma-separated local jars to include on the driver and executor classpaths |
| --packages | Maven coordinates of jars to include on the driver and executor classpaths |
| --exclude-packages | Packages to exclude in order to avoid dependency conflicts |
| --repositories | Additional remote repositories |
| --conf PROP=VALUE | Set a Spark configuration property, e.g. --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=256m" |
| --properties-file | Properties file to load; default is conf/spark-defaults.conf |
| --driver-memory | Driver memory; default 1G |
| --driver-java-options | Extra Java options passed to the driver |
| --driver-library-path | Extra library path entries passed to the driver |
| --driver-class-path | Extra classpath entries passed to the driver |
| --driver-cores | Number of driver cores; default 1. YARN and standalone only |
| --executor-memory | Memory per executor; default 1G |
| --total-executor-cores | Total number of cores across all executors. Mesos and standalone only |
| --num-executors | Number of executors to launch; default 2. YARN only |
| --executor-cores | Number of cores per executor. YARN and standalone only |
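For reference, here is a submission template combining several of the options above; the class name, app name, paths, and resource sizes are placeholders, not values from any job in this post:

spark2-submit \
--class com.example.MainClass \
--name example-app \
--master yarn \
--deploy-mode cluster \
--driver-memory 2g \
--num-executors 4 \
--executor-memory 4g \
--executor-cores 2 \
--conf spark.executor.extraJavaOptions="-XX:MaxPermSize=256m" \
/path/to/app.jar arg1 arg2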
--num-executors 5 \
--executor-cores 2 \
/var/business_data/p-1.0.1-SNAPSHOT.jar > /var/business_data/business_data.log
With .master("local[*]") removed from the code, the job submitted by the script above still ran successfully.
But while .master("local[*]") was still present in the code, I changed the script to use
--master yarn \
--deploy-mode cluster \
so the full script became:
spark2-submit \
--class bi.tag.BusinessDataCombineErpJobs \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 3g \
--executor-cores 2 \
/var/business_data/p-1.0.1-SNAPSHOT.jar 10
Note: the number 10 is a custom argument consumed inside BusinessDataCombineErpJobs.
The resulting error log:
azkaban:
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO yarn.Client: Application report for application_1583730534669_117324 (state: FAILED)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO yarn.Client:
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - client token: N/A
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - diagnostics: Application application_1583730534669_117324 failed 2 times due to AM Container for appattempt_1583730534669_117324_000002 exited with exitCode: 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - For more detailed output, check application tracking page:http://pf-bigdata4:8088/proxy/application_1583730534669_117324/Then, click on links to logs of each attempt.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Diagnostics: Exception from container-launch.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Container id: container_e87_1583730534669_117324_02_000001
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Exit code: 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Stack trace: ExitCodeException exitCode=13:
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell.runCommand(Shell.java:604)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell.run(Shell.java:507)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:789)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.FutureTask.run(FutureTask.java:266)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.lang.Thread.run(Thread.java:748)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO -
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO -
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Container exited with a non-zero exit code 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Failing this attempt. Failing the application.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - ApplicationMaster host: N/A
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - ApplicationMaster RPC port: -1
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - queue: default
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - start time: 1590649410241
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - final status: FAILED
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - tracking URL: http://pf-bigdata4:8088/cluster/app/application_1583730534669_117324
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - user: root
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Exception in thread "main" org.apache.spark.SparkException: Application application_1583730534669_117324 finished with failed status
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.yarn.Client.run(Client.scala:1153)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1568)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:892)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Shutdown hook called
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-eb1e1b60-ef09-4a58-8e5f-dc988411999e
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Deleting directory /huayong/data/tmp/spark-dba79ec3-1f27-4da0-8e8e-5a98c31c156f
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Process completed unsuccessfully in 55 seconds.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine ERROR - Job run failed!
java.lang.RuntimeException: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
at azkaban.jobExecutor.ProcessJob.run(ProcessJob.java:305)
at azkaban.execapp.JobRunner.runJob(JobRunner.java:787)
at azkaban.execapp.JobRunner.doRun(JobRunner.java:602)
at azkaban.execapp.JobRunner.run(JobRunner.java:563)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
at azkaban.jobExecutor.utils.process.AzkabanProcess.run(AzkabanProcess.java:125)
at azkaban.jobExecutor.ProcessJob.run(ProcessJob.java:297)
... 8 more
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine ERROR - azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1 cause: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Finishing job bi_cal_business_data_table_combine at 1590649460777 with status FAILED
Viewing the logs with yarn logs -applicationId application_1583730534669_117324 shows:
20/05/28 15:04:17 WARN lazy.LazyStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
20/05/28 15:04:17 WARN lazy.LazyStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
20/05/28 15:04:19 ERROR yarn.ApplicationMaster: Uncaught exception:
java.lang.IllegalStateException: User did not initialize spark context!
at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:467)
at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:301)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:241)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:241)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:241)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:782)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:781)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:240)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:806)
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Removing the custom trailing argument 10 from the last line of the script still produced the same error.
But once .master("local[*]") was removed from the code, the job ran successfully in both client and cluster mode; a sketch of the corrected driver code follows below.
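A minimal sketch of such a driver, assuming a SparkSession-based Scala job; the object name and job body are illustrative, not the original BusinessDataCombineErpJobs source:

import org.apache.spark.sql.SparkSession

object BusinessDataJobSketch {
  def main(args: Array[String]): Unit = {
    // No .master("local[*]") here: the master and deploy mode come from spark2-submit,
    // so the same jar runs unchanged in both yarn-client and yarn-cluster mode.
    val spark = SparkSession.builder()
      .appName("BusinessDataJobSketch")
      .getOrCreate()

    // Optional custom argument, like the trailing "10" in the script above.
    val limit = if (args.nonEmpty) args(0).toInt else 10

    spark.range(limit).show() // show() output ends up in the redirected log file

    spark.stop()
  }
}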
Conclusions:
1. With the local[*] setting removed from the code, both modes succeed; with it left in, only client mode works. The cause: a hard-coded .master("local[*]") overrides the --master and --deploy-mode given to spark2-submit. In yarn-cluster mode the ApplicationMaster launches the user class and waits for it to register a SparkContext running on YARN; a local context never registers, so the AM fails with "User did not initialize spark context!" and exit code 13. In client mode the driver runs on the submitting machine anyway, which is what local[*] asks for, so the job simply runs locally and succeeds.
2. In cluster mode the driver runs inside the ApplicationMaster on an arbitrary cluster node and uses that node's resources, while in client mode the driver runs on the machine that submitted the job and uses that machine's resources.
3. Working scripts:
client:
spark2-submit \
--class bi.tag.BusinessDataCombineErpJobs \
--master yarn-client \
--driver-memory 1g \
--executor-memory 3g \
--executor-cores 2 \
/var/business_data/pi-1.0.1-SNAPSHOT-yarn-cluster.jar
cluster:
spark2-submit \
--class bi.tag.BusinessDataCombineErpJobs \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 3g \
--executor-cores 2 \
/var/business_data/pi-1.0.1-SNAPSHOT-yarn-cluster.jar
A Spark Streaming submission example:
spark2-submit --master yarn-client --conf spark.driver.memory=2g --class com.tzb.sparkstreaming.prod.DataChangeStreaming --executor-memory 8G --num-executors 5 --executor-cores 2 /test/spark-test-jar-with-dependencies.jar >> /test/sparkstreaming_datachange.log
Reference: https://www.cnblogs.com/weiweifeng/p/8073553.html