Spark - Error "A master URL must be set in your configuration" when submitting an app
I have a Spark app which runs with no problem in local mode, but has some problems when submitted to the Spark cluster.
The error message is as follows:
16/06/24 15:42:06 WARN scheduler.TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2, cluster-node-02): java.lang.ExceptionInInitializerError
at GroupEvolutionES$$anonfun$6.apply(GroupEvolutionES.scala:579)
at GroupEvolutionES$$anonfun$6.apply(GroupEvolutionES.scala:579)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1595)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:401)
at GroupEvolutionES$.<init>(GroupEvolutionES.scala:37)
at GroupEvolutionES$.<clinit>(GroupEvolutionES.scala)
... 14 more
16/06/24 15:42:06 WARN scheduler.TaskSetManager: Lost task 5.0 in stage 0.0 (TID 5, cluster-node-02): java.lang.NoClassDefFoundError: Could not initialize class GroupEvolutionES$
at GroupEvolutionES$$anonfun$6.apply(GroupEvolutionES.scala:579)
at GroupEvolutionES$$anonfun$6.apply(GroupEvolutionES.scala:579)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1595)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
In the above code, GroupEvolutionES is the main class. The error message says "A master URL must be set in your configuration", but I have provided the "--master" parameter to spark-submit.
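For reference, this is the shape of the submit command being used; the cluster URL and jar name here are placeholders, not the actual values from my setup:

```
spark-submit \
  --class GroupEvolutionES \
  --master spark://cluster-node-01:7077 \
  group-evolution-es.jar
```

The --master flag only sets the master URL in the driver's configuration; it does not reach code that constructs a SparkContext during class initialization on the executors.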
Does anyone know how to fix this problem?
Spark version: 1.6.1
Where is the sparkContext object defined? Is it inside the main function?
I too faced the same problem; the mistake I made was that I initialized the sparkContext outside the main function, inside the class. When I initialized it inside the main function, it worked fine.
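This matches the stack trace: fields of a Scala object are initialized in its static initializer (`<clinit>`), and the trace shows `SparkContext.<init>` being called from `GroupEvolutionES$.<clinit>`. When a task closure referencing the object runs on an executor, the executor JVM tries to run that same initializer, and there no master URL is configured, so the first task fails with SparkException and subsequent tasks fail with NoClassDefFoundError. A minimal sketch of the fix, assuming a class layout like the one described (names taken from the question):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object GroupEvolutionES {
  // Do NOT declare the SparkContext as a field here:
  // val sc = new SparkContext(conf)
  // would run inside <clinit> on every JVM that touches this object,
  // including executors, where no master URL is set.

  def main(args: Array[String]): Unit = {
    // Creating the context inside main means it only ever exists on the
    // driver, where spark-submit's --master has populated the config.
    val conf = new SparkConf().setAppName("GroupEvolutionES")
    val sc = new SparkContext(conf)

    // ... job logic ...

    sc.stop()
  }
}
```

Alternatively, hard-coding `conf.setMaster(...)` would also silence the error, but it is better left to spark-submit so the same jar runs locally and on the cluster.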