How should I handle the following error when running WordCount on Spark? Any guidance from the experts would be appreciated...

Problem description:

There are also a few other WARN messages:
15/05/19 11:19:19 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
15/05/19 11:19:33 INFO AppClient$ClientActor: Connecting to master spark://172.18.219.136:7077...
15/05/19 11:19:34 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:19:49 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:19:53 INFO AppClient$ClientActor: Connecting to master spark://172.18.219.136:7077...
15/05/19 11:20:04 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:20:13 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/05/19 11:20:13 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
15/05/19 11:20:13 INFO TaskSchedulerImpl: Cancelling stage 1
15/05/19 11:20:13 INFO DAGScheduler: Failed to run collect at WordCount.scala:31
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.

http://taoistwar.gitbooks.io/spark-operationand-maintenance-management/content/spark_relate_software/hadoop_2x_install.html
In spark-env.sh, set export SPARK_MASTER_IP= to the hostname or IP address of the master node.
If you use a hostname, check /etc/hosts to make sure that hostname actually resolves.
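For reference, a minimal sketch of what the driver side of a standalone-mode WordCount typically looks like (the class name, app name, input handling, and master URL below are assumptions, not taken from the original post). The address passed to setMaster (or to spark-submit --master) must match what SPARK_MASTER_IP in spark-env.sh resolves to; if they point at different hosts, the driver keeps retrying and eventually fails with "All masters are unresponsive".

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // The master URL here must match the address the standalone master actually
    // binds to (SPARK_MASTER_IP in spark-env.sh). A hostname-vs-IP mismatch or an
    // unresolvable hostname is a common cause of this error.
    val conf = new SparkConf()
      .setAppName("WordCount")
      .setMaster("spark://172.18.219.136:7077") // or pass it via spark-submit --master

    val sc = new SparkContext(conf)

    val counts = sc.textFile(args(0))      // input path from the command line (assumed)
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.collect().foreach(println)      // roughly the collect reported at WordCount.scala:31
    sc.stop()
  }
}

Also make sure the workers show up as registered with free memory in the master's web UI (port 8080 by default); the repeated "Initial job has not accepted any resources" warnings mean no worker ever offered resources to the application.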

Hi, did you ever solve this? I'm running into the same problem and it's giving me a headache. The same configuration throws this error on a different machine!!!
