Saving a CSV file to an HBase table using Spark and Phoenix
Problem description:
Can someone point me to a working example of saving a CSV file to an HBase table using Spark 2.2? Options that I tried and failed with (note: all of them work with Spark 1.6 for me):
- phoenix-spark
- hbase-spark
- it.nerdammer.bigdata:spark-hbase-connector_2.10
After fixing everything else, all of them ultimately fail with an error similar to the one in this Spark HBase question.
Thanks
Answer
Add the following parameters to your Spark job:
spark-submit \
--conf "spark.yarn.stagingDir=/somelocation" \
--conf "spark.hadoop.mapreduce.output.fileoutputformat.outputdir=/somelocation" \
--conf "spark.hadoop.mapred.output.dir=/somelocation"
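For context, a complete invocation might look like the sketch below. The jar name, main class, master setting, and staging paths are placeholders I have added for illustration; only the three `--conf` entries come from the answer above. They point Spark's YARN staging directory and the Hadoop MapReduce output directory (both the new and the legacy property) at a writable HDFS location, which the HBase output committer path expects to exist under Spark 2.x:

```shell
# Hypothetical full spark-submit command (jar, class, and paths are placeholders).
# The three --conf entries are the fix from the answer: they give Spark's YARN
# staging area and the MapReduce output committer a concrete HDFS directory.
spark-submit \
  --master yarn \
  --conf "spark.yarn.stagingDir=/tmp/spark-staging" \
  --conf "spark.hadoop.mapreduce.output.fileoutputformat.outputdir=/tmp/spark-staging" \
  --conf "spark.hadoop.mapred.output.dir=/tmp/spark-staging" \
  --class com.example.CsvToHbaseJob \
  csv-to-hbase-job.jar
```

Note that `mapreduce.output.fileoutputformat.outputdir` and `mapred.output.dir` are the new and deprecated names of the same Hadoop property; setting both covers connectors that still read the legacy key.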