Systemd HDFS service [hadoop] - startup
Problem description:
I have created a service that starts and stops the HDFS instance associated with my Spark cluster.
The service:
[Unit]
Description=Hdfs service
[Service]
Type=simple
WorkingDirectory=/home/hduser
ExecStart=/opt/hadoop-2.6.4/sbin/start-service-hdfs.sh
ExecStop=/opt/hadoop-2.6.4/sbin/stop-service-hdfs.sh
[Install]
WantedBy=multi-user.target
The problem is that when I start the service, it starts and then stops immediately afterwards! :) I think the problem is the service's Type, but I don't really know which type to choose...
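A quick way to see why systemd tore the unit down is its status output and journal (assuming the unit is named hdfs.service; substitute whatever name you saved it under):
systemctl status hdfs.service
journalctl -u hdfs.service -e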
Thank you.
Best regards
Answer:
There are some issues in your config; that is why it is not working. In particular, start-dfs.sh forks the actual daemons and then exits, so with Type=simple systemd treats the script's exit as the service dying and immediately runs ExecStop.
I am running Hadoop as the hadoop user, and HADOOP_HOME is /home/hadoop/envs/dwh/hadoop/. The unit file that works for me:
[Unit]
Description=Hadoop DFS namenode and datanode
After=syslog.target network.target remote-fs.target nss-lookup.target network-online.target
Requires=network-online.target
[Service]
User=hadoop
Group=hadoop
Type=forking
ExecStart=/home/hadoop/envs/dwh/hadoop/sbin/start-dfs.sh
ExecStop=/home/hadoop/envs/dwh/hadoop/sbin/stop-dfs.sh
WorkingDirectory=/home/hadoop/envs/dwh
Environment=JAVA_HOME=/usr/lib/jvm/java-8-oracle
Environment=HADOOP_HOME=/home/hadoop/envs/dwh/hadoop
TimeoutStartSec=2min
Restart=on-failure
PIDFile=/tmp/hadoop-hadoop-namenode.pid
[Install]
WantedBy=multi-user.target
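To install and run the unit (a minimal sketch; the file name hadoop-dfs.service is an assumption, use whatever you save it as under /etc/systemd/system/):
sudo cp hadoop-dfs.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now hadoop-dfs.service
systemctl status hadoop-dfs.service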
Checklist:
- user and group are set
- service type is forking
- the PID file is set, and it is the actual pid file that start-dfs.sh creates (a quick check is sketched below)
- environment variables are correct
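As a quick sanity check (assuming Hadoop 2.x defaults, where hadoop-daemon.sh writes pid files to /tmp as hadoop-<user>-<daemon>.pid unless HADOOP_PID_DIR overrides it), confirm that the pid systemd will track belongs to a live namenode:
cat /tmp/hadoop-hadoop-namenode.pid
ps -p "$(cat /tmp/hadoop-hadoop-namenode.pid)"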