wget or curl from stdin

Problem description:

I'd like to download web pages while supplying the URLs from stdin. Essentially, one process continuously produces URLs to stdout/a file, and I want to pipe them to wget or curl. (Think of it as a simple web crawler if you like.)

This seems to work fine:

tail 1.log | wget -i - -O - -q 

But when I use 'tail -f', it doesn't work anymore (buffering, or wget waiting for EOF?):

tail -f 1.log | wget -i - -O - -q

Could anybody provide a solution using wget, curl, or any other standard Unix tool? Ideally I wouldn't want to restart wget in a loop, just keep it running and downloading URLs as they arrive.

What you need to use is xargs. E.g.

tail -f 1.log | xargs -n1 wget -O - -q
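
Note that xargs -n1 starts a new wget process for each URL rather than keeping one wget running. As a rough alternative sketch (assuming a POSIX shell and that 1.log contains one complete URL per line), a while-read loop does the same thing without xargs and makes it easy to swap in curl:

tail -f 1.log | while read -r url; do
    curl -s "$url"
done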