Getting Scrapy output in JSON when running from a script
Question:
I am running Scrapy in a Python script:
def stop_reactor():
    reactor.stop()

def setup_crawler(domain):
    dispatcher.connect(stop_reactor, signal=signals.spider_closed)
    spider = ArgosSpider(domain=domain)
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.configure()
    crawler.crawl(spider)
    crawler.start()
    reactor.run()
It runs successfully and stops, but where is the result? I want the result in JSON format; how can I do that?

result = responseInJSON
just like when we use the command:

scrapy crawl argos -o result.json -t json
Answer:
You need to set the FEED_FORMAT and FEED_URI settings manually:
settings.overrides['FEED_FORMAT'] = 'json'
settings.overrides['FEED_URI'] = 'result.json'
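Note that FEED_FORMAT and FEED_URI were deprecated in later Scrapy releases; on Scrapy 2.1 or newer the equivalent is the single FEEDS setting. A minimal sketch of the corresponding settings dictionary, assuming a recent Scrapy version:

```python
# Sketch assuming Scrapy >= 2.1, where the FEEDS setting replaces
# FEED_FORMAT/FEED_URI: each key is an output URI, each value its options.
feeds = {
    "FEEDS": {
        "result.json": {"format": "json"},
    },
}
```

You would pass this dictionary into your crawler's settings (for example via `settings.set("FEEDS", ...)`) instead of the two deprecated overrides.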
If you want to get the results into a variable, you can define a Pipeline class that collects items into a list, and use the spider_closed signal handler to see the results:
import json

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings


class MyPipeline(object):
    def process_item(self, item, spider):
        results.append(dict(item))
        return item  # pipelines should pass the item on


results = []

def spider_closed(spider):
    print results

# set up spider
spider = TestSpider(domain='mydomain.org')

# set up settings
settings = get_project_settings()
settings.overrides['ITEM_PIPELINES'] = {'__main__.MyPipeline': 1}

# set up crawler
crawler = Crawler(settings)
crawler.signals.connect(spider_closed, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)

# start crawling
crawler.start()
log.start()
reactor.run()
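The pipeline-plus-list pattern above can be exercised without running a crawl at all. This hypothetical stand-in (the spider and crawl are mocked out) shows how items accumulate in the shared list and how the collected list serializes to JSON:

```python
import json

results = []

class CollectorPipeline(object):
    """Stand-in for MyPipeline above: copies each item into a shared list."""
    def process_item(self, item, spider):
        results.append(dict(item))
        return item  # pipelines should pass the item on

# Simulate Scrapy feeding two scraped items through the pipeline.
pipeline = CollectorPipeline()
for scraped in ({"title": "first"}, {"title": "second"}):
    pipeline.process_item(scraped, spider=None)

# After the "crawl", the list serializes the same way a JSON feed would.
as_json = json.dumps(results)
print(as_json)
```

In a real run, Scrapy instantiates the pipeline itself and calls process_item once per scraped item; the module-level list is just the simplest way to smuggle the items back out to your script.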
FYI, look at how Scrapy parses command-line arguments.