
python – CrawlSpider with Splash


I have some problems with my spider. I use Splash with Scrapy to get the link to the "next page", which is generated by JavaScript. After downloading the information from the first page I want to download information from the following pages, but the LinkExtractor does not work correctly; it looks like the start_requests function is not being used. Here is the code:

import re
import urlparse

import scrapy
from dateutil.parser import parse
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, Join

from myproject.items import PajaczekItem  # adjust to your project's items module


class ReutersBusinessSpider(CrawlSpider):
    name = 'reuters_business'
    allowed_domains = ["reuters.com"]
    start_urls = (
        'http://reuters.com/news/archive/businessNews?view=page&page=1',
    )

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 0.5}
                }
            })
    def use_splash(self, request):
        request.meta['splash'] = {
            'endpoint': 'render.html',
            'args': {
                'wait': 0.5,
            },
        }
        return request

    def process_value(value):
        m = re.search(r'(\?view=page&page=[0-9]&pageSize=10)', value)
        if m:
            return urlparse.urljoin('http://reuters.com/news/archive/businessNews',m.group(1))


    rules = (
        Rule(LinkExtractor(restrict_xpaths='//*[@class="pageNext"]',process_value='process_value'),process_request='use_splash', follow=False),
        Rule(LinkExtractor(restrict_xpaths='//h2/*[contains(@href,"article")]',process_value='process_value'),callback='parse_item'),
    )



    def parse_item(self, response):
        l = ItemLoader(item=PajaczekItem(), response=response)

        l.add_xpath('articlesection','//span[@class="article-section"]/text()', MapCompose(unicode.strip), Join())
        l.add_xpath('date','//span[@class="timestamp"]/text()', MapCompose(parse))
        l.add_value('url',response.url)
        l.add_xpath('articleheadline','//h1[@class="article-headline"]/text()', MapCompose(unicode.title))
        l.add_xpath('articlelocation','//span[@class="location"]/text()')
        l.add_xpath('articletext','//span[@id="articleText"]//p//text()', MapCompose(unicode.strip), Join())

        return l.load_item()

Log:

2016-02-12 08:20:29 [scrapy] INFO: Spider opened
2016-02-12 08:20:29 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-02-12 08:20:29 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-02-12 08:20:38 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
2016-02-12 08:20:38 [scrapy] DEBUG: Filtered offsite request to 'localhost': <GET http://localhost:8050/render.html?page=2&pageSize=10&view=page>
2016-02-12 08:20:38 [scrapy] INFO: Closing spider (finished)

Where is the mistake? Thanks for any help.

Solution:

At a quick glance, you are not issuing your start_requests through Splash... you should be using SplashRequest, for example:

# SplashRequest lives in the scrapy-splash package (pip install scrapy-splash)
from scrapy_splash import SplashRequest

def start_requests(self):
    for url in self.start_urls:
        yield SplashRequest(url, self.parse,
                            endpoint='render.html',
                            args={'wait': 0.5})
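Note that SplashRequest assembles the splash meta for you, so endpoint and args are passed as keyword arguments rather than through a hand-rolled meta dict.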

Given that your Splash setup is appropriate, that is, in your settings you have enabled the necessary middlewares, pointed them at the correct Splash URL, and enabled the Splash-aware dupe filter and HTTP cache... no, I have not run your code, but it should be good to go now.
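For reference, the Splash wiring the answer alludes to is the standard scrapy-splash configuration; a minimal settings.py sketch in the spirit of the scrapy-splash README, assuming a Splash instance listening on localhost:8050 (as in the log above):

# settings.py: standard scrapy-splash wiring
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# Splash-aware request fingerprinting, dupe filtering and HTTP cache
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'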

EDIT: By the way... the next-page link is not generated by JS:

(screenshot from the original answer: the "Next Page" anchor is present in the raw HTML, not JS-generated)

So... unless you have some other reason to use Splash, I see no reason to use it at all; a simple for loop in the initial parse can follow the pagination, like...

for next_page in response.css("a.control-nav-next::attr(href)").extract():
    yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
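Since the pagination link is an ordinary href in the page source, response.urljoin resolves the relative ?page=N query string against the archive URL, and a plain scrapy.Request stays on reuters.com, which also sidesteps the "Filtered offsite request to 'localhost'" problem visible in the log above.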

Tags: python, scrapy, splash
Source: https://codeday.me/bug/20190706/1396677.html