
python - Scrapy stops scraping but keeps crawling


I am trying to scrape different pieces of information from multiple pages of a website.
Everything works up to the sixteenth page: the pages are crawled, scraped, and the information is stored in my database. After page 16, however, it stops scraping but keeps crawling.
I checked the website, and there is more information across 470 pages. The HTML tags are the same, so I don't understand why it stops scraping.

My code:

def url_lister():
    url_list = []
    page_count = 1
    while page_count < 480:
        url = 'https://www.active.com/running?page=%s' %page_count 
        url_list.append(url)
        page_count += 1 
    return url_list
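For reference, the same 479 URLs can be built with a list comprehension instead of the explicit counter loop (a minimal equivalent sketch, not from the original post):

```python
# Builds the same list as url_lister(): pages 1 through 479.
url_list = ['https://www.active.com/running?page=%s' % page
            for page in range(1, 480)]
```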

class ListeCourse_level1(scrapy.Spider):
    name = 'ListeCAP_ACTIVE'
    allowed_domains = ['www.active.com']
    start_urls = url_lister()

    def parse(self, response):
        selector = Selector(response)
        for uneCourse in response.xpath('//*[@id="lpf-tabs2-a"]/article/div/div/div/a[@itemprop="url"]'):
            loader = ItemLoader(ActiveItem(), selector=uneCourse)
            loader.add_xpath('nom_evenement', './/div[2]/div/h5[@itemprop="name"]/text()')
            loader.default_input_processor = MapCompose(string)
            loader.default_output_processor = Join()
            yield loader.load_item()

Shell output:

>     2018-01-23 17:22:29 [scrapy.core.scraper] DEBUG: Scraped from <200     
>     https://www.active.com/running?page=15>
>     {
>      'nom_evenement': 'Enniscrone 10k run & 5k run/walk',
>      }
>     2018-01-23 17:22:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.active.com/running?page=16> (referer: None)
>     --------------------------------------------------
>                     SCRAPING DES ELEMENTS EVENTS
>     --------------------------------------------------
>     2018-01-23 17:22:34 [scrapy.extensions.logstats] INFO: Crawled 17 pages (at 17 pages/min), scraped 155 items (at 155 items/min)
>     2018-01-23 17:22:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.active.com/running?page=17> (referer: None)
>     --------------------------------------------------
>                     SCRAPING DES ELEMENTS EVENTS
>     --------------------------------------------------
>     2018-01-23 17:22:40 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.active.com/running?page=18> (referer: None)
>     --------------------------------------------------
>                     SCRAPING DES ELEMENTS EVENTS
>     --------------------------------------------------
>     2018-01-23 17:22:43 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.active.com/running?page=19> (referer: None)

Solution:

This is probably because only 17 pages contain the content you are looking for, while you instruct Scrapy to visit all 480 pages of the form https://www.active.com/running?page=NNN. A better approach is to check, on each page you visit, whether a next page exists, and only in that case yield a request for it.

So I would refactor your code to something like this (not tested):

class ListeCourse_level1(scrapy.Spider):
    name = 'ListeCAP_ACTIVE'
    allowed_domains = ['www.active.com']
    base_url = 'https://www.active.com/running'
    start_urls = [base_url]

    def parse(self, response):
        selector = Selector(response)
        for uneCourse in response.xpath('//*[@id="lpf-tabs2-a"]/article/div/div/div/a[@itemprop="url"]'):
            loader = ItemLoader(ActiveItem(), selector=uneCourse)
            loader.add_xpath('nom_evenement', './/div[2]/div/h5[@itemprop="name"]/text()')
            loader.default_input_processor = MapCompose(string)
            loader.default_output_processor = Join()
            yield loader.load_item()
        # check for a "next page" link; only follow it if one exists
        if response.xpath('//a[contains(@class, "next-page")]'):
            next_page = response.meta.get('page_number', 1) + 1
            next_page_url = '{}?page={}'.format(self.base_url, next_page)
            yield scrapy.Request(next_page_url, callback=self.parse,
                                 meta={'page_number': next_page})
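The page-number bookkeeping in the snippet above can be pulled out into a plain helper function, which makes the stop condition easy to test in isolation (the function name and the `has_next` flag are illustrative, not part of the original answer):

```python
def next_page_url(base_url, current_page, has_next):
    """Build the URL of the next results page, or return None when
    no "next page" link is present, i.e. pagination has ended."""
    if not has_next:
        return None
    return '{}?page={}'.format(base_url, current_page + 1)
```

In the spider, `has_next` corresponds to the truthiness of `response.xpath('//a[contains(@class, "next-page")]')`, and `current_page` to `response.meta.get('page_number', 1)`.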

Tags: scrapy-spider, scrapy, python, web-crawler
Source: https://codeday.me/bug/20191025/1928446.html