Web crawling with Scrapy: CrawlSpider
CrawlSpider
(1) It inherits from scrapy.Spider.
(2) Its special trick:
CrawlSpider lets you define link rules. While parsing the HTML content, it extracts the links that match those rules and then sends requests to them.
So whenever you need to follow links, meaning that after crawling a page you also need to extract its links and crawl them in turn, CrawlSpider is a very good fit. A minimal sketch of this rule-based extraction follows.
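To make the rule mechanism concrete, here is a small sketch of rule-based link extraction, assuming a scrapy shell session on the listing page used later in this article (the shell predefines response):

# run inside: scrapy shell https://www.dushu.com/book/1188_1.html
from scrapy.linkextractors import LinkExtractor

# allow is a regular expression; only hrefs that match it are extracted
extractor = LinkExtractor(allow=r'/book/1188_\d+\.html')
for link in extractor.extract_links(response):
    print(link.url)  # absolute URLs of the matching pagination links

This is exactly what a Rule does internally: the LinkExtractor finds the matching links and the CrawlSpider turns each one into a request.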
1. Creating the project
This example crawls the book data from every page of dushu.com that matches the link rule.
(1) Create the project: scrapy startproject <project name>
(2) Change into the spiders directory
(3) Create the spider file (the concrete commands for this project are shown below):
scrapy genspider -t crawl <spider file name> <domain to crawl>
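For this article's project, with the names used in the code below (project scrapy_readbook, spider read), the three steps are:

scrapy startproject scrapy_readbook
cd scrapy_readbook/scrapy_readbook/spiders
scrapy genspider -t crawl read www.dushu.com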
2. Core code
Core code of read.py:
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from scrapy_readbook.items import ScrapyReadbookItem


class ReadSpider(CrawlSpider):
    name = 'read'
    allowed_domains = ['www.dushu.com']
    # With https://www.dushu.com/book/1188.html as the start URL, one page of data is missed.
    # That is because the start page's address does not match the link rule; you will run into
    # this with CrawlSpider whenever the start URL does not fit the rule. Based on the site's
    # URL pattern, it has to be changed to the form below.
    start_urls = ['https://www.dushu.com/book/1188_1.html']

    rules = (
        # allow is a regular expression: \d matches a digit, + means one or more
        Rule(LinkExtractor(allow=r'/book/1188_\d+\.html'),
             callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        img_list = response.xpath('//div[@class="bookslist"]//img')
        for img in img_list:
            name = img.xpath('./@alt').extract_first()
            src = img.xpath('./@data-original').extract_first()
            book = ScrapyReadbookItem(name=name, src=src)
            yield book
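Note that follow=False means the rule is only applied to the start page's response: the pages fetched through the rule go to parse_item, but their own links are not extracted again. With follow=True the rule would be re-applied on every crawled page, which matters when later page numbers are not all linked from page 1. To run the spider, the standard command (from anywhere inside the project) is:

scrapy crawl read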
Item definition in items.py:
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


# Define the data structure
class ScrapyReadbookItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # book name
    name = scrapy.Field()
    # image path
    src = scrapy.Field()
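A scrapy.Item behaves like a dict restricted to its declared fields. A quick sketch of how such an item is used (the field values here are made-up placeholders):

from scrapy_readbook.items import ScrapyReadbookItem

book = ScrapyReadbookItem(name='示例书名', src='https://example.com/cover.jpg')
print(book['name'])   # dict-style access to a declared field
print(dict(book))     # convert to a plain dict, e.g. for JSON serialization
# book['price'] = 1   # would raise KeyError: only declared Fields can be set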
The pipeline in pipelines.py:
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

import json

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


# Pipeline
class ScrapyReadbookPipeline:
    # called once when the spider is opened, before crawling starts
    def open_spider(self, spider):
        self.fp = open('book.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # serialize each item as one JSON object per line, so book.json holds valid JSON
        self.fp.write(json.dumps(ItemAdapter(item).asdict(), ensure_ascii=False) + '\n')
        return item

    # called once when the spider is closed, after crawling finishes
    def close_spider(self, spider):
        self.fp.close()
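If dumping the items to a file is all you need, Scrapy's built-in feed exports can do the same job without a custom pipeline; instead of registering ScrapyReadbookPipeline you could run:

scrapy crawl read -o book.json

Here -o appends the items to book.json as a JSON array (Scrapy 2.1+ also supports -O to overwrite the file). The custom pipeline above is still useful when you want full control over the output format.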
Configuration in settings.py:
# Scrapy settings for scrapy_readbook project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'scrapy_readbook'

SPIDER_MODULES = ['scrapy_readbook.spiders']
NEWSPIDER_MODULE = 'scrapy_readbook.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapy_readbook (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'scrapy_readbook.middlewares.ScrapyReadbookSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'scrapy_readbook.middlewares.ScrapyReadbookDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'scrapy_readbook.pipelines.ScrapyReadbookPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Run result: the crawl writes a book.json file containing the name/src pairs collected from every matching page.
Code repository: https://gitee.com/heating-cloud/python_spider.git
Source: https://www.cnblogs.com/ckfuture/p/16330109.html