
The Scrapy Framework: CrawlSpider


I. When to Use

CrawlSpider can crawl a website automatically by following links according to rules, and it works whether the site's URLs follow a regular pattern or not (although it is most effective when they do).


II. Code Walkthrough

(1) Create the Scrapy project

    E:\myweb>scrapy startproject mycwpjt
    New Scrapy project 'mycwpjt', using template directory 'd:\\python35\\lib\\site-packages\\scrapy\\templates\\project', created in:
        D:\Python35\myweb\part16\mycwpjt
    You can start your first spider with:
        cd mycwpjt
        scrapy genspider example example.com
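
For orientation, scrapy startproject generates the usual project skeleton, roughly as shown below (exact files can vary slightly between Scrapy versions):

    mycwpjt/
        scrapy.cfg            # project deploy/configuration file
        mycwpjt/              # the project's Python module
            __init__.py
            items.py          # item definitions, edited in step (3)
            pipelines.py      # item pipelines, edited in step (4)
            settings.py       # project settings, edited in step (5)
            spiders/          # spiders live here, created in steps (2) and (6)
                __init__.py
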
(2) Create the crawler

    E:\myweb>scrapy genspider -t crawl weisuen sohu.com
    Created spider 'weisuen' using template 'crawl' in module:
        mycwpjt.spiders.weisuen
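
The -t crawl option selects the crawl template. To see which templates ship with Scrapy, run scrapy genspider -l; on a standard install it prints something like:

    Available templates:
      basic
      crawl
      csvfeed
      xmlfeed
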
(3) Write the item

    # -*- coding: utf-8 -*-
    # Define here the models for your scraped items
    #
    # See documentation in:
    # http://doc.scrapy.org/en/latest/topics/items.html
    import scrapy


    class MycwpjtItem(scrapy.Item):
        # define the fields for your item here like:
        name = scrapy.Field()    # the news page title
        link = scrapy.Field()    # the news page's canonical URL
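
A scrapy.Item works much like a dictionary that only accepts the declared fields. A minimal interactive sketch of how the spider will fill it (this assumes the project is on the Python path, and the values are placeholders):

    >>> from mycwpjt.items import MycwpjtItem
    >>> i = MycwpjtItem()
    >>> i["name"] = ["Example news title"]          # placeholder value
    >>> i["link"] = ["http://news.sohu.com/..."]    # placeholder value
    >>> dict(i)
    {'name': ['Example news title'], 'link': ['http://news.sohu.com/...']}
    >>> i["author"] = "x"    # raises KeyError: only declared fields are allowed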

(4) Write the pipeline

    # -*- coding: utf-8 -*-
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


    class MycwpjtPipeline(object):
        def process_item(self, item, spider):
            # Print the extracted title and link of each crawled news page
            print(item["name"])
            print(item["link"])
            return item
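
The pipeline above only prints each item. If you also want to keep the results, one common pattern (a sketch, not something the original article does; the file name mydata.json and the class name JsonWriterPipeline are assumptions) is to open a file when the spider starts and write one JSON line per item:

    # Sketch only: persist items as JSON lines
    import codecs
    import json


    class JsonWriterPipeline(object):
        def open_spider(self, spider):
            # mydata.json is an assumed output file name
            self.file = codecs.open("mydata.json", "w", encoding="utf-8")

        def process_item(self, item, spider):
            self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
            return item

        def close_spider(self, spider):
            self.file.close()

To take effect, such a pipeline would also need its own entry in ITEM_PIPELINES, as shown in the next step.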

(5) Configure settings

    ITEM_PIPELINES = {
        'mycwpjt.pipelines.MycwpjtPipeline': 300,
    }
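
The value 300 is the pipeline's order: Scrapy runs enabled pipelines in ascending order of this number, conventionally chosen between 0 and 1000. If the optional JsonWriterPipeline sketched above were enabled as well (hypothetical, not part of the original project), the setting would look like this:

    ITEM_PIPELINES = {
        'mycwpjt.pipelines.MycwpjtPipeline': 300,      # lower number, runs first
        'mycwpjt.pipelines.JsonWriterPipeline': 800,   # hypothetical second pipeline, runs after
    }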

(6) Write the spider

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule
    from mycwpjt.items import MycwpjtItem

    # List the available templates:            scrapy genspider -l
    # Scaffold created with the crawl template: scrapy genspider -t crawl weisun sohu.com
    # Start the crawl:                          scrapy crawl weisun --nolog


    class WeisunSpider(CrawlSpider):
        name = 'weisun'
        allowed_domains = ['sohu.com']
        start_urls = ['http://sohu.com/']

        rules = (
            # News page URLs look like:
            # "http://news.sohu.com/20160926/n469167364.shtml"
            # so the regular expression to extract them is '.*?/n.*?shtml'
            Rule(LinkExtractor(allow=('.*?/n.*?shtml'), allow_domains=('sohu.com')),
                 callback='parse_item', follow=True),
        )

        def parse_item(self, response):
            i = MycwpjtItem()
            # i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
            # Extract the news page title with an XPath expression
            i["name"] = response.xpath("/html/head/title/text()").extract()
            # Extract the current news page's canonical link with an XPath expression
            i["link"] = response.xpath("//link[@rel='canonical']/@href").extract()
            return i
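
Before launching the full crawl, you can sanity-check the rule's regular expression interactively with scrapy shell. A rough sketch (the start page and the links actually matched will of course differ from the 2016 example URL in the comment above):

    $ scrapy shell "http://news.sohu.com/"
    >>> from scrapy.linkextractors import LinkExtractor
    >>> le = LinkExtractor(allow=('.*?/n.*?shtml'), allow_domains=('sohu.com'))
    >>> [l.url for l in le.extract_links(response)][:5]   # first few links the Rule would follow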

     CrawlSpider is the spider class commonly used to crawl sites that follow certain link patterns. It extends Spider and adds a few attributes of its own, most notably rules.

      Because rules is a collection of Rule objects, Rule deserves a brief introduction. It takes several parameters: link_extractor, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=None.
     The link_extractor can be one you define yourself or an instance of the existing LinkExtractor class, whose main parameters are allow, deny, allow_domains, deny_domains, restrict_xpaths and restrict_css.
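
For example, a link extractor that follows Sohu news detail pages, skips a picture channel, and only looks for links inside a particular page region could be written as follows (the deny pattern and the XPath are illustrative assumptions, not taken from the original article):

    LinkExtractor(
        allow=('.*?/n.*?shtml',),                     # the link URL must match this regex
        deny=('pic\\.sohu\\.com',),                   # URLs matching this regex are skipped (assumed pattern)
        allow_domains=('sohu.com',),                  # restrict link extraction to this domain
        restrict_xpaths=('//div[@class="news"]',),    # only extract links inside this region (hypothetical XPath)
    )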

III. Results

Running scrapy crawl weisun --nolog makes the pipeline print, for every news page matched by the rule, the extracted title (name) and canonical link (link) to the console.



Source: https://blog.csdn.net/magicboom/article/details/89791680