Crawlers: Scraping Dynamic Pages with Scrapy and Splash
Author: Internet
- Install scrapy-splash:
pip install scrapy-splash
- Install Splash (as a Docker image):
sudo docker pull scrapinghub/splash
- Run Splash:
docker run -it -d -p 8050:8050 --name splash scrapinghub/splash
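Once the container is running, Splash exposes an HTTP API on port 8050. As a quick sanity check you can hit its render.html endpoint, which returns the page's HTML after JavaScript has executed. A minimal sketch that only builds the endpoint URL (it assumes Splash is listening on localhost:8050, as in the `docker run` command above):

```python
from urllib.parse import urlencode

# Assumes the container started above, i.e. Splash on localhost:8050.
SPLASH_HOST = "http://localhost:8050"

def render_url(page_url, wait=0.5):
    """Build a URL for Splash's render.html endpoint, which returns
    the page's HTML after JavaScript has run for `wait` seconds."""
    return f"{SPLASH_HOST}/render.html?{urlencode({'url': page_url, 'wait': wait})}"

# Open this URL in a browser (or curl it) to verify Splash is up:
print(render_url("https://www.toutiao.com/"))
```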
- Write the Scrapy project:
- Configure settings.py:
SPLASH_URL = 'http://xxx.xxx.xxx.xxx:8050'  # URL of the Splash service
DOWNLOADER_MIDDLEWARES = {
'scrapy_splash.SplashCookiesMiddleware': 723,
'scrapy_splash.SplashMiddleware': 725,
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
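The scrapy-splash documentation additionally recommends two settings so that request deduplication and the HTTP cache take Splash arguments into account; these can be appended to settings.py:

```python
# Deduplicate requests with Splash arguments factored in.
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
# HTTP cache storage that understands Splash requests.
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
```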
- Write the spider (using Toutiao as an example):
import scrapy
from scrapy_splash import SplashRequest

class MySpider(scrapy.Spider):
    name = 'ddd'

    def start_requests(self):
        url = 'https://www.toutiao.com/'
        # Route the request through Splash so JavaScript-rendered
        # content is available; wait 0.5 s for the page to render.
        yield SplashRequest(url=url, callback=self.parse,
                            args={'wait': 0.5}, dont_filter=True)

    def parse(self, response):
        titles = response.xpath("//p[@class='title']/text()").extract()
        for title in titles:
            # Python 3 strings are Unicode, so the original Python 2
            # gbk re-encoding workaround for garbled output is not needed.
            print(title)
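A fixed wait is not always enough. For pages that need scripted interaction (scrolling, clicking, waiting for a selector), SplashRequest can target Splash's 'execute' endpoint with a Lua script. A minimal sketch; the script below is an illustrative assumption, not part of the original spider:

```python
# Illustrative Lua script for Splash's 'execute' endpoint: load the
# page, wait for rendering, and return the final HTML.
lua_script = """
function main(splash, args)
    splash:go(args.url)
    splash:wait(1.0)
    return {html = splash:html()}
end
"""

# Inside the spider this would replace the plain SplashRequest, e.g.:
# yield SplashRequest(url, self.parse, endpoint='execute',
#                     args={'lua_source': lua_script})
```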
Source: https://blog.csdn.net/jianmoumou233/article/details/79832644