
Solving 503 status code errors when using Scrapy and requests

Author: 互联网

The error log looks like this:

2021-07-11 02:19:11 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <503 https://xxxx.com/tags/undef>: HTTP status code is not handled or not allowed

Problem analysis

The body of the 503 response is Cloudflare's browser-check page:

Checking your browser before accessing xxxx.com
This process is automatic. Your browser will redirect to your requested content shortly.
Please allow up to 5 seconds…

The usual advice is to install cfscrape (pip install cfscrape), a library that solves Cloudflare's challenge page, and wire it into the spider's start_requests:

import scrapy
import cfscrape

class DrdSpider(scrapy.Spider):
    def start_requests(self):
        cf_requests = []
        for url in self.start_urls:
            # Solve the Cloudflare challenge once and reuse the clearance cookie
            token, agent = cfscrape.get_tokens(url, USER_AGENT)
            # token, agent = cfscrape.get_tokens(url)
            cf_requests.append(scrapy.Request(
                url=url,
                cookies={'__cfduid': token['__cfduid']},
                headers={'User-Agent': agent}))
            print("useragent in cfrequest:", agent)
            print("token in cfrequest:", token)
        return cf_requests
Running this, however, raises an exception inside get_tokens:

Traceback (most recent call last):
  File "C:\workspace\new-crm-agent\env\lib\site-packages\scrapy\core\engine.py", line 129, in _next_request
    request = next(slot.start_requests)
  File "C:\workspace\phub\scrapy_obj\mySpider\spiders\drd.py", line 35, in start_requests
    token, agent = cfscrape.get_tokens(url)
  File "C:\workspace\new-crm-agent\env\lib\site-packages\cfscrape\__init__.py", line 398, in get_tokens
    'Unable to find Cloudflare cookies. Does the site actually have Cloudflare IUAM ("I\'m Under Attack Mode") enabled?'
ValueError: Unable to find Cloudflare cookies. Does the site actually have Cloudflare IUAM ("I'm Under Attack Mode") enabled?

So the question becomes: why does a request made through cfscrape come back 200, while the Scrapy crawl gets 503? To rule out headers as the cause, the next test uses a plain requests session with browser-like headers:

import requests
from collections import OrderedDict

if __name__ == "__main__":
    session = requests.session()
    # Browser-like headers in a fixed order; 'Host': None lets requests fill it in
    heads = OrderedDict([('Host', None),
             ('Connection', 'keep-alive'),
             ('Upgrade-Insecure-Requests', '1'),
             ('User-Agent',
              'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.78 Safari/537.36'),
             ('Accept',
              'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'),
             ('Accept-Language', 'en-US,en;q=0.9'),
             ('Accept-Encoding', 'gzip, deflate')])
    session.headers = heads
    resp = session.get("https://drd.com/tags/undi")
    print(resp)

The result:

<Response [503]>
Process finished with exit code 0

Even with browser-like headers, requests is still blocked, so the difference must lie below HTTP. The cfscrape source shows what it does differently: it mounts a custom HTTPS adapter that rewrites the TLS cipher list, which changes the TLS fingerprint Cloudflare sees:

class CloudflareAdapter(HTTPAdapter):
    """ HTTPS adapter that creates an SSL context with custom ciphers """

    def get_connection(self, *args, **kwargs):
        conn = super(CloudflareAdapter, self).get_connection(*args, **kwargs)

        if conn.conn_kw.get("ssl_context"):
            conn.conn_kw["ssl_context"].set_ciphers(DEFAULT_CIPHERS)
        else:
            context = create_urllib3_context(ciphers=DEFAULT_CIPHERS)
            conn.conn_kw["ssl_context"] = context

        return conn
        
class CloudflareScraper(Session):
    def __init__(self, *args, **kwargs):
        self.delay = kwargs.pop("delay", None)
        # Use headers with a random User-Agent if no custom headers have been set
        headers = OrderedDict(kwargs.pop("headers", DEFAULT_HEADERS))

        # Set the User-Agent header if it was not provided
        headers.setdefault("User-Agent", DEFAULT_USER_AGENT)

        super(CloudflareScraper, self).__init__(*args, **kwargs)

        # Define headers to force using an OrderedDict and preserve header order
        self.headers = headers
        self.org_method = None

        self.mount("https://", CloudflareAdapter())
This hinges on SSLContext.set_ciphers(ciphers), which the Python standard library documents as follows:

Set the available ciphers for sockets created with this context. It should be a string in the OpenSSL cipher list format. If no cipher can be selected (because compile-time options or other configuration forbids use of all the specified ciphers), an SSLError will be raised.

Note: on connected sockets, the SSLSocket.cipher() method will give the currently selected cipher.
TLS 1.3 cipher suites cannot be disabled with set_ciphers().

The fix for Scrapy, then, is to change its TLS cipher string in settings.py so that its fingerprint no longer matches the one being blocked:

DOWNLOADER_CLIENT_TLS_CIPHERS = "DEFAULT:!DH"
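For plain requests scripts (like the failing test above), the same idea can be applied by mounting a transport adapter that supplies a custom ssl_context. This is a hedged sketch, not part of the original post; CipherAdapter is a hypothetical name:

```python
import ssl
import requests
from requests.adapters import HTTPAdapter

class CipherAdapter(HTTPAdapter):
    """Hypothetical adapter forcing the same 'DEFAULT:!DH' cipher string."""
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.set_ciphers("DEFAULT:!DH")
        # urllib3's PoolManager accepts ssl_context as a connection kwarg
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", CipherAdapter())
# session.get(...) now presents the modified TLS fingerprint
```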

Source: https://www.cnblogs.com/liuchaohao/p/14995526.html