
Python requests library times out, but the browser gets a response


I am trying to build a web scraper for NBA data. When I run the following code:

import requests

response = requests.get('https://stats.nba.com/stats/leaguedashplayerstats?College=&Conference=&Country=&DateFrom=10%2F20%2F2017&DateTo=10%2F20%2F2017&Division=&DraftPick=&DraftYear=&GameScope=&GameSegment=&Height=&LastNGames=0&LeagueID=00&Location=&MeasureType=Base&Month=0&OpponentTeamID=0&Outcome=&PORound=0&PaceAdjust=N&PerMode=Totals&Period=0&PlayerExperience=&PlayerPosition=&PlusMinus=N&Rank=N&Season=2017-18&SeasonSegment=&SeasonType=Regular+Season&ShotClockRange=&StarterBench=&TeamID=0&VsConference=&VsDivision=&Weight=')

the request times out with the following error:

File "C:\ProgramData\Anaconda3\lib\site-packages\requests\api.py", line 70, in get
  return request('get', url, params=params, **kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\requests\api.py", line 56, in request
  return session.request(method=method, url=url, **kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 488, in request
  resp = self.send(prep, **send_kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 609, in send
  r = adapter.send(request, **kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\requests\adapters.py", line 473, in send
  raise ConnectionError(err, request=request)

ConnectionError: ('Connection aborted.', OSError("(10060, 'WSAETIMEDOUT')",))

However, when I open the same URL in a browser, I do get a response.

Solution:

It looks like the site you mentioned checks the "User-Agent" header of incoming requests. If you spoof the User-Agent so the request appears to come from a real browser, you will get a response.

For example:

>>> import requests
>>> url = "https://stats.nba.com/stats/leaguedashplayerstats?College=&Conference=&Country=&DateFrom=10%2F20%2F2017&DateTo=10%2F20%2F2017&Division=&DraftPick=&DraftYear=&GameScope=&GameSegment=&Height=&LastNGames=0&LeagueID=00&Location=&MeasureType=Base&Month=0&OpponentTeamID=0&Outcome=&PORound=0&PaceAdjust=N&PerMode=Totals&Period=0&PlayerExperience=&PlayerPosition=&PlusMinus=N&Rank=N&Season=2017-18&SeasonSegment=&SeasonType=Regular+Season&ShotClockRange=&StarterBench=&TeamID=0&VsConference=&VsDivision=&Weight="
>>> headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}

>>> response = requests.get(url, headers=headers)
>>> response.status_code
200

>>> response.text  # will return the website content
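
Building on the answer above, here is a minimal standalone sketch (not part of the original answer) that passes the same query string as a params dict, sets an explicit timeout so the script fails fast instead of hanging, and reads the JSON body. The resultSets / headers / rowSet keys are an assumption about the typical shape of stats.nba.com responses and may need adjusting.

import requests

URL = "https://stats.nba.com/stats/leaguedashplayerstats"

# Same query string as in the answer, expressed as a params dict for readability.
params = {
    "College": "", "Conference": "", "Country": "",
    "DateFrom": "10/20/2017", "DateTo": "10/20/2017",
    "Division": "", "DraftPick": "", "DraftYear": "",
    "GameScope": "", "GameSegment": "", "Height": "",
    "LastNGames": "0", "LeagueID": "00", "Location": "",
    "MeasureType": "Base", "Month": "0", "OpponentTeamID": "0",
    "Outcome": "", "PORound": "0", "PaceAdjust": "N",
    "PerMode": "Totals", "Period": "0", "PlayerExperience": "",
    "PlayerPosition": "", "PlusMinus": "N", "Rank": "N",
    "Season": "2017-18", "SeasonSegment": "",
    "SeasonType": "Regular Season", "ShotClockRange": "",
    "StarterBench": "", "TeamID": "0",
    "VsConference": "", "VsDivision": "", "Weight": "",
}

headers = {
    "User-Agent": (
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/61.0.3163.100 Safari/537.36"
    ),
}

# An explicit timeout makes the script raise instead of hanging
# when the server ignores non-browser requests.
response = requests.get(URL, params=params, headers=headers, timeout=10)
response.raise_for_status()

# Assumed response shape: a "resultSets" list whose entries have
# "headers" and "rowSet"; adjust if the actual payload differs.
data = response.json()
result = data["resultSets"][0]
for row in result["rowSet"][:5]:  # print the first few player rows
    print(dict(zip(result["headers"], row)))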

Tags: python, python-requests, web-scraping, user-agent
Source: https://codeday.me/bug/20190608/1195854.html