Comprehensive Web Crawler Assignment
Author: Internet
Using the browser's Inspect Element tool, we can see that to crawl this data we must set the request headers properly when using the requests library, and in particular the Cookie.
Now let's start the analysis.
First, find the NetEase Cloud Music artist page:
On the left we can see the artist categories; each category corresponds to an id parameter in the URL. Within a category, artists are grouped by the first letter of their name, which corresponds to the initial parameter in the URL. As an example, here is the URL for Chinese artists whose names start with A:
url='http://music.163.com/#/discover/artist/cat?id=1001&initial=65'
Note that the '#/' in the address bar is a front-end route; the actual HTML is served from the same path without it, which is why the code below requests 'http://music.163.com/discover/artist/cat?...' instead. Therefore, simply by varying the id and initial parameters in this URL, we can crawl the information of every artist on NetEase Cloud Music.
ls1 = [1001, 1002, 1003, 2001, 2002, 2003, 6001, 6002, 6003, 7001, 7002, 7003, 4001, 4002, 4003]  # id values
ls2 = [-1, 0, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90]  # initial values
for i in ls1:
    for j in ls2:
        url = 'http://music.163.com/discover/artist/cat?id=' + str(i) + '&initial=' + str(j)
Here we create two lists to hold the id and initial values, from which we construct the URL of every artist page to be crawled.
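As a quick sanity check on the URL construction, the same nested loops can be sketched with itertools.product (using only a subset of the id list for brevity; the -1 and 0 initials presumably map to the "hot" and "other" tabs, and 65-90 are the ASCII codes for 'A'-'Z'):

```python
from itertools import product

# Sketch of the URL construction above; ids is a subset of ls1 for brevity.
ids = [1001, 1002, 1003]
initials = [-1, 0] + list(range(65, 91))  # 65-90 correspond to 'A'-'Z'

urls = ['http://music.163.com/discover/artist/cat?id=%d&initial=%d' % (i, j)
        for i, j in product(ids, initials)]
print(urls[0])    # http://music.163.com/discover/artist/cat?id=1001&initial=-1
print(len(urls))  # 84
```

With the full 15-element id list and 28 initials, the crawler visits 420 pages in total.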
Next we set the request headers. Open the browser's developer tools (right-click the page and choose Inspect), go to the Network tab, click Doc, and find the original request for the page (the file matching the URL). Click Headers, find the Request Headers section, and copy all of its values into the headers of our request.
Be sure not to miss the Cookie value.
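Since the Cookie header is just a '; '-separated list of key=value pairs, it can also be passed to requests as a dict via the cookies= parameter instead of one long string. A minimal sketch, using shortened placeholder values rather than a real session:

```python
# Turn a raw Cookie header string copied from DevTools into a dict.
# The values below are shortened placeholders, not a working session.
raw_cookie = '_iuqxldmzr_=32; __utmc=94650624'

cookies = dict(pair.split('=', 1) for pair in raw_cookie.split('; '))
print(cookies)  # {'_iuqxldmzr_': '32', '__utmc': '94650624'}
```

A call like requests.get(url, headers=headers, cookies=cookies) then sends these alongside the other headers.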
import requests
from bs4 import BeautifulSoup
import csv


# Fetch the artists on one category page and write them to the CSV
def get_artists(url):
    headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
               'Accept-Encoding': 'gzip, deflate',
               'Accept-Language': 'zh-CN,zh;q=0.9',
               'Connection': 'keep-alive',
               'Cookie': '_iuqxldmzr_=32; _ntes_nnid=0e6e1606eb78758c48c3fc823c6c57dd,1527314455632; '
                         '_ntes_nuid=0e6e1606eb78758c48c3fc823c6c57dd; __utmc=94650624; __utmz=94650624.1527314456.1.1.'
                         'utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); WM_TID=blBrSVohtue8%2B6VgDkxOkJ2G0VyAgyOY;'
                         ' JSESSIONID-WYYY=Du06y%5Csx0ddxxx8n6G6Dwk97Dhy2vuMzYDhQY8D%2BmW3vlbshKsMRxS%2BJYEnvCCh%5CKY'
                         'x2hJ5xhmAy8W%5CT%2BKqwjWnTDaOzhlQj19AuJwMttOIh5T%5C05uByqO%2FWM%2F1ZS9sqjslE2AC8YD7h7Tt0Shufi'
                         '2d077U9tlBepCx048eEImRkXDkr%3A1527321477141; __utma=94650624.1687343966.1527314456.1527314456'
                         '.1527319890.2; __utmb=94650624.3.10.1527319890',
               'Host': 'music.163.com',
               'Referer': 'http://music.163.com/',
               'Upgrade-Insecure-Requests': '1',
               'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
                             'Chrome/66.0.3359.181 Safari/537.36'}
    r = requests.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'html5lib')
    # Each artist is an <a> tag with this exact class string on the page
    for artist in soup.find_all('a', attrs={'class': 'nm nm-icn f-thide s-fc0'}):
        artist_name = artist.string
        artist_id = artist['href'].replace('/artist?id=', '').strip()
        try:
            writer.writerow((artist_id, artist_name))
        except Exception as msg:
            print(msg)


ls1 = [1001, 1002, 1003, 2001, 2002, 2003, 6001, 6002, 6003, 7001, 7002, 7003, 4001, 4002, 4003]  # id values
ls2 = [-1, 0, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90]  # initial values
# Output location; newline='' prevents csv from writing blank lines on Windows
csvfile = open('F://歌手信息.csv', 'a', encoding='utf-8', newline='')
writer = csv.writer(csvfile)
writer.writerow(('artist_id', 'artist_name'))
for i in ls1:
    for j in ls2:
        url = 'http://music.163.com/discover/artist/cat?id=' + str(i) + '&initial=' + str(j)
        get_artists(url)
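One refinement the original post does not cover, added here only as a suggestion: the double loop fires hundreds of requests back to back, so pausing briefly between them makes the crawler less likely to be rate-limited or blocked. A randomized delay helper might look like:

```python
import random
import time

def polite_delay(min_s=0.5, max_s=1.5):
    """Sleep for a random interval between requests; returns the delay used."""
    d = random.uniform(min_s, max_s)
    time.sleep(d)
    return d

d = polite_delay(0.01, 0.02)  # short bounds just for demonstration
print(0.01 <= d <= 0.02)      # True
```

Calling polite_delay() just before get_artists(url) inside the inner loop would add roughly one second of jitter per request.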
With that, the Python crawler that collects every artist's information from NetEase Cloud Music is complete; I stored the results as a CSV file. Let's take a look at the results:
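The original post shows a screenshot of the CSV at this point. Since it is not reproduced here, a small sketch of how to inspect the output with the csv module (writing a throwaway sample file in the same format rather than touching F://歌手信息.csv; the ids and names are made up for illustration):

```python
import csv

# A tiny sample file in the same format the crawler produces.
rows = [('artist_id', 'artist_name'), ('10001', 'Artist A'), ('10002', 'Artist B')]
with open('sample.csv', 'w', encoding='utf-8', newline='') as f:
    csv.writer(f).writerows(rows)

# Read it back and count the data rows (excluding the header).
with open('sample.csv', encoding='utf-8', newline='') as f:
    data = list(csv.reader(f))
print(data[0])        # ['artist_id', 'artist_name']
print(len(data) - 1)  # 2
```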
Tags: artist, url, initial, assignment, crawler, singer, headers, id, comprehensive. Source: https://www.cnblogs.com/hzj111/p/10786068.html