
[Web Scraping] Scraping a Shanbay Wordbook

Author: 互联网

# By Vax
# 2020/12/27 21:59
# linked from

import json

import requests
from lxml import etree
base_url = 'https://www.shanbay.com/wordlist/110521/232414/?page=%s'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36'
}

def get_text(value):
    """Return the first element of an XPath result list, or '' if the list is empty."""
    if value:
        return value[0]
    return ''

word_list = []
for i in range(1, 4):
    # Send the request for page i
    response = requests.get(base_url % i, headers=headers)
    # print(response.text)
    html = etree.HTML(response.text)
    tr_list = html.xpath('//tbody/tr')

    # Absolute XPaths for a sample row, kept for reference:
    # row:         /html/body/div[3]/div/div[1]/div[2]/div/table/tbody/tr[6]
    # word:        /html/body/div[3]/div/div[1]/div[2]/div/table/tbody/tr[6]/td[1]/strong
    # translation: /html/body/div[3]/div/div[1]/div[2]/div/table/tbody/tr[6]/td[2]
    # print(tr_list)
    for tr in tr_list:
        item = {}  # one {word: translation} entry for the word list
        en = get_text(tr.xpath('.//td[@class="span2"]/strong/text()'))
        tra = get_text(tr.xpath('.//td[@class="span10"]/text()'))
        print(en, tra)
        if en:
            item[en] = tra
            word_list.append(item)


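The script imports `json` but never writes `word_list` anywhere, so the save step was presumably omitted. A minimal sketch of persisting the results to disk; the filename `words.json` and the sample data are assumptions, not from the original post:

```python
import json

# Hypothetical sample in the same shape the scraper builds:
# a list of single-entry {word: translation} dicts.
word_list = [{"apple": "n. 苹果"}, {"banana": "n. 香蕉"}]

# ensure_ascii=False keeps the Chinese translations human-readable in the file.
with open("words.json", "w", encoding="utf-8") as f:
    json.dump(word_list, f, ensure_ascii=False, indent=2)
```

Reading the file back with `json.load` restores the same list of dicts.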

Sample output:

Source: https://blog.csdn.net/qq_41823684/article/details/114241817