
Web Scraping: Simple Usage of the aiohttp Module


The code below uses asyncio together with aiohttp to download several images concurrently:

import asyncio
import aiohttp

urls = [
    'https://img.lianzhixiu.com/uploads/210304/37-21030410123B61.jpg',
    'https://img.lianzhixiu.com/uploads/210325/37-2103250930025H.jpg',
    'https://img.lianzhixiu.com/uploads/210208/37-21020P920505S.jpg',
]

async def aiodownload(url):
    name = url.rsplit('/', 1)[1]  # split the URL on the last '/' to get the file name
    print(name)
    async with aiohttp.ClientSession() as session:  # the async counterpart of requests
        async with session.get(url) as resp:  # like resp = requests.get(url)
            # once the response arrives, write the body to a file
            with open(name, mode='wb') as f:  # create the file
                # reading the response body is asynchronous, so it must be awaited
                f.write(await resp.content.read())
    print(name, 'download finished')

async def main():
    tasks = []
    for url in urls:
        # asyncio.wait() needs Task objects, not bare coroutines (required since Python 3.11)
        tasks.append(asyncio.create_task(aiodownload(url)))
    await asyncio.wait(tasks)

if __name__ == '__main__':
    asyncio.run(main())
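
When downloading a larger batch of URLs, it is common to share a single ClientSession and cap how many requests run at once. Below is a minimal sketch of that idea; the function names aiodownload_limited / main_limited and the limit of 3 concurrent downloads are illustrative assumptions, not part of the original code:

import asyncio
import aiohttp

async def aiodownload_limited(sem, session, url):
    name = url.rsplit('/', 1)[1]
    async with sem:  # wait for a free slot before issuing the request
        async with session.get(url) as resp:
            with open(name, mode='wb') as f:
                f.write(await resp.content.read())
    print(name, 'download finished')

async def main_limited():
    sem = asyncio.Semaphore(3)  # assumed limit: at most 3 downloads in flight
    async with aiohttp.ClientSession() as session:  # one session shared by all tasks
        tasks = [asyncio.create_task(aiodownload_limited(sem, session, url))
                 for url in urls]
        await asyncio.gather(*tasks)

Run it the same way as the original, with asyncio.run(main_limited()).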

 

Source: https://www.cnblogs.com/longly1111/p/16215123.html