py3.5 aiohttp: a million requests?

2017-01-03 09:52

Notes

  • I had long wanted to write my own asynchronous load-testing client in Python. By chance I discovered aiohttp for Python 3, and found an English article on exactly this topic: constructing HTTP requests at the million scale.
  • http://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html
  • The article's author gives a detailed comparison of the load each approach puts on the server. I'll be lazy and skip that here...
  • The code below targets Python 3.5; for the Python 3.4 syntax, please look it up yourself.

Example 1

import asyncio
from aiohttp import ClientSession

# The async and await keywords turn an ordinary function into a coroutine
async def fetch(url):
    async with ClientSession() as session:
        async with session.get(url) as response:
            return await response.read()

async def run(loop, r):
    url = "http://gc.ditu.aliyun.com/geocoding?a=苏州市"
    tasks = []
    for i in range(r):
        task = asyncio.ensure_future(fetch(url))
        tasks.append(task)

    responses = await asyncio.gather(*tasks)
    # Note the use of asyncio.gather(): it collects all the Future
    # objects and waits for them all to complete.
    # print(json.loads(responses[0].decode()))  # needs import json
    print(len(responses))

loop = asyncio.get_event_loop()
future = asyncio.ensure_future(run(loop, 800))
loop.run_until_complete(future)
  • With future = asyncio.ensure_future(run(loop, 800)): when I raised this to 1000 requests, I got "too many open files" and a "loop is not closed" warning. The article's author hit the same problem and said the local machine had run out of socket file descriptors? I'm quite skeptical.
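The file-descriptor explanation is plausible: every in-flight request holds at least one open socket, and sockets count against the per-process open-file limit, whose default soft value on many Linux systems is 1024. A minimal sketch (POSIX-only, stdlib resource module; my own addition, not from the article) to inspect that limit:

```python
import resource

# Each concurrent HTTP request holds at least one open socket, and
# sockets count against the per-process open-file limit. With a default
# soft limit of 1024, ~1000 concurrent requests can plausibly fail
# with "too many open files".
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft, "hard limit:", hard)

# The soft limit can be raised up to the hard limit without root:
# resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

If the printed soft limit is near 1024, the error message lines up with the author's explanation.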

Example 2: using asyncio.Semaphore to fix the error

import asyncio
from aiohttp import ClientSession

async def fetch(url):
    async with ClientSession() as session:
        async with session.get(url) as response:
            return await response.read()


async def bound_fetch(sem, url):
    # The semaphore caps how many fetches can be in flight at once
    async with sem:
        return await fetch(url)

async def run(loop, r):
    url = "http://gc.ditu.aliyun.com/geocoding?a=苏州市"
    tasks = []
    # create instance of Semaphore
    sem = asyncio.Semaphore(100)
    for i in range(r):
        # pass the Semaphore into every GET request
        task = asyncio.ensure_future(bound_fetch(sem, url))
        tasks.append(task)

    responses = await asyncio.gather(*tasks)
    print(len(responses))

number = 100000
loop = asyncio.get_event_loop()

future = asyncio.ensure_future(run(loop, number))
loop.run_until_complete(future)
  • However, when I used asyncio.Semaphore, the requests did not succeed. I emailed the article's author about it but got no reply.
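One way to rule the semaphore itself out as the culprit is a network-free sketch: swap fetch for a dummy coroutine and record the peak number of workers inside the semaphore at once. The counters and the limit of 10 here are my own illustration, not from the article; asyncio.run needs Python 3.7+, on 3.5 use loop.run_until_complete instead:

```python
import asyncio

peak = 0    # highest number of workers seen inside the semaphore at once
active = 0  # workers currently inside the semaphore

async def worker(sem):
    global peak, active
    async with sem:
        active += 1
        peak = max(peak, active)
        await asyncio.sleep(0.01)  # stands in for the HTTP round-trip
        active -= 1

async def main():
    sem = asyncio.Semaphore(10)  # cap concurrency at 10
    await asyncio.gather(*(worker(sem) for _ in range(100)))

asyncio.run(main())
print(peak)  # never exceeds the semaphore's limit of 10
```

Since the semaphore does enforce the concurrency cap, the failures are more likely elsewhere, e.g. opening a fresh ClientSession per request instead of sharing one across all fetches.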