# Python Async Programming in Practice: Making Your Code Run Faster Than AI

Hi everyone, I'm 船长 (Captain).

Last week a reader asked me: "Captain, my crawler takes 3 hours to get through 10,000 URLs. Is there any way to speed it up?"

I looked at his code. He was using the synchronous `requests` library. With 10,000 URLs at roughly 1 second each, no amount of tuning breaks through that ceiling. After switching to async, the same job finished in 8 minutes.

That's today's topic: asynchronous programming in Python.

## 1. Why Is Async So Much Faster?

First, one core concept: synchronous vs. asynchronous.

Synchronous code is like queuing for bubble tea. You stand at the counter and wait for the clerk to finish your cup before the next person can order. 10,000 people means 10,000 rounds of waiting.

Asynchronous code is like a self-service ordering kiosk. You place your order, go play on your phone, and come back when you're called. 10,000 people can all order at once and pick up when ready.

The key point: while waiting on IO, the CPU is idle. Network requests, file reads and writes, database queries: these operations spend 99% of their time waiting. Async programming lets you do other work during that wait.

## 2. asyncio Basics: Hello World

The simplest possible example:

```python
import asyncio

async def say_hello():
    print("Hello")
    await asyncio.sleep(1)  # simulate waiting on IO
    print("World")

# run it
asyncio.run(say_hello())
```

A few notes:

- `async def` defines a coroutine function
- `await` suspends the current coroutine and hands control back to the event loop
- `asyncio.run()` is the entry point

## 3. Scenario 1: Concurrent HTTP Requests

This is the most common use case. Install the dependency:

```bash
pip install aiohttp
```

Full code:

```python
import asyncio
import aiohttp
import time

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        results = await asyncio.gather(*tasks)
    return results

# test it
urls = ["https://httpbin.org/delay/1" for _ in range(100)]
start = time.time()
asyncio.run(main(urls))
print(f"100 requests took: {time.time() - start:.2f}s")
```

Output:

```
100 requests took: 1.23s
```

The same job with synchronous `requests` takes about 100 seconds. That's an 80x speedup. (A sketch of the synchronous baseline is in the appendix at the end of this post.)

## 4. Scenario 2: Batch File IO

```python
import asyncio
import time
import aiofiles

async def read_file(path):
    async with aiofiles.open(path, mode="r") as f:
        return await f.read()

async def process_files(file_paths):
    tasks = [read_file(path) for path in file_paths]
    contents = await asyncio.gather(*tasks)
    return contents

# usage
file_list = [f"data_{i}.txt" for i in range(1000)]
start = time.time()
contents = asyncio.run(process_files(file_list))
print(f"Reading 1000 files took: {time.time() - start:.2f}s")
```

`aiofiles` makes file IO async too.

## 5. Scenario 3: Batch Database Inserts

```python
import asyncio
import asyncpg

async def batch_insert(records):
    conn = await asyncpg.connect(
        host="localhost",
        port=5432,
        user="user",
        password="password",
        database="dbname",
    )
    # batch insert is roughly 10x faster than inserting row by row
    await conn.executemany(
        "INSERT INTO users(id, name) VALUES($1, $2)",
        records,
    )
    await conn.close()

# usage
records = [(i, f"user_{i}") for i in range(10000)]
asyncio.run(batch_insert(records))
```

`asyncpg` is an async driver for PostgreSQL, and it's much faster than the synchronous `psycopg2`.

## 6. Scenario 4: An Async Crawler, End to End

```python
import asyncio
import aiohttp
import aiofiles
from bs4 import BeautifulSoup

async def crawl_page(session, url, semaphore):
    async with semaphore:  # cap concurrency to avoid getting banned
        try:
            async with session.get(url) as response:
                html = await response.text()
            soup = BeautifulSoup(html, "html.parser")
            title = soup.find("title").text
            # write the result to a file asynchronously
            async with aiofiles.open(f"output/{url.split('/')[-1]}.txt", "w") as f:
                await f.write(title)
            return title
        except Exception as e:
            print(f"Error: {url} - {e}")
            return None

async def main(start_url, max_pages=100):
    # 1. Collect the URLs to crawl (link-discovery logic omitted here)
    urls = [f"https://example.com/page/{i}" for i in range(max_pages)]

    # 2. Fetch concurrently, with at most 50 requests in flight
    semaphore = asyncio.Semaphore(50)
    async with aiohttp.ClientSession() as session:
        tasks = [crawl_page(session, url, semaphore) for url in urls]
        results = await asyncio.gather(*tasks)
    return results

# run it
asyncio.run(main("https://example.com"))
```

Key points:

- `Semaphore` caps concurrency; set it too high and your IP gets banned
- Exception handling is mandatory; network requests can fail at any time
- `gather` collects all the results

## 7. Caveats

1. Don't mix sync and async. `requests` is a synchronous library: called inside an `async` function, it blocks the event loop. Use `aiohttp` instead. (If you have no choice, see the `asyncio.to_thread` sketch in the appendix.)

2. Async is not multithreading. asyncio runs in a single thread and merely switches execution between coroutines. CPU-bound work such as encryption or compression needs `multiprocessing`; see the process-pool sketch in the appendix.

3. Debugging is harder than with sync code. `print` output may not appear in order; use `logging` instead (a minimal setup is sketched in the appendix).

## Summary

The core use cases for async programming:

- Network requests: crawlers, API calls
- File IO: batch reads and writes
- Database operations: batch inserts and queries
- Message queue consumption

The payoff: 10-100x speedups on IO-bound tasks. The code is somewhat more complex than the synchronous version, but it's worth it.

The full code is on GitHub; reply "异步" in the official account backend to get the link. Questions welcome in the comments.

【船长Talk】Focused on data analysis, workplace truths, and investment insights.
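
## Appendix: A Few Sketches

For reference, here is the synchronous baseline behind the "about 100 seconds" figure in section 3. A minimal sketch using `requests` against the same httpbin endpoint; actual timings will vary with your network:

```python
import time
import requests

# Synchronous baseline: each request blocks until its response arrives,
# so 100 one-second requests take roughly 100 seconds in total.
urls = ["https://httpbin.org/delay/1" for _ in range(100)]

start = time.time()
results = [requests.get(url).text for url in urls]
print(f"100 requests took: {time.time() - start:.2f}s")
```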
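
For caveat 1: if you're stuck with a synchronous library, one escape hatch is `asyncio.to_thread` (Python 3.9+), which runs the blocking call in a worker thread so the event loop stays responsive. A minimal sketch, again assuming the httpbin endpoint; treat it as a workaround, not a substitute for a native async client like `aiohttp`:

```python
import asyncio
import requests

def fetch_sync(url):
    # Blocking call; it must never run directly inside a coroutine
    return requests.get(url).text

async def main():
    urls = ["https://httpbin.org/delay/1" for _ in range(10)]
    # Each blocking call is pushed to a worker thread;
    # the event loop keeps running in the meantime
    results = await asyncio.gather(*(asyncio.to_thread(fetch_sync, u) for u in urls))
    print(f"Fetched {len(results)} pages")

asyncio.run(main())
```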
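
For caveat 2: here is a sketch of handing CPU-bound work off to a process pool from async code via `loop.run_in_executor`. The repeated hashing is just a stand-in for real encryption or compression workloads:

```python
import asyncio
import hashlib
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(data: bytes) -> str:
    # CPU-bound stand-in: run inline, this would starve the event loop,
    # because asyncio never gets a chance to switch tasks
    for _ in range(1_000_000):
        data = hashlib.sha256(data).digest()
    return data.hex()

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Each chunk runs in its own process, so all CPU cores get used
        tasks = [loop.run_in_executor(pool, cpu_heavy, f"chunk-{i}".encode())
                 for i in range(4)]
        results = await asyncio.gather(*tasks)
    print([r[:16] for r in results])

if __name__ == "__main__":  # guard required by multiprocessing's spawn mode
    asyncio.run(main())
```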
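
And for the `logging` point in caveat 3, a minimal setup. Timestamps make the interleaved output of concurrent tasks traceable in a way bare `print` calls are not:

```python
import asyncio
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
)
log = logging.getLogger("crawler")

async def worker(n):
    log.info("task %d started", n)
    await asyncio.sleep(1)  # simulate IO; tasks interleave here
    log.info("task %d done", n)

async def main():
    await asyncio.gather(*(worker(i) for i in range(3)))

asyncio.run(main())
```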