I have a block of code that takes a long time to execute and is CPU-intensive. I want to run that block several times and use the full power of my CPU for it. Looking at asyncio, I understood that it is mainly meant for asynchronous communication, but can also be used as a general tool for asynchronous tasks.
In the following example, time.sleep(y) is a placeholder for the code I want to run. Every coroutine is executed one after the other, and the whole run takes about 8 seconds.
import asyncio
import logging
import time


async def _do_compute_intense_stuff(x, y, logger):
    logger.info('Getting it started...')
    for i in range(x):
        time.sleep(y)
    logger.info('Almost done')
    return x * y


logging.basicConfig(format='[%(name)s, %(levelname)s]: %(message)s', level='INFO')
logger = logging.getLogger(__name__)
loop = asyncio.get_event_loop()
co_routines = [
    asyncio.ensure_future(_do_compute_intense_stuff(2, 1, logger.getChild(str(i))))
    for i in range(4)]
logger.info('Made the co-routines')
responses = loop.run_until_complete(asyncio.gather(*co_routines))
logger.info('Loop is done')
print(responses)
When I replace time.sleep(y) with asyncio.sleep(y), it returns almost immediately. With await asyncio.sleep(y), it takes about 2 seconds.
Is there a way to parallelize my code using this approach, or should I use multiprocessing or threading? Would I need to put the time.sleep(y) into a Thread?
await is the point at which your task tells the event loop that it is willing to let other tasks run. time.sleep() is the very opposite of cooperating: it blocks everything, so the event loop can't switch tasks. asyncio.sleep() produces a coroutine; if you don't await it, it won't do anything, so yes, you'd see an instant return.
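To actually use several cores for CPU-bound work, one common pattern (shown here as a minimal sketch, without the logging setup from the question) is to hand the blocking function to a ProcessPoolExecutor via loop.run_in_executor, so each call runs in its own worker process while the event loop simply awaits the results:

import asyncio
import concurrent.futures
import time


def compute_intense_stuff(x, y):
    # Ordinary blocking function: it runs in a worker process, not in the event loop.
    for _ in range(x):
        time.sleep(y)  # placeholder for the real CPU-bound work
    return x * y


async def main():
    loop = asyncio.get_event_loop()
    # Each call is scheduled on its own worker process, so the four
    # two-second jobs run in parallel instead of back to back.
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as pool:
        tasks = [loop.run_in_executor(pool, compute_intense_stuff, 2, 1)
                 for _ in range(4)]
        responses = await asyncio.gather(*tasks)
    print(responses)


if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())

A ThreadPoolExecutor would work the same way for the time.sleep placeholder, but real CPU-bound Python code in threads is still limited by the GIL, which is why a process pool is usually the better fit for this kind of workload.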