Snippets: standalone py run_parallel function

syIsTyping
don’t code me on that
1 min read · Dec 26, 2021

Sometimes while writing quick py scripts, we find a need to run a bunch of functions in parallel. For example, sending a bunch of slow SQLi payloads to a target server, or scraping a bunch of websites.

One way to do this would be to convert the functions to coroutines (async/await syntax), or to use a library that is async (eg, httpx instead of requests). But if we have existing functions and don’t want to spend too much time converting them, it would be nice to have a way to run a group of functions in parallel and return the results as a list, in order.
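For reference, here is a rough sketch of that coroutine alternative. The `fetch_page` function and URLs are hypothetical placeholders, just to show the shape of the rewrite:

```python
import asyncio

async def fetch_page(url):
    # Simulate a slow network call with a non-blocking sleep.
    await asyncio.sleep(0.1)
    return f"fetched {url}"

async def main():
    urls = ["https://example.com/a", "https://example.com/b"]
    # gather() runs the coroutines concurrently and preserves argument order.
    return await asyncio.gather(*(fetch_page(u) for u in urls))

print(asyncio.run(main()))
```

The catch is that every slow function has to be rewritten with async/await, which is exactly the work we want to avoid here.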

Here’s a quick-and-dirty, standalone, generic py function that runs a group of functions in parallel and returns the results as a list, in order.

def run_parallel(*funcs):
    from threading import Thread
    results = {}
    # Default args (i=i, f=f) bind each lambda to its own index and function.
    threads = [Thread(target=lambda i=i, f=f: results.update({i: f()}))
               for i, f in enumerate(funcs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Sort by index so results come back in the order the funcs were passed.
    return [results[key] for key in sorted(results)]
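As a quick sanity check (my addition, not from the original snippet), the sleeps should overlap rather than add up, so total wall time lands near the longest sleep:

```python
from threading import Thread
from time import sleep, monotonic

def run_parallel(*funcs):
    results = {}
    threads = [Thread(target=lambda i=i, f=f: results.update({i: f()}))
               for i, f in enumerate(funcs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results[key] for key in sorted(results)]

start = monotonic()
# sleep() returns None, so `sleep(x) or "a"` evaluates to "a".
out = run_parallel(lambda: sleep(0.2) or "a",
                   lambda: sleep(0.1) or "b")
elapsed = monotonic() - start
print(out)  # ['a', 'b'] -- elapsed is ~0.2s, not 0.3s
```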

Just copy the entire function into your script as-is, and call it by passing in functions either as individual args or by unpacking a list. Eg:

from time import sleep

def foo(s):
    sleep(s)
    print(s)
    return s

print(run_parallel(lambda: foo(0), lambda: foo(2), lambda: foo(1)))

funcs = [
    lambda: foo(0),
    lambda: foo(2),
    lambda: foo(1),
]
print(run_parallel(*funcs))

Either method will return [0, 2, 1]: the results come back in the order the functions were passed in, even though the functions finish at different times. The full output:

0
1
2
[0, 2, 1]
0
1
2
[0, 2, 1]
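As a side note (my comparison, not from the original post): the standard library’s concurrent.futures can give the same in-order behaviour without a hand-rolled helper, at the cost of an extra import:

```python
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def foo(s):
    sleep(s / 10)  # scaled down so the demo runs quickly
    return s

with ThreadPoolExecutor() as pool:
    # map() yields results in submission order, like run_parallel.
    out = list(pool.map(foo, [0, 2, 1]))
print(out)  # [0, 2, 1]
```

The copy-paste run_parallel still wins when you want zero setup and to pass arbitrary zero-arg lambdas rather than one function over an iterable.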
