There are many good articles on how to take a single function and run it in parallel over different values of its argument (for example, "I thoroughly investigated Python's parallel and concurrent processing").
However, I haven't seen many that cover running multiple already-defined functions that take no arguments in parallel (admittedly, the use cases may be limited), so I'll summarize how to do it here.
The code is also available on GitHub.
Since joblib is used for parallel processing this time, install it first if you haven't already.
pip install joblib
First, define a function that simply executes the given functions in order.
def run_func(*funcs):
    # Call each passed-in function in order
    [f() for f in funcs]
Next, define a function that runs these functions in parallel.
from joblib import Parallel, delayed

def parallel_process_func(*target_funcs):
    # Dispatch each target function as a separate parallel job
    Parallel(n_jobs=-1)([delayed(run_func)(func) for func in target_funcs])
n_jobs is the number of jobs run at the same time; with -1, joblib uses as many CPU cores as the machine provides. If you set it to 3, for example, three jobs run in parallel.
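If you want to check how many cores -1 corresponds to on your machine, joblib also provides a cpu_count() helper. A minimal sketch (this snippet is my own addition, not part of the original code):

from joblib import cpu_count

# Number of CPU cores joblib detects; n_jobs=-1 uses roughly this many workers
print(cpu_count())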
That's all. Let's actually run it.
def x():
    print('-- x() START --')
    [i for i in range(10000000)]
    print('-- x() END --')

def y():
    print('-- y() START --')
    [i for i in range(10000000)]
    print('-- y() END --')

parallel_process_func(x, y)
When executed, the output should look like the following.
-- x() START --
-- y() START --
-- x() END --
-- y() END --
You can see the two functions being processed in parallel.
Usability seems reasonable, since all you have to do is pass the function objects to the parallel_process_func defined above.
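If you also want each function's return value, one possible variant (a sketch of my own, not from the original; parallel_collect_func is a hypothetical name, and the functions must be defined at module top level so joblib can pickle them) is to pass each function to delayed directly:

from joblib import Parallel, delayed

def parallel_collect_func(*target_funcs):
    # Run each function as its own job and return their results as a list
    return Parallel(n_jobs=-1)([delayed(func)() for func in target_funcs])

results = parallel_collect_func(x, y)  # here this is [None, None], since x() and y() return nothing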