One way to do it (after fixing the indentation of sigma):
# -*- coding: utf-8 -*-
import multiprocessing as mp
def sigma(b):
    n = 0
    for i in range(1, 550):
        n = n + i + b
    return n

if __name__ == '__main__':
    inputs_b = [1, 2, 3, 4]
    with mp.Pool(processes=2) as p:
        res = p.map(sigma, inputs_b)
The only issue with multiprocessing is that you cannot run it from within some IDEs (like Spyder), so you need to save the results to disk and retrieve them later.
That can be done with numpy, pandas, pickle, or others.
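For instance, the results can be dumped with pickle. This is just a minimal sketch: the file name results.pkl and the values in res are illustrative, not part of the original answer.

```python
import pickle

# Illustrative results, standing in for what p.map(sigma, inputs_b) returned
res = [10, 20, 30, 40]

# Save the results to disk so they survive after the script exits
with open("results.pkl", "wb") as f:
    pickle.dump(res, f)

# Retrieve them later, e.g. from a separate interactive session
with open("results.pkl", "rb") as f:
    loaded = pickle.load(f)
```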
Then you might need to have multiple arguments. In this case, use starmap():
# -*- coding: utf-8 -*-
import multiprocessing as mp
def sigma(a, b):
    n = 0
    for i in range(1, 550):
        n = n + i + b + a
    return n

if __name__ == '__main__':
    inputs_b = [(a, b) for a in range(5) for b in range(6, 10)]
    with mp.Pool(processes=2) as p:
        res = p.starmap(sigma, inputs_b)
N.B.: processes=N sets the number of worker processes to open. It is recommended to use the number of physical CPUs, or the number of CPUs minus one.
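If you do not want to hardcode N, mp.cpu_count() reports the number of logical CPUs; the minus-one heuristic below leaves one core free for the rest of the system:

```python
import multiprocessing as mp

# Leave one CPU free, but never go below one worker
n_workers = max(1, mp.cpu_count() - 1)

# This value can then be passed as mp.Pool(processes=n_workers)
```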
EDIT2: Your dummy example is a very simple case. You have two options: write your function so that it performs one elementary task and parallelize over the elementary tasks, OR take your big function that runs for 72 hours and run 4 or more instances of it at the same time on different inputs.
You also need to make sure that the processes do not use shared resources; otherwise you will need a more complex implementation.
Finally, using multiprocessing with functions that generate a lot of data may end in a MemoryError (not enough RAM). Whether it does depends on the application.
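One way to keep memory in check is to consume results as they arrive with imap instead of collecting the whole list at once. The sketch below reuses the sigma function from the first example and reduces the results to a running total; the chunksize value is an arbitrary choice, not a recommendation:

```python
import multiprocessing as mp

def sigma(b):
    n = 0
    for i in range(1, 550):
        n = n + i + b
    return n

def reduce_results(inputs):
    total = 0
    with mp.Pool(processes=2) as p:
        # imap yields results one at a time instead of building the
        # full list in memory, so each result can be consumed
        # (here: summed) and then discarded
        for r in p.imap(sigma, inputs, chunksize=16):
            total += r
    return total

if __name__ == '__main__':
    print(reduce_results(range(100)))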