You could use pathos (and its sister package pyina) to help you figure out exactly how you want to distribute the code in parallel.
pathos provides a unified API for parallel processing across threading, multiprocessing, and sockets. The API provides Pool objects that have blocking, non-blocking iterative, and asynchronous map and pipe methods. pyina extends this API to MPI and to schedulers such as Torque and SLURM. You can generally nest these constructs, giving you heterogeneous and hierarchical parallel distributed computing.
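For example, here's a minimal sketch of that Pool API (assuming a recent pathos; the class and method names below are from pathos.pools):

    # minimal sketch of the pathos Pool API, assuming a recent pathos
    from pathos.pools import ProcessPool

    def f(x):
        return x * x

    if __name__ == '__main__':
        pool = ProcessPool(nodes=4)

        # blocking map: waits for, then returns, the full list of results
        print(pool.map(f, range(10)))

        # non-blocking iterative map: an iterator over results
        print(list(pool.imap(f, range(10))))

        # asynchronous map: returns immediately; collect with .get()
        results = pool.amap(f, range(10))
        print(results.get())

        # pipe runs a single call in the pool; apipe is its async variant
        print(pool.pipe(f, 7))
        print(pool.apipe(f, 7).get())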
You shouldn't need to modify your code at all to use pathos (and pyina).
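To illustrate: the same plain function can be handed to nested pathos pools for hierarchical parallelism, or to a pyina launcher for MPI. This is a hedged sketch; the Mpi launcher name is from pyina.launchers, and a working MPI installation is assumed:

    # hedged sketch: nested pools, and the same unmodified function over MPI
    from pathos.pools import ProcessPool, ThreadPool

    def f(x):
        return x * x

    def batch(xs):
        # inner level: threads inside each worker process
        return ThreadPool(2).map(f, xs)

    if __name__ == '__main__':
        # outer level: processes; nesting gives hierarchical parallelism
        print(ProcessPool(2).map(batch, [range(0, 5), range(5, 10)]))

        # pyina: same map interface, launched over MPI
        # (assumes pyina and a working MPI backend)
        from pyina.launchers import Mpi
        print(Mpi(4).map(f, range(10)))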
There are a few examples of this on SO, including these:
Python Multiprocessing with Distributed Cluster Using Pathos and
Python Multiprocessing with Distributed Cluster
and in the examples directories of pathos, pyina, and mystic, found here:
https://github.com/uqfoundation
RabbitMQ may offer one approach; if you'd rather not build that plumbing yourself, something more solid such as MapReduce (see e.g. github.com/GoogleCloudPlatform/appengine-mapreduce/wiki/… ) or its popular dialect Hadoop may free you from substantial "housekeeping work". But it's important that you consider and express exactly what constraints you need to satisfy!
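For the RabbitMQ route, the usual pattern is a work queue: a producer publishes tasks, and any number of consumers pull and acknowledge them. A hedged sketch with the pika client (pika >= 1.0 API; the 'tasks' queue name and payloads are illustrative assumptions):

    # hedged sketch of a RabbitMQ work queue with pika;
    # the 'tasks' queue name and payloads are made up for illustration
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    ch = conn.channel()
    ch.queue_declare(queue='tasks', durable=True)

    # producer side: publish one task per message
    for n in range(10):
        ch.basic_publish(
            exchange='',
            routing_key='tasks',
            body=str(n),
            properties=pika.BasicProperties(delivery_mode=2),  # persist message
        )

    # consumer side (run in each worker): ack only after the work is done
    def on_message(channel, method, properties, body):
        print(int(body) ** 2)
        channel.basic_ack(delivery_tag=method.delivery_tag)

    ch.basic_consume(queue='tasks', on_message_callback=on_message)
    ch.start_consuming()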