Python Multiprocessing: 7-Day Crash Course

Super Fast Python
10 min read · Nov 26, 2023

The Python multiprocessing module allows you to create and manage new child processes in Python.

It is specifically designed for you to develop parallel Python programs and make use of all of the CPU cores available in your system.

It does this by side-stepping the infamous Global Interpreter Lock (GIL) that limits Python threads in the threading module.

Multiprocessing is not perfect: sharing data between processes via inter-process communication has a cost. But if the data needed by a new process is small, or processes can load and save data directly, then multiprocessing and process-based concurrency provide an excellent way to implement parallelism in Python.

This crash course is designed to get you up to speed with Python multiprocessing, super fast!

Course Structure:

You have a lot of fun ahead, including:

  • Lesson 01: How to run functions in new processes
  • Lesson 02: How to extend the Process class
  • Lesson 03: How to protect code with a mutex
  • Lesson 04: How to limit access with a semaphore
  • Lesson 05: How to share objects with a pipe
  • Lesson 06: How to use producer-consumer processes with a queue
  • Lesson 07: How to kill a process

I designed this course to be completed in one week (7 lessons in 7 days).

Take your time. Leave the page open in a browser tab and complete one lesson per day.

Download All Source Code:

You can download a zip of all the code used in this tutorial here:

Email Version of This Course (+PDF Cheat Sheet)

If you would also like to receive this crash course via email, one lesson per day, you can sign up here:

Free Book-Length Guide to Python Multiprocessing:

Multiprocessing is a massive topic and we can’t cover it all.

If you want to go deeper, I recommend my massive Python Multiprocessing guide:

Quick Question:

Your first lesson in this series is up next.

Before then, a quick question:

Why are you interested in Python multiprocessing?

Let me know in the comments or via email. Maybe I can point you in the right direction and save you a ton of time!

Lesson 01: Run A Function In A New Process

We can easily run a function in a new process.

First, we must create an instance of the Process class and specify the function to run via the target argument.

Next, we can start the process by calling the start() method.

A new instance of the Python interpreter will be created, and a new thread within the new process will execute our target function.

And that’s all there is to it.

We do not have control over when the process will execute precisely or which CPU core will execute it. Both of these are low-level responsibilities that are handled by the underlying operating system.

This approach is great for running one-off tasks in a new process.

The example below provides a complete working example of running a function in a new process.

# SuperFastPython.com
# example of running a function in another process
from time import sleep
from multiprocessing import Process

# a custom function that blocks for a moment
def task():
    # block for a moment
    sleep(1)
    # display a message
    print('This is from another process')

# entry point
if __name__ == '__main__':
    # create a process
    process = Process(target=task)
    # run the process
    process.start()
    # wait for the process to finish
    print('Waiting for the process...')
    process.join()

Try running the example.
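As a small extension (a sketch, not part of the lesson's example), we can also pass arguments to the target function via the "args" keyword, which takes a tuple:

# sketch: passing an argument to the target function via 'args'
from time import sleep
from multiprocessing import Process

# a task function that takes an argument
def task(message):
    # block for a moment
    sleep(1)
    # display the provided message
    print(message)

# entry point
if __name__ == '__main__':
    # positional arguments are passed as a tuple via 'args'
    process = Process(target=task, args=('Hello from a child process',))
    process.start()
    process.join()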

You can learn more about running functions in a new process in the tutorial:

Lesson 02: Extend The Process Class

We can extend the Process class to run our code in a new child process.

This can be achieved by first extending the Process class, just like any other Python class.

Then the run() function of the Process class must be overridden to contain the code that you wish to execute in another process.

An instance of the class can then be created and the new process started by calling the start() method.

And that’s it.

Tying this together, the complete example of executing code in another process by extending the Process class is listed below.

# SuperFastPython.com
# example of extending the Process class
from time import sleep
from multiprocessing import Process

# custom process class
class CustomProcess(Process):
    # override the run function
    def run(self):
        # block for a moment
        sleep(1)
        # display a message
        print('This is coming from another process')

# entry point
if __name__ == '__main__':
    # create the process
    process = CustomProcess()
    # start the process
    process.start()
    # wait for the process to finish
    print('Waiting for the process to finish')
    process.join()

Try running the example.
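As a variation (a sketch, not part of the lesson's example), a custom Process subclass can accept its own constructor arguments, as long as it initializes the parent class first:

# sketch: passing data into a custom Process via the constructor
from time import sleep
from multiprocessing import Process

# custom process class that takes an argument
class CustomProcess(Process):
    def __init__(self, message):
        # initialize the parent Process class first
        super().__init__()
        # store the data for use in run()
        self.message = message

    # override the run function
    def run(self):
        # block for a moment
        sleep(1)
        # display the stored message
        print(self.message)

# entry point
if __name__ == '__main__':
    # create the process with an argument
    process = CustomProcess('Hello from a custom process')
    process.start()
    process.join()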

You can learn more about how to extend the Process class in the tutorial:

Lesson 03: Lock A Process With A Mutex

We can use a mutual exclusion (mutex) lock for processes via the Lock class.

A mutual exclusion lock or mutex lock is a synchronization primitive intended to prevent a race condition.

An instance of the lock can be created and then acquired by processes before accessing a critical section, and released after the critical section.

The lock can be acquired via the acquire() method and released by calling the release() method.

We can achieve the same effect by using a lock object via the context manager interface. This is preferred as it ensures that the lock is always released, even if the block fails with an exception or returns.
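For reference, using the lock via the context manager is roughly equivalent to the following explicit pattern (a minimal sketch):

# sketch: explicit acquire/release, equivalent to the context manager
from multiprocessing import Lock

lock = Lock()
# acquire the lock, blocking until it is available
lock.acquire()
try:
    # critical section goes here
    ...
finally:
    # always release the lock, even if an exception was raised
    lock.release()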

A lock can be created and shared among multiple processes.

Tying this together, a complete example of sharing a lock among multiple processes is listed below.

# SuperFastPython.com
# example of a mutual exclusion (mutex) lock for processes
from time import sleep
from random import random
from multiprocessing import Process
from multiprocessing import Lock

# work function
def task(lock, identifier, value):
    # acquire the lock
    with lock:
        print(f'>process {identifier} got the lock, sleeping for {value}')
        sleep(value)

# entry point
if __name__ == '__main__':
    # create the shared lock
    lock = Lock()
    # create a number of processes with different sleep times
    processes = [Process(target=task, args=(lock, i, random())) for i in range(10)]
    # start the processes
    for process in processes:
        process.start()
    # wait for all processes to finish
    for process in processes:
        process.join()

Try running the example.

You can learn more about how to use mutex locks with processes in the tutorial:

Lesson 04: Semaphore With Processes

We can limit concurrent access by processes to a block of code using a semaphore.

A semaphore is a concurrency primitive that allows a limit on the number of processes (or threads) that can acquire a lock protecting a critical section.

Python provides the Semaphore class that can be configured to let a fixed number of processes acquire it. Any additional processes that attempt to acquire it will have to wait until a position becomes available.

This is helpful in many situations such as limiting access to a file or server resource.

The number of positions is specified when creating the Semaphore object.

The semaphore can then be acquired by calling the acquire() method, and released via the release() method.

Alternatively, the context manager interface can be used, which is preferred to ensure that each acquisition is always released.

Tying this together, the complete example of sharing a semaphore between processes is listed below.

# SuperFastPython.com
# example of using a semaphore
from time import sleep
from random import random
from multiprocessing import Process
from multiprocessing import Semaphore

# target function
def task(semaphore, number):
    # attempt to acquire the semaphore
    with semaphore:
        # simulate computational effort
        value = random()
        sleep(value)
        # report result
        print(f'Process {number} got {value}')

# entry point
if __name__ == '__main__':
    # create the shared semaphore
    semaphore = Semaphore(2)
    # create processes
    processes = [Process(target=task, args=(semaphore, i)) for i in range(10)]
    # start child processes
    for process in processes:
        process.start()
    # wait for child processes to finish
    for process in processes:
        process.join()

Try running the example.
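As an extension (a sketch, not part of the lesson's example), acquire() also accepts "block" and "timeout" arguments and returns whether the semaphore was acquired, so a process does not have to wait indefinitely for a position (the timeout value below is arbitrary):

# sketch: acquiring a semaphore with a timeout
from multiprocessing import Semaphore

semaphore = Semaphore(2)
# wait at most 1 second for a position to become available
if semaphore.acquire(timeout=1):
    try:
        # access the limited resource here
        ...
    finally:
        # free the position for other processes
        semaphore.release()
else:
    print('Could not acquire the semaphore in time')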

You can learn more about how to use semaphores with processes in the tutorial:

Lesson 05: Share Objects With A Pipe

We can share data between processes using the Pipe class.

A Pipe allows one process to send objects and another to receive them.

It is a helpful approach when one process generates objects or data and another needs to receive and use them.

By default, pipes provide two-way communication (duplex), although a one-way pipe can be created by setting the “duplex” argument to False.
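For example (a minimal sketch), in a one-way pipe the first connection returned can only receive and the second can only send:

# sketch: a one-way (non-duplex) pipe
from multiprocessing import Pipe

# conn1 is receive-only, conn2 is send-only
conn1, conn2 = Pipe(duplex=False)
conn2.send('hello')
print(conn1.recv())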

Tying this together, the example below shows a sender and a receiver process sharing data with a pipe.

# SuperFastPython.com
# example of using a pipe between processes
from time import sleep
from random import random
from multiprocessing import Process
from multiprocessing import Pipe

# generate work
def sender(connection):
    print('Sender: Running', flush=True)
    # generate work
    for i in range(10):
        # generate a value
        value = random()
        # block
        sleep(value)
        # send data
        connection.send(value)
    # all done
    connection.send(None)
    print('Sender: Done', flush=True)

# consume work
def receiver(connection):
    print('Receiver: Running', flush=True)
    # consume work
    while True:
        # get a unit of work
        item = connection.recv()
        # report
        print(f'>receiver got {item}', flush=True)
        # check for stop
        if item is None:
            break
    # all done
    print('Receiver: Done', flush=True)

# entry point
if __name__ == '__main__':
    # create the pipe
    conn1, conn2 = Pipe()
    # start the sender
    sender_process = Process(target=sender, args=(conn2,))
    sender_process.start()
    # start the receiver
    receiver_process = Process(target=receiver, args=(conn1,))
    receiver_process.start()
    # wait for all processes to finish
    sender_process.join()
    receiver_process.join()

Try running the example.
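One detail worth knowing: recv() blocks until an object arrives. If the receiver should not block forever, the connection's poll() method can check whether data is ready, optionally with a timeout (a sketch; the timeout value is arbitrary):

# sketch: checking a connection for data before receiving
def receiver(connection):
    while True:
        # wait up to 1 second for data to arrive
        if not connection.poll(timeout=1):
            # no data yet, try again (or do other work)
            continue
        # data is available, so recv() will not block
        item = connection.recv()
        # check for stop
        if item is None:
            break
        print(f'>receiver got {item}', flush=True)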

You can learn more about how to use pipes in the tutorial:

Lesson 06: Producer-Consumer Processes With A Queue

We can share data between producer and consumer processes using a shared queue.

The multiprocessing queue is process-safe, meaning that we can add and remove data from the queue without fear of race conditions, data loss, or corruption.

A Queue object can be created and shared among multiple processes.

Producer processes can add data to the queue by calling the put() method.

Consumer processes can retrieve data from the queue by calling the get() method. If there are no items on the queue, the call will block until data becomes available.
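If a consumer should not wait forever, get() also accepts a timeout and raises the standard queue.Empty exception when no item arrives in time (a minimal sketch; the timeout value is arbitrary):

# sketch: getting from a queue with a timeout
from queue import Empty
from multiprocessing import Queue

queue = Queue()
try:
    # wait at most 2 seconds for an item
    item = queue.get(timeout=2)
except Empty:
    print('No item arrived in time')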

The producer-consumer pattern is very common in concurrent programs, and the multiprocessing Queue class is a way we can implement it easily.

Tying this together, a complete example of a producer-consumer example with a Queue is listed below.

# SuperFastPython.com
# example of using the queue with processes
from time import sleep
from random import random
from multiprocessing import Process
from multiprocessing import Queue

# generate work
def producer(queue):
    print('Producer: Running', flush=True)
    # generate work
    for i in range(10):
        # generate a value
        value = random()
        # block
        sleep(value)
        # add to the queue
        queue.put(value)
    # all done
    queue.put(None)
    print('Producer: Done', flush=True)

# consume work
def consumer(queue):
    print('Consumer: Running', flush=True)
    # consume work
    while True:
        # get a unit of work
        item = queue.get()
        # check for stop
        if item is None:
            break
        # report
        print(f'>got {item}', flush=True)
    # all done
    print('Consumer: Done', flush=True)

# entry point
if __name__ == '__main__':
    # create the shared queue
    queue = Queue()
    # start the consumer
    consumer_process = Process(target=consumer, args=(queue,))
    consumer_process.start()
    # start the producer
    producer_process = Process(target=producer, args=(queue,))
    producer_process.start()
    # wait for all processes to finish
    producer_process.join()
    consumer_process.join()

Try running the example.

You can learn more about how to use queues in the tutorial:

Lesson 07: Kill A Process

Sometimes we need to kill a task executing in a new child process.

Killing a process is drastic, and it might be preferable to send a message to the process and request that it stop as soon as possible.

Nevertheless, sometimes we must stop the process immediately, such as on user request or due to a catastrophic failure.

This can be achieved by first getting the Process instance for the task, then calling the kill() method.

The method takes no arguments and does not block.

The method will terminate the process using the SIGKILL (signal kill) signal on most platforms, or the equivalent on Windows.

Tying this together, the example below shows how to start a process, then later kill it.

# SuperFastPython.com
# example of killing a process
from time import sleep
from multiprocessing import Process

# custom task function
def task():
    # execute a task in a loop
    while True:
        # block for a moment
        sleep(1)
        # report a message
        print('Worker process running...', flush=True)

# entry point
if __name__ == '__main__':
    # create a process
    process = Process(target=task)
    # run the process
    process.start()
    # wait for a moment
    sleep(5)
    # kill the process
    process.kill()
    # continue on...
    print('Parent is continuing on...')

Try running the example.
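As an aside (a sketch, not part of the lesson's example), if a slightly gentler shutdown is acceptable, the terminate() method can be used instead; it sends the SIGTERM signal on Unix-like platforms, and it can be good practice to join the process afterwards:

# sketch: terminating a process instead of killing it
from time import sleep
from multiprocessing import Process

# long-running task, as in the example above
def task():
    while True:
        sleep(1)
        print('Worker process running...', flush=True)

# entry point
if __name__ == '__main__':
    process = Process(target=task)
    process.start()
    sleep(5)
    # request termination via SIGTERM rather than SIGKILL
    process.terminate()
    # wait for the process to actually exit
    process.join()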

You can learn more about how to kill child processes in the tutorial:

Thank You

Thank you kindly, from Jason Brownlee at SuperFastPython.com

Thank you for letting me help you learn more about multiprocessing in Python.

If you ever have any questions about this course or Python concurrency in general, please reach out.

  • Contact me; messages go directly to my email inbox.

Remember, you can download a zip of all the code used in this tutorial.
Get it from here:

You can also receive this crash course via email, one lesson per day.
Sign-up here:

Finally, if you want to go deeper into Python multiprocessing, I recommend my massive guide:

Did you enjoy this course? Let me know.
