Amdahl’s Law

Mrinal Gupta · Published in Nerd For Tech · Sep 22, 2023

No text on concurrency is complete without mentioning Amdahl’s Law. Blindly adding threads to speed up program execution may not always be a good idea. The law puts a cap on the maximum speedup that can be achieved by parallelizing the execution of a program.

[Figure: Amdahl’s Law graph — speed-up vs. number of processors]

Context

If you have a poultry farm where a hundred hens lay eggs each day, then no matter how many people you hire to process the laid eggs, you still need to wait an entire day for the 100 eggs to be laid. Increasing the number of workers on the farm can’t shorten the time it takes for a hen to lay an egg. Similarly, software programs consist of parts that can’t be sped up even if the number of processors is increased. These parts of the program must execute serially and aren’t amenable to parallelism.

The Law Itself

Amdahl’s law describes the theoretical speedup a program can achieve at best by using additional computing resources.

Speedup(N) = 1 / ((1 − P) + P / N)

where P is the fraction of the program that can be parallelized and N is the number of processors. The serial fraction (1 − P) always takes the same time no matter how large N gets, so as N grows the speed-up approaches 1 / (1 − P).
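The formula is simple enough to sketch as a small helper (a minimal sketch; the function name is mine, not from any library):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speed-up under Amdahl's law.

    p: fraction of the program that can be parallelized (0 <= p <= 1)
    n: number of processors
    """
    # The serial part (1 - p) is unaffected by n; only the
    # parallel part p shrinks as it is divided across n processors.
    return 1.0 / ((1.0 - p) + p / n)
```

For example, `amdahl_speedup(0.9, 10)` gives roughly 5.26, well short of the 10× ceiling that 10 processors might naively suggest.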

Example

Let’s say our program has a parallelizable portion of P = 90% = 0.9. Now let’s see how the speed-up occurs as we increase the number of processes.

[Figure: Example speed-ups under Amdahl’s Law for P = 0.9]

The speed-up steadily increases as we increase the number of processors or threads. However, as you can see, the theoretical maximum speed-up for our program with 10% serial execution is 10. We can’t speed up our program’s execution more than 10 times compared to running the same program on a single processor. To achieve speed-ups greater than 10 we must optimize or parallelize the serially executed portion of the code.
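The numbers behind this example can be reproduced in a few lines (the processor counts below are illustrative choices, not from the original figure):

```python
P = 0.9  # parallelizable fraction of the program

# Speed-up approaches, but never reaches, 1 / (1 - P) = 10.
for n in (1, 2, 4, 8, 16, 100, 1000):
    speedup = 1.0 / ((1.0 - P) + P / n)
    print(f"{n:>5} processors -> {speedup:.2f}x")
```

Even at 1,000 processors the speed-up is only about 9.91×: the last stretch toward 10× is asymptotic, which is why optimizing the serial 10% pays off far more than adding hardware.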

One should take calculations based on Amdahl’s law with a grain of salt. If the formula spits out a speed-up of 5x, it doesn’t mean one would observe a similar speed-up in practice. Other factors, such as memory architecture, cache misses, and network and disk I/O, also affect a program’s execution time, so the actual speed-up may be less than the calculated one.

Amdahl’s law assumes a problem of fixed size. However, as computing resources improve, algorithms are run on larger and larger datasets. As the dataset grows, the parallelizable portion of the program grows faster than the serial portion, and a more realistic assessment of performance is given by Gustafson’s law, which we will discuss some other day.

Till then Happy Coding!!!

Follow me on LinkedIn: https://www.linkedin.com/in/mrinalgupta1097/

Software developer on weekdays and a reading enthusiast on weekends. Beethoven’s silence and Chopin’s Minute Waltz intrigue me.