Time Complexity Analysis Cheat Sheet

Marisol Hernandez
Jan 8, 2024


In my last blog, I shared the beginning of my journey in strengthening my understanding of computer science fundamentals, starting with Pointers. Since then, I have completed another YouTube playlist, this one on Time Complexity Analysis.

The following is a summary of my learnings, along with a cheat sheet that can easily be exported and referenced on an as-needed basis.

What is Time Complexity?

Time Complexity: a measure of how quickly the time taken by a program grows as the input size grows


Why should we care?

  1. Optimization: guides algorithm optimization, ensuring efficient resource utilization and faster program execution
  2. Scalability: allows us to predict how an algorithm’s performance will scale as the input size increases
  3. Comparative Analysis: provides a standardized way to compare the efficiency of different algorithms

Runtime Dependencies

❌ Single vs. multiprocessor

❌ Read/write speed to memory

❌ 32-bit vs. 64-bit architecture

✅ Input

When determining time complexity, we only care about how our program behaves with various inputs.

Asymptotic Notations

Asymptotic notations: mathematical notations used in the analysis of algorithms to describe their behavior as the input size approaches infinity

Main asymptotic notations:

  1. Big O Notation (O)
  2. Omega Notation (Ω)
  3. Theta Notation (Θ)

Big O Notation (O)

Big O Notation: describes the upper bound on the growth rate of an algorithm in the worst-case scenario
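Formally (a standard textbook definition, included here for reference): f(n) = O(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀. In other words, beyond some input size, f never grows faster than a constant multiple of g.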


Omega Notation (Ω)

Omega Notation: represents the lower bound on the growth rate of an algorithm in the best-case scenario
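Formally: f(n) = Ω(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≥ c · g(n) for all n ≥ n₀, i.e., beyond some input size, f grows at least as fast as a constant multiple of g.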


Theta Notation (Θ)

Theta Notation: provides a tight bound on the growth rate, encompassing both upper and lower bounds
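Formally: f(n) = Θ(g(n)) if there exist constants c₁, c₂ > 0 and n₀ such that c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀, which is equivalent to f(n) being both O(g(n)) and Ω(g(n)).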


Determining Time Complexity — A simple example

Say we have a function that sums up the elements of an array:

def sum_array(arr):
    total = 0
    for num in arr:   # one iteration per element: n iterations total
        total += num
    return total

Loop Iteration: the number of iterations is directly proportional to the size of the array

Time Complexity: the time complexity is O(n), where n is the size of the input array, indicating a linear relationship between the input size and the time taken to execute the algorithm
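To see this linear growth in practice, here is a quick timing sketch (my own addition, not from the playlist) that uses Python's time module and the sum_array function defined above; doubling the input size should roughly double the measured time:

import time

for n in (1_000_000, 2_000_000, 4_000_000):
    arr = list(range(n))                    # build an input of size n
    start = time.perf_counter()
    sum_array(arr)                          # the O(n) function from above
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9,}: {elapsed:.4f} s")  # time should roughly double with n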

Some General Rules

Rule #1: drop lower-order terms and constant multipliers
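For example, if a program takes 5n² + 3n + 20 steps, we drop the lower-order terms (3n and 20) and the constant multiplier (5), leaving a time complexity of O(n²).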

Rule #2: Running time = ∑ Running time of all fragments

def my_func(n):
    a = 5                   # O(1)

    for i in range(n):      # O(n)
        pass                # simple statements

    for i in range(n):      # O(n^2): n iterations, each running the inner loop
        for j in range(n):
            pass            # simple statements

The dominant fragment is the nested loop: its total number of iterations is proportional to n * n = n². Summing all fragments gives O(1) + O(n) + O(n²), which, after dropping lower-order terms (Rule #1), leaves a time complexity of O(n²).

This signifies that the running time of the algorithm grows quadratically with the size of the input, n.

Rule #3: When there exists conditional statements, pick the complexity of the condition which is the worst case

def my_other_func(n, condition):   # 'condition' is a stand-in for <some condition>
    if condition:
        for i in range(n):         # O(n)
            pass                   # simple statements
    else:
        for i in range(n):         # O(n^2)
            for j in range(n):
                pass               # simple statements

The worst case is the else branch, whose total number of iterations is proportional to n * n = n². As a result, the time complexity of the algorithm is also O(n²).

Time Complexity Cheat Sheet

Below is a cheat sheet I crafted that contains a summary of my learnings. Feel free to export and reference it on an as-needed basis; I know I will!

[Image: Time Complexity Analysis Cheat Sheet]

References

  1. https://www.youtube.com/playlist?list=PL2_aWCzGMAwI9HK8YPVBjElbLbI3ufctn
