Understanding Big O Notation

A Beginner’s Guide to Mastering Big O Notation in Computer Science

Auriga Aristo
Indonesian Developer
8 min read · Apr 27, 2024

Big-O Cheat Sheet Poster by Big-O Cheat Sheet

Have you ever wondered how fast your code runs, or why it sometimes feels stuck in an infinite loop? That's where Big O notation comes into play: it is like a secret sauce that helps you understand your algorithms' speed and efficiency. Whether you're a seasoned developer or a budding coder, a good understanding of Big O notation is not just helpful; it's essential for writing code that holds up as your data grows.

What is the Big O Notation?

Big O notation might sound fancy, but it is the most straightforward way to describe how well an algorithm scales as data increases. Think of it as a way to measure and prepare for the worst-case scenario for your code: the largest, hardest input your code might ever face.

Why is it important?

When you are coding, especially on platforms like HackerRank or LeetCode, you want your solutions not just to work but to work efficiently. Big O notation helps you determine which parts of your code are slowpokes and which are sprinters. It’s all about making sure your code doesn’t just solve the problem but solves it in the least amount of time without hogging all the memory.

Key Concepts Unpacked

Worst-case Scenario

This concept is all about preparing for the extreme. Imagine you are writing a program that searches for a name in a long list. In the best case, the name you are looking for is at the top of the list. But the worst-case scenario? It might be at the bottom of the list, or not in the list at all. Planning for this ensures your program can handle even the most challenging inputs without failing. Here's a simple sketch (the names and list here are made up for illustration):
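
def find_name(names, target):
    # Scan from the top; in the worst case we check every name
    for name in names:
        if name == target:
            return True
    return False

guest_list = ["Ana", "Budi", "Citra", "Dewi"]
print(find_name(guest_list, "Dewi"))  # Worst case: found only after checking all 4 names
print(find_name(guest_list, "Eko"))   # Worst case: missing, yet all 4 names are still checked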

Asymptotic Analysis

This sounds more complex than it is. Asymptotic analysis is about understanding how your algorithm behaves as the input size heads towards infinity. It's like watching your code from a helicopter view to see how it performs in the long run, which helps you predict the time or space it will need at large data sizes. Consider sorting: sorting two items is quick, but what about two million? Asymptotic analysis gives you a high-level understanding of how the cost scales.
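
To make this concrete, here is a small sketch with a made-up cost function, f(n) = 3n² + 5n + 7 (the coefficients are invented for illustration). As n grows, the n² term dominates the others, which is exactly why asymptotic analysis would summarize this function as O(n²):

def f(n):
    # Hypothetical operation count: 3n^2 + 5n + 7
    return 3 * n**2 + 5 * n + 7

for n in [10, 1_000, 100_000]:
    # The ratio settles toward the constant 3, showing the n^2 term dominates
    print(n, f(n) / n**2)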

Both concepts, the worst-case scenario and asymptotic analysis, are the foundation for using Big O notation effectively. They help you prepare and optimize your code beyond small test cases. When your code hits real-world workloads, the data size can be immense, and efficiency is vital.

Why Should You Care?

Now that we've covered Big O notation and unpacked some crucial concepts, let's explore why it's such a game-changer in coding.

Measuring Algorithm Performance

Big O notation is a benchmark for gauging the efficiency of your algorithms. By understanding the worst-case scenario and the general behavior (asymptotic analysis), you can predict how an algorithm will perform as the input scales up. This is critical when participating in coding competitions or working on projects where efficiency can make or break your success.

For instance, consider a simple function that checks if a number is prime:

def is_prime(n):
    if n <= 1:
        return False
    # Only divisors up to the square root of n need to be checked
    for i in range(2, int(n**0.5) + 1):
        if n % i == 0:
            return False
    return True

# Checking prime for a large number
print(is_prime(7919))  # Outputs True

In the code above, the time complexity is O(√n), where n is the number being checked. As the numbers get larger, the time it takes to run the check increases, but far less dramatically than it would if the complexity were, say, O(n²).

Comparing Algorithms

Big O gives you a common language to compare approaches to the same problem. Let’s say you have two sorting algorithms: one runs in O(n²) and the other in O(n log n). With Big O, you can see that the second algorithm will perform better on larger datasets, guiding you to make more informed decisions about which algorithm to use under different circumstances.

Here is a hypothetical table comparing the approximate number of operations each algorithm performs as the data size grows (taking log base 2, and treating operation counts as a rough proxy for running time):
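
Input size (n)    O(n log n) ops    O(n²) ops
10                ~33               100
100               ~664              10,000
1,000             ~9,966            1,000,000
10,000            ~132,877          100,000,000

Even at 10,000 items, the quadratic algorithm is doing several hundred times more work, and the gap only widens from there.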

Practical Implications

Understanding and applying Big O notation can elevate your problem-solving skills and efficiency. For example, if you’re tasked with optimizing a search feature in an application, knowing whether to use a linear search (O(n)) or a binary search (O(log n)) can drastically reduce search times, leading to faster response times and a better user experience.

Common Big O Notations

As we dive deeper, let's break down some of the most common Big O complexities you'll encounter and how they affect your code's performance. This understanding is crucial for making informed decisions about which algorithms to use.

Big-O Complexity Chart by Big-O Cheat Sheet

O(1) — Constant Time

When an algorithm has a complexity of O(1), its performance is constant. No matter how much data you input, it takes the same time to complete. An easy example is accessing any element in an array by an index.

def get_item(arr, index):
    return arr[index]

# Constant time example: Accessing an element
my_array = [1, 2, 3, 4, 5]
print(get_item(my_array, 2))  # Outputs 3

O(n) — Linear Time

An O(n) complexity means the running time increases linearly with the amount of data, for instance when you have to look through each element individually.

def find_item(item_list, item):
    # Check each element in turn (parameter renamed to avoid shadowing the built-in `list`)
    for i in item_list:
        if i == item:
            return f"{item} found!"
    return f"{item} not found."

# Linear time example: Searching for an element
items = [5, 3, 7, 1, 4]
print(find_item(items, 7))  # Outputs '7 found!'

O(log n) — Logarithmic Time

Logarithmic time complexity appears in algorithms that cut the problem size in half at each step, like binary search. It is more efficient than linear time, especially as data grows.

def binary_search(arr, left, right, x):
    if right >= left:
        mid = left + (right - left) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] > x:
            return binary_search(arr, left, mid - 1, x)
        else:
            return binary_search(arr, mid + 1, right, x)
    else:
        return -1

# Logarithmic time example: Binary search
sorted_items = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(binary_search(sorted_items, 0, len(sorted_items) - 1, 6))  # Outputs 5

O(n²) — Quadratic Time

Quadratic time complexity typically appears in algorithms with nested iterations over the data. A classic example is the bubble sort algorithm, where each element is compared with every other element.

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

# Quadratic time example: Bubble sort
unsorted_items = [64, 34, 25, 12, 22, 11, 90]
print(bubble_sort(unsorted_items))  # Outputs [11, 12, 22, 25, 34, 64, 90]

How to Calculate Big O

Understanding how to determine the Big O notation of a given algorithm is essential for assessing its efficiency. Here’s a simple guide to help you calculate Big O with an example to illustrate the process:

  1. Identify the Worst Case: Consider the maximum number of operations your code might perform.
  2. Count the Loops: Note how many times the loops in your algorithm run relative to the input data size.
  3. Factor in Nested Loops: Remember that nested loops multiply the complexity.
  4. Ignore Constants and Non-Dominant Terms: When determining Big O, you drop constants and non-dominant terms because they don’t significantly affect scaling at large data sizes.

Let’s analyze a simple function that sums the elements of an array:

def sum_array(arr):
    total = 0
    for i in arr:
        total += i
    return total

By breaking down the function and examining how many times the loop runs relative to the input size, we can see that it processes each element once. Hence, the complexity is linear, O(n).
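
Rule 4 is worth a second look. Here is a hypothetical function that makes two separate passes over the array; counting naively gives about 2n operations, but Big O drops the constant factor, so it is still O(n):

def min_and_max(arr):
    smallest = arr[0]
    for x in arr:  # first pass: n iterations
        if x < smallest:
            smallest = x
    largest = arr[0]
    for x in arr:  # second pass: another n iterations
        if x > largest:
            largest = x
    return smallest, largest

# About 2n operations in total, but O(2n) simplifies to O(n)
print(min_and_max([5, 3, 7, 1, 4]))  # Outputs (1, 7)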

Real-World Example: Optimizing Search Algorithms

When optimizing database search functionality, choosing a suitable algorithm based on its Big O notation can significantly affect performance. In this example, we focus on an employee database search and compare three different search algorithms: linear search, binary search, and a hash table search. Each of these has different time complexities and use cases.

Linear Search: O(n)

Linear search checks each record sequentially until the target is found. It’s simple but inefficient for large datasets.

def linear_search(employee_list, target_id):
    for employee in employee_list:
        if employee['id'] == target_id:
            return employee
    return None

Advantages:

  • Simple to implement
  • Effective for small, unsorted datasets

Disadvantages:

  • Inefficient as the dataset grows, since performance degrades linearly.
  • Time-consuming in large datasets, potentially leading to performance bottlenecks.

Binary Search: O(log n)

Binary search is efficient for sorted datasets. It takes a divide-and-conquer approach, cutting the search range in half with each iteration.

def binary_search(sorted_employee_list, target_id):
    low = 0
    high = len(sorted_employee_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_employee_list[mid]['id'] == target_id:
            return sorted_employee_list[mid]
        elif sorted_employee_list[mid]['id'] < target_id:
            low = mid + 1
        else:
            high = mid - 1
    return None

Advantages:

  • Highly efficient for searching in large, sorted datasets.
  • Reduces the number of comparisons significantly, thus speeding up searches.

Disadvantages:

  • Necessitates maintaining a sorted dataset, which can be computationally expensive.
  • Any addition or deletion might require re-sorting, adding overhead in dynamic datasets.

Hash Table Search: O(1)

Hash tables offer an average time complexity of O(1) by using a hash function to map identifiers (like employee IDs) to specific indices in a table. This allows for swift data retrieval.

class HashTable:
    def __init__(self, size):
        self.size = size
        self.table = [None] * size

    def get_hash(self, key):
        # Map a numeric key to a slot index
        return key % self.size

    def insert(self, key, value):
        index = self.get_hash(key)  # renamed from `hash` to avoid shadowing the built-in
        self.table[index] = value   # note: this simple version overwrites on collision

    def search(self, key):
        index = self.get_hash(key)
        return self.table[index]

ht = HashTable(10)
ht.insert(3, 'Alice')
ht.insert(1, 'Bob')
ht.insert(5, 'Charlie')

print(f"Found: {ht.search(5)}")  # Outputs 'Found: Charlie'

Advantages:

  • Provides fast data retrieval, generally O(1), making it ideal for frequent access.
  • Efficiently handles large datasets with complex data structures.

Disadvantages:

  • Space-inefficient, as it often requires extra memory to minimize collision effects.
  • Handling collisions can complicate the implementation and marginally slow down access.
  • Due to collisions, performance can degrade to O(n) in the worst case (see the chaining sketch below).
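
A common way to keep that worst case in check is chaining: each slot stores a small list of key-value pairs, and a lookup scans only that list. Here is a minimal sketch (the class and method names are our own, extending the simple example above):

class ChainedHashTable:
    def __init__(self, size):
        self.size = size
        # Each slot holds a list of (key, value) pairs that hash to it
        self.table = [[] for _ in range(size)]

    def insert(self, key, value):
        bucket = self.table[key % self.size]
        for i, (k, _) in enumerate(bucket):
            if k == key:  # Update an existing key in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def search(self, key):
        # Average O(1); degrades toward O(n) only if many keys share a bucket
        for k, v in self.table[key % self.size]:
            if k == key:
                return v
        return None

ht = ChainedHashTable(10)
ht.insert(3, 'Alice')
ht.insert(13, 'Bob')   # Collides with key 3 (13 % 10 == 3), but both entries survive
print(ht.search(13))   # Outputs 'Bob'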

When deciding on a search method, consider the average time complexity alongside other factors like dataset size, update frequency, and specific performance needs. While hash table searches are excellent for large-scale operations with frequent lookups, their setup and maintenance costs may not be worthwhile for smaller datasets or environments where data is less dynamic.

Mastering Big O in the Real World

Understanding the Big O notation is more than just a theoretical exercise — it’s a crucial tool for optimizing the performance and efficiency of your algorithms. By mastering the concepts, you can make informed decisions about which algorithm to use, ensuring your application runs smoothly even as it scales. Big O notation helps you address current problems and anticipate potential performance issues, ensuring your software is robust and responsive under various conditions.

Now’s the perfect time to deepen your understanding of Big O notation. Dive into platforms like HackerRank and LeetCode to practice and refine your skills. The more you engage with these challenges, the better you’ll grasp how different algorithms impact performance and how you can optimize your code using Big O notation.

Keep learning and practicing. Every problem you solve helps you understand more about what makes code efficient and effective. Mastering Big O and optimizing your code begins with consistent practice and curiosity. Get started today and watch your coding skills soar!


Auriga Aristo
4+ years as a Backend Developer | PHP, Java/Kotlin, MySQL, Golang | New story every week