Algorithms — Discover the Power of Big O Notation

HlfDev
7 min read · Sep 5, 2023

Big O Notation — Complexity Chart

Big O notation plays a crucial role in analyzing the performance characteristics of algorithms. It enables us to estimate an algorithm's worst-case time complexity as a function of the size of its input.

Article Topics

  • What is Big O notation?
  • Common types of complexity.
  • Understanding Time and Space Complexity.
  • General Rules of Big O Notation.
  • Best, average, and worst-case scenarios.
  • Examples of complexity in algorithms.
  • Conclusion.
  • References.

What is Big O Notation?

Big O notation is a powerful tool in algorithm analysis because it helps us understand the upper limit or worst-case performance of an algorithm. In other words, it helps us answer the question: “How fast or how efficient is an algorithm as the input data grows?” Through Big O notation, developers can achieve two essential things:

Comparative Evaluation: It allows us to compare different algorithms based on their efficiency. By using Big O, we can determine which algorithm is more suitable for a specific task, considering the size of the dataset we will work with.

Performance Prediction: It also enables us to make predictions about the algorithm’s performance as data grows. This is crucial for optimization and system scalability, ensuring they perform well even when facing larger workloads.

Common Types of Complexity

O(1) — Constant Complexity: Regardless of the input size, the algorithm always takes approximately the same amount of time to execute. For example, accessing an element of an array by its index is a constant-time operation: you go directly to the desired position.

O(log n) — Logarithmic Complexity: The execution time increases only slowly as the input size grows. It's like finding a number in a sorted list by repeatedly halving the search range, as in binary search.
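As a quick illustration, here is a minimal binary search sketch (the names are illustrative):

```python
def binary_search(sorted_items, target):
    # Each iteration halves the remaining search range, so at most
    # about log2(n) iterations are needed: O(log n) time.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid        # found: return its index
        elif sorted_items[mid] < target:
            low = mid + 1     # discard the lower half
        else:
            high = mid - 1    # discard the upper half
    return -1                 # not found
```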

O(n) — Linear Complexity: The execution time grows in proportion to the input size. For example, traversing a shopping list once to find an item is an operation of linear complexity.

O(n log n) — Linearithmic Complexity: Found in efficient sorting algorithms such as merge sort (and quicksort in the average case). As the input size increases, the execution time grows, but not as quickly as with quadratic complexities.
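For example, a compact merge sort might look like this (a minimal sketch, not production code):

```python
def merge_sort(items):
    # Splitting produces about log n levels, and merging does O(n)
    # work per level, for O(n log n) total.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves back together.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```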

O(n²) — Quadratic Complexity: Common in algorithms with nested loops. As the input size increases, the execution time grows rapidly. Imagine a two-dimensional array where you need to check every pair of elements.

O(n³) — Cubic Complexity: A step beyond quadratic complexity. As the input size increases, the execution time grows even faster. This occurs in algorithms with three nested loops, such as those involving three-dimensional arrays.
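A minimal sketch: three nested loops over the same list yield n × n × n iterations:

```python
def print_all_triples(items):
    # n * n * n iterations in total: O(n³) time.
    for a in items:
        for b in items:
            for c in items:
                print(a, b, c)
```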

O(2^n) — Exponential Complexity: Algorithms in this class become extremely slow as the input size increases. They are inefficient for large inputs, as the execution time grows exponentially. It can be compared to a time explosion.
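The classic illustration is the naive recursive Fibonacci, sketched below, whose call tree roughly doubles at every level:

```python
def fib(n):
    # Each call spawns two more calls, so the total number of calls
    # grows roughly as 2^n: O(2^n) time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```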

O(n!) — Factorial Complexity: This is one of the slowest complexities. Factorial algorithms have an execution time that grows even faster than exponential ones. They appear mainly in permutation and combinatorial problems. The larger the input, the slower they become.
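For instance, generating every ordering of a list touches all n! permutations (a sketch using the Python standard library):

```python
from itertools import permutations

def print_all_orderings(items):
    # A list of n items has n! permutations, so this loop runs
    # n! times: O(n!) iterations (each print itself costs O(n)).
    for ordering in permutations(items):
        print(ordering)
```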

Understanding Time and Space Complexity

In the world of computer science and algorithm design, efficiency is paramount. Understanding how the performance of algorithms scales with the size of the data they handle is crucial for making informed decisions when it comes to choosing the right algorithm for a task. Two fundamental concepts in this realm are Time Complexity and Space Complexity, often expressed using “Big O” notation.

Time Complexity (Big O Time):

  • Imagine you’re measuring the time it takes for an algorithm to solve a problem as the input (data) gets larger.
  • “Big O Time” describes the worst-case scenario for how long the algorithm will take to solve the problem as the input grows.
  • We use expressions like O(N) or O(N²) to represent how the running time grows with the input size.
  • For example, O(N) means that as the input grows, the algorithm’s execution time increases linearly. O(N²) means that the execution time increases quadratically as the input grows.

Space Complexity (Big O Space):

  • Now, imagine you’re measuring how much memory (storage space) an algorithm needs as it works with the input data.
  • “Big O Space” describes the maximum amount of memory the algorithm will use as the input grows.
  • Just like in time complexity, we use expressions like O(N) or O(N²) to represent the amount of memory required relative to the input size.
  • For instance, O(N) means that the algorithm uses an amount of memory proportional to the input size, while O(N²) means that memory usage grows quadratically as the input grows. The sketch below contrasts constant and linear space usage.
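For illustration, here is a minimal sketch contrasting the two (the function names are illustrative):

```python
def total(numbers):
    # O(1) space: a single extra variable, no matter how long the list is.
    result = 0
    for n in numbers:
        result += n
    return result

def doubled(numbers):
    # O(N) space: builds a new list as large as the input.
    return [n * 2 for n in numbers]
```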

General Rules of Big O Notation

There are some fundamental rules for expressing algorithm performance using Big O, including:

Constant Factors Don’t Matter: We don’t worry about exact details, such as whether one algorithm takes 5 times longer than another. We only look at what happens when the amount of data becomes very large.

Only Growth Matters: When comparing algorithms, we only care about how they scale as we store more and more information. The rest is not as important.

Worst Case Matters: Usually, we are most concerned with the worst-case scenario, when things take the longest. It is the situation we must be prepared to face.

Consider Only the Highest Complexity: In practice, the main rule for determining an algorithm’s complexity is that the highest-order term dominates the lower ones, so only the highest one is kept.

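Consider, for illustration, a function that mixes all three complexities (a minimal sketch; the names are illustrative):

```python
def example(items):
    # O(1): a single constant-time operation.
    first = items[0]
    print(first)

    # O(n): one pass over the list.
    for item in items:
        print(item)

    # O(n²): two nested loops over the list.
    for a in items:
        for b in items:
            print(a, b)
```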

Given the code above, the highest complexity is O(n²) — Quadratic Complexity, because the code contains two nested loops, each O(n). Notice that the lower complexities, O(1) and O(n), were ignored when determining the overall complexity of the code.

Best, Average, and Worst Case

Big O allows us to analyze performance in different scenarios: best, average, and worst case.

Let’s assume we have a list containing 10 numbers sorted from smallest to largest, and we need to search for the number 17.

Big O notation — Ordered list of numbers — Best case

Best Case: We find it at the first position of the list, index 0, and don’t need to traverse the list further. This is our best case.

Now, let’s find the number 49.

Big O notation — Ordered list of numbers — Average case

Average Case: As shown above, it took 5 steps, from index 0 to index 4, to find the number, which is practically halfway through the list. So we’ll consider this our average case.

Now, let’s find the number 95.

Big O notation — Ordered list of numbers — Worst case

Worst Case: We notice that we need to traverse the entire list to find the number, which is at index 9, so we consider this the worst case. Big O focuses on the worst case for a reason: imagine the list had a billion records and you had to traverse all of them every time you needed the last one; the application would not perform well.
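In code, the search above is a simple linear scan (the list values are illustrative, chosen to match the indices in the figures):

```python
numbers = [17, 23, 31, 38, 49, 56, 64, 72, 83, 95]  # illustrative values

def linear_search(items, target):
    # Best case: the target sits at index 0 (1 step).
    # Average case: around the middle (about n/2 steps).
    # Worst case: at the end, or absent (n steps) -> O(n).
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1
```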

Examples of Complexity in Algorithms

Here are practical examples of classifying algorithms using Big O:

O(1) — Constant Complexity:

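A minimal sketch of such a function, assuming it simply returns the first item:

```python
def get_first_item(items):
    # One step, regardless of how many items the list holds: O(1).
    return items[0]
```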

This function runs in O(1) time (or “constant time”) in relation to its input (list). The input list can have 1 item or 1,000 items, but this function still requires only one “step.”

O(n) — Linear Complexity:

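A minimal sketch, assuming the function prints every element:

```python
def print_all_items(items):
    # One print per element, so the work grows linearly: O(n).
    for item in items:
        print(item)
```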

This function runs in O(n) time (or “linear time”), where n is the number of items in the list. If the list has 10 items, for example, we have to print 10 times. If it has 1,000 items, we have to print 1,000 times.

O(n²) — Quadratic Complexity:

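A minimal sketch, assuming the function prints every pair (including an item paired with itself):

```python
def print_all_pairs(items):
    # n iterations outside, n inside: n * n pairs printed -> O(n²).
    for first in items:
        for second in items:
            print(first, second)
```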

This function runs in O(n²) time (or “quadratic time”). If our list has n items, our outer loop will run n times, and our inner loop will run n times for each iteration of the outer loop. With 10 items in the list, we print 100 pairs; with 1,000 items, we print 1,000,000 pairs.

Conclusion

Every programmer should have a solid understanding of Big O notation. It plays a fundamental role in reasoning about the scalability of an algorithm, providing an upper bound on how the number of operations grows with the size of the input.
