Big O Categories šŸ“ˆ

Milan Parmar Ā· Published in Star Gazers Ā· May 23, 2021 Ā· 5 min read

In this blog, I will be breaking down how we identify which category an algorithm falls under in Big O Notation. If you arenā€™t familiar with Big O, you can read one of my previous blogs here, which explains each category of Big O.

We know that Big O Notation describes the time complexity of an algorithm, or in other words, how many steps an algorithm takes when there are N elements. However, there are a few things we must know about Big O in order to determine which category an algorithm is placed in.

Ignoring Constants āœ‹šŸ¼

In the world of Big O Notation, two algorithms can be described in exactly the same way yet differ in actual speed. For example, if one algorithm is faster than an O(n) algorithm, we might assume it must belong to a faster category such as O(log n), but that isnā€™t always the case.

Let us look at two sorting algorithms that help illustrate this phenomenon: Bubble Sort and Selection Sort.

Bubble sort passes through an array of numbers, comparing each pair of consecutive numbers and swapping them if they are out of order, and keeps making these pass-throughs until the array is sorted in ascending order. Bubble sort is an aptronym: on each pass-through, the largest unsorted number ā€˜bubblesā€™ up to the right-hand side. Essentially, two numbers are compared with one another to determine which is greater, and two numbers are swapped in order to sort them. Now, if we are to describe its efficiency, we would say that it is O(nĀ²). Why? Because as the number of elements increases, the number of steps grows roughly in proportion to nĀ². If we had 5 elements, the maximum number of steps it would take to sort with bubble sort would be 20 (up to 10 comparisons and up to 10 swaps).
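To make this concrete, here is a minimal bubble sort sketch in Python that counts comparisons and swaps as it goes; the function name and counters are my own illustration rather than anything from the original post or the book.

```python
def bubble_sort_with_counts(values):
    """Sort into ascending order with bubble sort, counting comparisons and swaps."""
    data = list(values)              # work on a copy so the input is left untouched
    comparisons = swaps = 0
    for end in range(len(data) - 1, 0, -1):
        for i in range(end):
            comparisons += 1         # every neighbouring pair gets compared
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
                swaps += 1           # out-of-order neighbours get swapped
    return data, comparisons, swaps


print(bubble_sort_with_counts([5, 4, 3, 2, 1]))
# -> ([1, 2, 3, 4, 5], 10, 10): 20 steps for 5 elements in the worst case
```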

Selection sort passes through an array of numbers, keeping track of which number is the smallest; once the pass-through is complete, it swaps that smallest number into the leftmost unsorted position, and keeps working in the same way until the array is sorted. Essentially, the smallest value found so far is compared with each remaining value on every pass-through, and then the smallest value is swapped into its correct position. Now, if we are to describe its efficiency, we would say that it is also O(nĀ²). If we had 5 elements to deal with, the maximum number of steps it would take to sort with selection sort would be 14 (10 comparisons and up to 4 swaps). Okayā€¦waitā€¦hold onā€¦selection sort takes fewer steps to complete than bubble sort but is also considered O(nĀ²)?!
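Here is the matching selection sort sketch, again a Python illustration of my own with step counters added:

```python
def selection_sort_with_counts(values):
    """Sort into ascending order with selection sort, counting comparisons and swaps."""
    data = list(values)              # work on a copy so the input is left untouched
    comparisons = swaps = 0
    for start in range(len(data) - 1):
        smallest = start
        for i in range(start + 1, len(data)):
            comparisons += 1         # compare the smallest-so-far with each remaining value
            if data[i] < data[smallest]:
                smallest = i
        if smallest != start:
            data[start], data[smallest] = data[smallest], data[start]
            swaps += 1               # at most one swap per pass-through
    return data, comparisons, swaps


print(selection_sort_with_counts([5, 4, 3, 2, 1]))
# -> ([1, 2, 3, 4, 5], 10, 2): 12 steps on this input, and never more than 14 for 5 elements
```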

What we see here is that selection sort takes roughly half as many steps as nĀ², so we might assume that an appropriate way to name this algorithm is O(nĀ²/2). In other words, for N elements there are close to nĀ²/2 steps. We can see this in the following table:

(Table showing that, for N elements, selection sort takes close to NĀ²/2 steps. Source: A Common-Sense Guide to Data Structures and Algorithms)
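To make the same point empirically, here is a small check built on the selection sort sketch from earlier (my own illustration, not the bookā€™s table): the measured step count stays close to nĀ²/2 as n grows.

```python
for n in (5, 10, 20, 40, 80):
    _, comps, swaps = selection_sort_with_counts(list(range(n, 0, -1)))
    print(f"n={n:>3}  measured steps={comps + swaps:>5}  n^2/2={n * n // 2:>5}")
# The measured step counts track n^2/2 closely, which is why it is tempting
# to describe selection sort as O(n^2/2).
```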

So, why do we not include the ā€˜/2ā€™ when referring to the time complexity of selection sort? It is because Big O Notation ignores constants, i.e. Big O never includes regular numbers that arenā€™t exponents. In the case of selection sort, we drop the ā€˜/2ā€™ and are left with O(nĀ²).

Knowing this, if we were to have an algorithm that takes 2N steps (N * 2 steps), we would describe the efficiency of this algorithm as O(n) because we remove any regular numbers.
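As a quick illustration of my own (not an example from the original post), a function that loops over an array twice takes roughly 2N steps, yet once the constant is dropped it is still described as O(n):

```python
def print_everything_twice(values):
    """Roughly 2N steps: two separate pass-throughs over the same N elements."""
    for value in values:     # first pass-through: N steps
        print(value)
    for value in values:     # second pass-through: another N steps
        print(value)
# 2N steps in total, but the regular number 2 is ignored, so this is simply O(n).
```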

General Categories šŸ”¢

Big O Notation only concerns itself with general categories of algorithm speeds. The best way we can understand this is through an analogy, so letā€™s talk about cars. There are, of course, many types of cars that fall under different categories, e.g. micro, hatchback, coupe, sports, super and so on.


If we were to compare two different cars, one a micro and the other a supercar, it becomes almost pointless to pore over detailed specs to say which one is faster. As these cars sit on completely opposite ends of the spectrum, we do not really need to go through the long list of specs for each. We may as well just describe one as the slowest and the other as the fastest, or micro and super.

The same applies to algorithm efficiencies. If we compare an O(n) algorithm and an O(nĀ²) algorithm, they are so different that it doesnā€™t really matter whether the O(n) algorithm is actually O(2n) or O(n/2).

The reason why O(n) and O(nĀ²) algorithms sit in two separate categories, while O(n), O(2n) and O(n/2) fall under the same one, is that Big O cares about the long-term trajectory of an algorithmā€™s steps as the data increases. On a graph, an O(n) algorithm is represented by a straight line, and an O(2n) or O(n/2) algorithm is also a straight line, just with a different slope. However, O(nĀ²) tells a totally different story: its curve bends sharply upwards and rapidly outgrows any straight line as n increases.
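The difference in trajectory is easy to see with some plain arithmetic (purely illustrative): n/2 and 2n grow along the same kind of straight line as n, while nĀ² pulls away dramatically.

```python
# Compare how n/2, 2n and n^2 grow as n increases.
for n in (10, 100, 1000, 10000):
    print(f"n={n}, n/2={n // 2}, 2n={2 * n}, n^2={n * n}")
# n/2 and 2n stay on the same straight-line trajectory as n itself,
# while n^2 quickly leaves them behind (for n=10000, n^2 is 100,000,000).
```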

All the types of Big O Notation, such as O(1), O(n), O(nĀ²) and O(log n), are general categories with huge, noticeable differences between them. Multiplying or dividing the number of steps by a regular number does not move an algorithm into another category.

However, when two algorithms fall under the same Big O classification, it doesnā€™t necessarily mean that they run at the same speed or take the same number of steps to complete. When two algorithms fall under the same category, further analysis is required to determine which one is faster.
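Using the two sketches from earlier (again, my own illustration), that further analysis can be as simple as counting steps on the same input: both sorts are O(nĀ²), yet one clearly does less work.

```python
data = list(range(20, 0, -1))        # a reverse-sorted list of 20 numbers (worst case)

_, b_comps, b_swaps = bubble_sort_with_counts(data)
_, s_comps, s_swaps = selection_sort_with_counts(data)

print("bubble sort:   ", b_comps + b_swaps, "steps")   # 190 comparisons + 190 swaps = 380
print("selection sort:", s_comps + s_swaps, "steps")   # 190 comparisons + 10 swaps = 200
```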

I hope this gives more clarity on Big O Notation and the classification of efficiencies. Take a read of Jay Wengrowā€™s book, ā€˜A Common-Sense Guide to Data Structures and Algorithmsā€™, for a more in-depth explanation!
