Big-O Notation

Samarth Kamath · Published in The Startup · 9 min read · Sep 16, 2020

Is it just enough to know how to program? Is logic the only requirement?

Knowledge of basic data structures, their characteristics, and the algorithms that operate on them is an important part of a software engineer’s toolkit. Having the right tools for the job can make the job a lot easier, and being trained in their proper use is essential for being effective. Data structures and algorithms allow you to write performant code, which minimizes the use of scarce resources and helps ensure that your code is not just correct but also fast.

If you have participated in Google Kickstart or solved some Hacker Rank contests, you have probably run into a situation where your code was slower than expected, and you got a popup or warning saying your code could not finish in time. In the same way, we compare different algorithms based on their elapsed time, the space they use, and the network bandwidth they consume. These tell us whether or not code is performant, and they are called the Dimensions of Performance.

Now that we know complexity is a measure of performance, how do we measure complexity?

The complexity of your code, whether it’s time complexity, space complexity, or the complexity for any other resource usage, can be expressed in terms of the Big-O notation.

Now let us look at the different types of complexities.

1. Constant Complexity

Real-life Example: Suppose I am selecting some people for a game and I will select only the first four applicants. Will the number of applicants affect the complexity of the selection?

Even if there are 1,000 applicants, I will just ignore the rest and select only the first four people. The amount of work is a fixed constant (four selections), so the complexity is O(4), which is the same as O(1) since constant factors are dropped.

Programming Example: Consider a function that takes an array as input and displays the first character of the array. The program has a complexity of O(1), because even if the input is a 1,000,000-character word, we only care about the first letter.

Example code:

//C++ Code

#include <iostream>
using namespace std;

int main()
{
    char str[1000000];
    cout << "Enter Your Name::";
    cin.getline(str, 1000000);

    cout << "\nFirst Letter is:: " << str[0];
    return 0;
}

So, if the complexity does not change with the number of inputs, it is called Constant Complexity.

2. Logarithmic Complexity

Real-life Example: In your school days, there was a custom of allotting benches to each person. Everyone is made to stand in increasing order of height, and then they are seated one after the other.

Now, suppose a person who was out for some reason comes in. He is a member of your class and needs to be added to the line. What your teacher would probably do is go to the exact middle of the line and compare heights. If he fits there, that is all; otherwise, the teacher considers only the taller half if he is taller than the compared person, or the shorter half if he is shorter. The comparison is repeated until his height finally fits somewhere. Unlike Linear Complexity, you are not comparing with each person in the group; instead, every comparison halves the remaining set. This is the next complexity, called Logarithmic Complexity or O(log n).

Programming Example: The above problem is called searching in programming, and this method is called Binary Search. The main condition is that the elements must be in sorted order. Here, instead of heights, we compare numbers, and if the number is missing we display a message saying "not found".

Example code:

// C++ program to implement recursive Binary Search
#include <bits/stdc++.h>
using namespace std;

// A recursive binary search function. It returns the
// location of x if it is present in arr[l..r],
// otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l) {
        int mid = l + (r - l) / 2;

        // If the element is present at the middle
        // itself
        if (arr[mid] == x)
            return mid;

        // If element is smaller than mid, then
        // it can only be present in left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);

        // Else the element can only be present
        // in right subarray
        return binarySearch(arr, mid + 1, r, x);
    }

    // We reach here when element is not
    // present in array
    return -1;
}

int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1) ? cout << "Element is not present in array"
                   : cout << "Element is present at index " << result;
    return 0;
}

So, if you find a program that works on half of the remaining set after each iteration, it will most commonly have a complexity of O(log n).

3. Linear Complexity

Real-life Example: In your school or college days, the first class with every teacher would probably be an introduction. First, your teacher would introduce himself or herself, and then he/she would ask the whole class to introduce themselves.

Here you must notice that if the number of students is 10 it might take probably 20 minutes. If it is 1,000,000 then it probably takes around 2,000,000 minutes. Here the most important point to note is that time increases with an increase in the number of students. So here the complexity depends on the number of students and hence the complexity is O(n), which is called Linear Complexity.

Programming Example:

If you want to traverse an array (which is like reading a book line by line), the time taken grows as the input grows, because the number of iterations increases with the number of elements.

Example code:

#include <iostream>
using namespace std;

int main() {
    int numbers[5] = {7, 5, 6, 12, 35};

    // Range-based for loop: visits each element exactly once
    cout << "The numbers are: ";
    for (const int &n : numbers) {
        cout << n << " ";
    }

    // Index-based for loop: also visits each element exactly once
    cout << "\nThe numbers are: ";
    for (int i = 0; i < 5; ++i) {
        cout << numbers[i] << " ";
    }

    return 0;
}

So, if the complexity increases proportionally to the input size, it is called Linear Complexity.

4. Linearithmic Complexity

Real-life Example: Let us say your class is to be sorted by height. What I will do is divide the whole class into 2 groups, then divide these 2 groups into 2 again, and repeat with the subdivided groups until the whole class is split into individuals. Then I start comparing pairs and arranging each pair by height. Next, I merge pairs into sorted groups of four, then merge those, and so on until the whole group is sorted.

This way of sorting has a complexity of O(n log n) or Linearithmic Complexity.

Programming Example: The above-mentioned procedure is called Merge Sort in programming, which follows the divide-and-conquer method of solving a problem. Here, instead of heights, we compare the values of numbers.

Example code:

// C++ program to implement Merge Sort
#include <bits/stdc++.h>
using namespace std;

// Merges the two already-sorted halves arr[l..m] and arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int n1 = m - l + 1;
    int n2 = r - m;

    // Copy the two halves into temporary arrays
    vector<int> left(arr + l, arr + m + 1);
    vector<int> right(arr + m + 1, arr + r + 1);

    // Repeatedly take the smaller front element of the two halves
    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2)
        arr[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
    while (i < n1)
        arr[k++] = left[i++];
    while (j < n2)
        arr[k++] = right[j++];
}

// Recursively splits arr[l..r] in half, sorts each half,
// then merges the sorted halves
void mergeSort(int arr[], int l, int r)
{
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

int main()
{
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int n = sizeof(arr) / sizeof(arr[0]);

    mergeSort(arr, 0, n - 1);

    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}

I remember this as "one-and-a-half" complexity: a linear factor multiplied by a logarithmic one.

5. Polynomial Complexity

Real-life Example: Suppose you go shopping with a long shopping list, and before billing you want to check that you have picked up every item on the list. You check the first item on the list against everything in your basket, then the second item against everything in the basket, and so on. For each of the n items on the list, you go through up to n items in the basket, so the complexity is O(n²), or Quadratic Complexity.

Programming Example:

Insertion sort is a classic example of a program with O(n²) complexity.

Example Program:

// C++ program for insertion sort
#include <bits/stdc++.h>
using namespace std;

/* Function to sort an array using insertion sort */
void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++)
    {
        key = arr[i];
        j = i - 1;

        /* Move elements of arr[0..i-1], that are
           greater than key, to one position ahead
           of their current position */
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}

// A utility function to print an array of size n
void printArray(int arr[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        cout << arr[i] << " ";
    cout << endl;
}

/* Driver code */
int main()
{
    int arr[] = { 12, 11, 13, 5, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    insertionSort(arr, n);
    printArray(arr, n);

    return 0;
}

Beyond the quadratic case discussed above, you can also find O(n³), O(n⁴), and so on; these all come under Polynomial Complexity, where the code tends to go through the elements again and again in nested loops.

These are some of the most commonly seen complexities. There are others, such as:

6. O(log² n) {Log square} Complexity

7. O(n^(1/2)) {Root n} Complexity

8. O(2^n) {Exponential} Complexity

9. O(e^n) {Exponential} Complexity

10. O(n!) {Factorial} Complexity

and so many others…

I have created graphs of these 10 complexities. Feel free to check them out to understand the complexities in a better way.

Now after looking at so many complexities you might be wondering which is better?

The order is:

Constant < Logarithmic < Log square < Root n < Linear < Linearithmic < Quadratic < Exponential (2^n) < Exponential (e^n) < Factorial

Here, complexity increases from left to right, but note that this ordering holds for large inputs only. Look at the graphs carefully and comment below: for small input values, which is greater, Exponential or Quadratic?

One more important thing: the complexity can differ for different inputs, and these variations are called the best-case, average-case, and worst-case complexities. When someone quotes a complexity without qualification, they usually mean the worst case.

I hope this blog was useful.

Thank you…
