Suppose you are assigned to write a program. You rush into the code, hit some errors, fix them, test, everything works fine, and you're done!

Hold your horses. Is your algorithm worth implementing? Will it solve the largest test cases in acceptable time? Do you know other algorithms that could solve the problem, and which of them should you implement?

Therefore, comparing different algorithms is essential for achieving better performance.

So, the question is: how do we compare algorithms? With Big O notation. Sounds scary? It's not that bad.

Big O Notation is a convenient way to express the worst-case scenario for an algorithm.

Thus, Big O Notation will be our definition of a better algorithm.

# Related Asymptotic Notations

Big O is the most commonly used asymptotic notation for comparing algorithms, but two other notations are also often used alongside it.

## Big Omega Ω

It represents the lower bound. For comparing worst-case performance, a lower bound alone doesn't help much.

Ω(N) means it takes at least N steps.

## Big Theta Θ

It represents both the lower bound and the upper bound of an algorithm. It's often harder to establish, because the two bounds must match.

Θ(N) means it takes at least N and at most N steps, up to constant factors.

## Big Oh O

It represents the upper bound. Big O is the most useful because it represents the worst-case behavior: it guarantees that the program will terminate within a certain time. It may stop earlier, but never later.

O(N) means it takes at most N steps.

## 📣 BONUS Question: What is the relationship between Big O, Θ, Ω and best, worst, and average case of an algorithm?

The asymptotic notations express the lower (Big Omega), upper (Big O), or both lower and upper (Big Theta) limits of the best, average, or worst case (the types of analysis) of an algorithm.

So, asymptotic notations are used to describe the best, average, or worst case (types of analysis) of an algorithm.

As an example, worst-case analysis gives the maximum number of operations assuming the input is in the worst possible state, while the Big O notation expresses an upper bound on the number of operations done in the worst case.

Although Big O notation is not inherently tied to worst-case analysis, we usually express the worst case with Big O notation. You can also analyze any case just fine without Big O notation.

For example, the time complexity of Mergesort in the worst case is Θ(N log N). This means that in the worst-case analysis, Mergesort makes roughly N log N operations.

As another example, we can use the Big O notation in average-case analysis as well. In binary search, the best case is O(1), while the average and worst cases are O(log N).

In short, there is no fixed relationship of the type "Big O is used for the worst case, Theta for the average case." All the notations can be (and sometimes are) used when talking about the best, average, or worst case of an algorithm.
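To make best versus worst case concrete, here is a minimal linear-search sketch (the `linearSearch` helper is hypothetical, not from the text above): its best case is Θ(1), when the goal is the first element, and its worst case is Θ(N), when the goal is absent.

```cpp
#include <cassert>

// Linear search: returns the index of goal, or -1 if absent.
// Best case: goal is at index 0 -> 1 comparison, Theta(1).
// Worst case: goal is absent -> n comparisons, Theta(n).
int linearSearch(const int arr[], int n, int goal) {
    for (int i = 0; i < n; i++)
        if (arr[i] == goal)
            return i;
    return -1;
}
```

Both facts can be stated with either notation; the case being analyzed and the notation used to bound it are independent choices.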

# Running Time Calculations

There are often several candidate algorithms, and perhaps the surest way to decide which is faster is to code them and run them. But we would like to eliminate the bad ones early, so analysis is usually required. Analysis also helps us design efficient algorithms.

## A Simple Example

```cpp
int sum(int n) {
    int totalSum = 0;               // line 1
    for (int i = 0; i < n; i++)     // line 2
        totalSum += i;              // line 3
    return totalSum;                // line 4
}
```

The analysis for this piece of code is simple. **Lines 1 and 4** count for one unit of time each.

**Line 2** counts for one unit for initializing, N + 1 for the tests, and N for the increments. The test is executed N + 1 times, where the last time is when the condition is false. The increment is executed N times. So, the total cost is 2N + 2.

**Line 3** counts for 2 units; one addition and one assignment, and it’s executed N times, for a total of 2N units.

Therefore, we have **2 + 2N + 2 + 2N = 4N + 4**.
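The 4N + 4 total above can be checked by instrumenting the function with a step counter (a sketch; `countSumUnits` is a hypothetical helper using exactly the counting convention of the analysis: one unit per initialization, test, increment, addition, assignment, and return).

```cpp
#include <cassert>

// Counts time units for sum(n): line 1 is 1 unit; line 2 is
// 1 init + (n+1) tests + n increments; line 3 is 2 units
// (add + assign) per iteration; line 4 is 1 unit.
long long countSumUnits(int n) {
    long long units = 0;
    int totalSum = 0;
    units += 1;                        // line 1: initialization
    units += 1;                        // line 2: i = 0
    for (int i = 0; i < n; i++) {
        units += 1;                    // line 2: test (true)
        units += 2;                    // line 3: add + assign
        totalSum += i;
        units += 1;                    // line 2: increment
    }
    units += 1;                        // line 2: final test (false)
    units += 1;                        // line 4: return
    return units;
}
```

Running it for any n gives exactly 4N + 4 units, matching the hand analysis.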

## Simplify The Analysis

Fortunately, since we are going to use the Big O notation, there are a lot of shortcuts we can take to simplify the analysis: we throw away constant factors and low-order terms.

The terms are essentially the things separated by plus / minus symbols.

We drop the **+4** because it is clearly lower than **4N** (it's the low-order term), and we also drop the constant factor. So we end up with a complexity of **O(N)** instead of **O(4N + 4)**.

If the constant is large, it is good practice to give it a name and to include it in the asymptotic notation. So, a program that costs 10⁵ * N units would have complexity O(K * N), where K = 10⁵.

This was a simple example to get up and running. But, there are some other general rules to follow when analyzing our code.

## General Rules

**1. Loops**

The running time of the loop is at most the running time of the statements inside the loop (including the tests) multiplied by the number of iterations.

As an example, the following program has complexity of O(N).

```cpp
for (int i = 0; i < n; i++)
    k++;  // This statement will run N times.
```

**2. Nested Loops**

The total running time of the nested loops is the running time of the outer loop multiplied by the inner loop(s).

Let's look at it the other way: analyze from the inside out.

The total running time of a statement inside a group of nested loops is the running time of that statement multiplied by the product of the sizes of all the loops.

As an example, the following program has complexity of O(N²).

```cpp
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        k++;  // This statement will run N^2 times.
```

**3. Consecutive Statements**

When there are consecutive statements, we count the statement with maximum complexity.

As an example, the following program, which has an O(N) loop followed by an O(N²) loop, has complexity O(N²).

```cpp
for (int i = 0; i < n; i++)  // O(N)
    k++;

for (int i = 0; i < n; i++)  // O(N^2)
    for (int j = 0; j < n; j++)
        k++;
```

**4. If-Else**

The running time is never more than the running time of the test(s) plus the running time of the block with maximum complexity.

```
if (condition)
    block 1
else if (condition)
    block 2
else
    block 3
```
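A concrete sketch of this rule (the `branchWork` function is hypothetical): the test is O(1), one branch is O(1), and the other is O(N), so the whole if-else is bounded by O(N).

```cpp
#include <cassert>

// If n is odd, the first branch runs in O(1); if n is even,
// the else branch loops n times, O(n). The whole if-else is
// bounded by its most expensive branch plus the test: O(n).
int branchWork(int n) {
    int k = 0;
    if (n % 2 == 1) {                // O(1) test
        k = 1;                       // O(1) branch
    } else {
        for (int i = 0; i < n; i++)  // O(n) branch
            k++;
    }
    return k;
}
```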

**5. Simple Statements**

Return statements, variable initializations, increments, assignments, etc. are all counted as O(1) operations.

# Big O Pitfalls

Big O notation tells us about the upper bound of an algorithm, ignoring constants and low-order terms. Therefore, the exact number of steps an algorithm takes in the worst case may be more or less than what the Big O notation suggests.

## Exact Steps > Big O

An algorithm that takes 5 * N steps has a complexity of O(N) in the worst case, which is smaller than the exact number of steps.

```cpp
for (int i = 0; i < 5 * n; i++)
    k++;
```

## Exact Steps < Big O

As an example, suppose a loop may run up to N = 10⁹ iterations, so it has complexity O(N); but, given the nature of the data, the goal is always found before the counter hits 10⁷.

```cpp
for (int i = 0; i < n; i++)
    if (arr[i] == goal) break;
```

Another example is when the loop bounds decrease over time: the first loop runs N times, the next N-1, then N-2, and so on down to 1. The Big O notation for the following code is O(N³), while the exact number of steps is much less than that.

```cpp
for (int i = 0; i < n; ++i)
    for (int j = i; j < n; ++j)
        for (int k = j; k < n; ++k)
            count++;
```
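Counting the iterations of this decreasing triple loop confirms the gap (a small verification sketch; `tripleLoopCount` is a hypothetical helper): the innermost statement runs n(n+1)(n+2)/6 times, roughly n³/6, about six times fewer than the n³ steps the O(N³) bound suggests.

```cpp
#include <cassert>

// Counts how many times the innermost statement runs for the
// decreasing triple loop (0 <= i <= j <= k < n).
long long tripleLoopCount(int n) {
    long long count = 0;
    for (int i = 0; i < n; ++i)
        for (int j = i; j < n; ++j)
            for (int k = j; k < n; ++k)
                count++;
    return count;  // exactly n(n+1)(n+2)/6
}
```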

# Common Time Complexities

When speaking about the time/memory complexity of an algorithm, instead of using the formal notation, we may simply state the class of an algorithm.

Here are some classes of the common time complexities.

## Constant — O(1)

The algorithm does a constant number of operations, independent of the input.

```cpp
int start = 6;
int end = 100;
int mid = (end - start) / 2;
if (mid % 2 == 0) mid--;
```

## Linear — O(N)

The running time of the algorithm increases linearly with the size of the input.

```cpp
for (int i = 0; i < n; i++)
    k++;
```

## Logarithmic — O(Log N)

The remaining input is reduced by some factor with each step, so the running time grows logarithmically. A very simple example of this type is an algorithm that keeps dividing the input by two. A binary search algorithm follows the same rule.

```cpp
while (n > 0) {
    n /= 2;
}
```

Here the logarithm is base 2, that is, log₂ N. The base only changes a constant factor, so it is dropped in the Big O notation.
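Binary search is the canonical O(log N) algorithm: each comparison halves the remaining range of a sorted array, so at most about log₂ N + 1 comparisons are needed. A minimal sketch:

```cpp
#include <cassert>

// Binary search on a sorted array: each iteration halves the
// search range [lo, hi], so the loop runs O(log n) times.
int binarySearch(const int arr[], int n, int goal) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // avoids overflow of lo + hi
        if (arr[mid] == goal)
            return mid;
        else if (arr[mid] < goal)
            lo = mid + 1;              // discard the left half
        else
            hi = mid - 1;              // discard the right half
    }
    return -1;  // not found
}
```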

## Linearithmic — O(N Log N)

The running time of the algorithm is as a result of performing a logarithmic operation N times.

For example, inserting N nodes into a balanced binary search tree: each insertion takes O(Log N) time, so the entire algorithm takes linearithmic time.

Also, the average case of quicksort, heapsort, and merge sort takes O(NLogN) time.

```cpp
int arr[n] = {1, 6, 3};
for (int i = 0; i < n; ++i)
    binarySearchTree.insert(arr[i]);  // O(log N) per insertion
```
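Merge sort, mentioned above, is the classic linearithmic algorithm: the array is halved O(log N) times, and each level of recursion does O(N) merging work. A minimal sketch (one common variant; production code would merge in place into a shared buffer):

```cpp
#include <cassert>
#include <vector>

// Merge sort on a[lo, hi): log2(n) levels of recursion with
// O(n) merge work per level, for O(n log n) total.
void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo <= 1)
        return;                        // 0 or 1 elements: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);             // sort left half
    mergeSort(a, mid, hi);             // sort right half
    std::vector<int> merged;           // merge the two sorted halves: O(n)
    int i = lo, j = mid;
    while (i < mid && j < hi)
        merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) merged.push_back(a[i++]);
    while (j < hi)  merged.push_back(a[j++]);
    for (int k = 0; k < (int)merged.size(); ++k)
        a[lo + k] = merged[k];
}
```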

## Square Root — O(sqrt(N))

The running time of the algorithm grows with the square root of the size of the input. As an example, you can check whether a number is prime by looping only up to its square root.

```cpp
bool isPrime(int n) {
    if (n == 2)
        return true;
    if (n < 2)
        return false;
    for (int i = 2; i * i <= n; i++)  // loop only up to sqrt(n)
        if (n % i == 0)
            return false;
    return true;
}
```

## Quadratic — O(N²)

The running time of the algorithm is the result of performing a linear operation N times; so it's N multiplied by N. Common sorting algorithms like bubble sort, selection sort, and insertion sort take O(N²).

```cpp
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        k++;
```
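Bubble sort, one of the quadratic sorting algorithms named above, is itself just such a nested loop. A minimal sketch:

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// Bubble sort: n-1 passes, each doing up to n-1 adjacent
// comparisons and swaps, so the total work is O(n^2).
void bubbleSort(std::vector<int>& a) {
    int n = (int)a.size();
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j + 1 < n - i; j++)
            if (a[j] > a[j + 1])
                std::swap(a[j], a[j + 1]);  // bubble larger value up
}
```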

## Cubic — O(N³)

The running time of the algorithm is the result of performing a linear operation N² times; so it's N multiplied by N, multiplied by N.

```cpp
for (int i = 0; i < n; ++i)
    for (int j = 0; j < n; ++j)
        for (int k = 0; k < n; ++k)
            count++;
```

## Polynomial — O(N^c)

The running time of the algorithm is the result of performing a linear operation N^(c-1) times, for some constant c > 1.

In other words, The running time of the algorithm is a simple polynomial function of the size of the input.

Since polynomial time means a complexity of O(N^c) for some c > 1, O(N²) and O(N³) are also polynomial.

```cpp
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        k++;
```

## Exponential — O(c^N)

The running time of the algorithm is a constant c raised to the power N, where c > 1. A common situation is traversing all the nodes of a complete binary tree: a tree of depth d has O(2^(d+1)) nodes, which is O(2^d), so the traversal is exponential in the depth.

```cpp
void traverse(Node node) {
    if (node == NULL) {
        return;
    }
    print(node.value);
    traverse(node.left);
    traverse(node.right);
}
```
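Another classic exponential example (an addition, not from the traversal above) is the naive recursive Fibonacci: each call spawns two more calls, so the call tree has O(2^N) nodes.

```cpp
#include <cassert>

// Naive Fibonacci: fib(n) makes two recursive calls, so the
// recursion tree has O(2^n) nodes -- exponential time.
long long fib(int n) {
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}
```

Memoizing the results brings this down to O(N), which is why the naive version is the textbook warning case.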

## Factorial — O(N!)

The running time of the algorithm is the factorial of N. It's common when generating permutations. The total number of recursive calls is the summation of N!/k! for k = 0…N; O(N!/0! + N!/1! + … + 1), which is O(N!).

```cpp
const int n = 3;
int arr[n] = {1, 2, 5}, cur[n];
bool vis[n];

void permutation(int i) {
    if (i == n) {
        print(cur);
        return;
    }
    for (int j = 0; j < n; ++j) {
        if (!vis[j]) {
            vis[j] = 1;
            cur[i] = arr[j];
            permutation(i + 1);
            vis[j] = 0;
        }
    }
}

// This will print
// {1,2,5}, {1,5,2}, {2,1,5}, {2,5,1}, {5,1,2}, {5,2,1}
```