# Data Structures & Algorithms: Asymptotic Analysis & Notations

This article explains the importance of asymptotic analysis, followed by an introduction to asymptotic notations. Worst, average, and best case time complexity analysis are also briefly discussed.

## Why asymptotic analysis?

Let’s assume we have implementations of two of the most popular sorting algorithms:

- **Insertion Sort**, which takes roughly *a·n²* time.
- **Merge Sort**, which takes roughly *b·n·log n* time.

Here *a* and *b* are constants, and *n* is the input size. Intuitively we know that *n² > n·log n* for any input size *n > 0*.

**Now consider a situation:** insertion sort is implemented in one of the fastest programming languages, say **C++**, and merge sort is implemented in a relatively slower programming language, say **Python**. Also, insertion sort is running on a machine 1000 times faster than the one running merge sort. **Can we still guarantee that merge sort is faster than insertion sort?** Let’s play!

Say we use two machines: **Machine A** processes **10¹⁰ instructions/second** and **Machine B** processes **10⁷ instructions/second**. We run insertion sort on Machine A and merge sort on Machine B. Ignoring the constant factors, consider two sets of inputs.

**Input array of size 1000:** insertion sort needs about 10⁶ instructions, so 10⁶ / 10¹⁰ = 10⁻⁴ seconds; merge sort needs about 1000 · log₂ 1000 ≈ 10⁴ instructions, so 10⁴ / 10⁷ = 10⁻³ seconds.

**Input array of size 1 million:** insertion sort needs about 10¹² instructions, so 10¹² / 10¹⁰ = 100 seconds; merge sort needs about 10⁶ · log₂ 10⁶ ≈ 2·10⁷ instructions, so 2·10⁷ / 10⁷ = 2 seconds.
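These back-of-the-envelope numbers can be reproduced with a short sketch. The machine speeds are the idealized figures from this thought experiment, and the instruction counts ignore the constants *a* and *b*:

```python
import math

# Idealized machine speeds from the thought experiment (instructions/second).
MACHINE_A = 10**10  # the fast machine, running insertion sort
MACHINE_B = 10**7   # the slow machine, running merge sort

def insertion_sort_time(n):
    # Roughly n^2 instructions, ignoring the constant factor a.
    return n**2 / MACHINE_A

def merge_sort_time(n):
    # Roughly n * log2(n) instructions, ignoring the constant factor b.
    return n * math.log2(n) / MACHINE_B

for n in (1_000, 1_000_000):
    print(f"n={n}: insertion sort ~{insertion_sort_time(n):.4f}s, "
          f"merge sort ~{merge_sort_time(n):.4f}s")
```

At n = 1000 insertion sort wins (about 10⁻⁴ s vs 10⁻³ s), but at n = 1,000,000 merge sort wins by a wide margin (about 2 s vs 100 s), despite the slower machine and slower language.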

As we can see, insertion sort performs better when the size of the array is relatively small. However, for large inputs, insertion sort is highly inefficient compared to merge sort. So,

## How to analyze algorithms independent of programming language, computer performance, and other factors?

**Asymptotic Analysis** is *the measure of the order of growth of an algorithm in terms of n (the input size)*. **It guarantees that**, beyond a certain input size **n**, merge sort running on the world’s slowest machine will outperform insertion sort running on the world’s fastest machine. Hence, you don’t even have to execute the algorithm to analyze its time complexity.

# Asymptotic Notations

Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. **Theta (Θ)**, **Big O (O)**, and **Omega (Ω)** are the most commonly used to represent the time complexity of algorithms:

## 1. Θ notation:

The theta notation bounds a function from above and below, so it defines exact asymptotic behavior.

A simple way to get the Θ notation of an expression is to drop lower-order terms and ignore the leading constant.

Example: *6n³ + 10n² + 300 = Θ(n³).*

Dropping lower-order terms is fine because there will always be a value *n0* after which *Θ(n³)* has higher values than *Θ(n²)*, irrespective of the constants involved. For a given function g(n), we denote by Θ(g(n)) the following set of functions:

```
Θ(g(n)) = { f(n): there exist positive constants c1, c2, and n0
            such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }
```
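As a quick numeric sanity check of this definition for the example *6n³ + 10n² + 300 = Θ(n³)*, the constants c1 = 6, c2 = 7 with n0 = 13 are one valid choice of witnesses (other constants work too):

```python
def f(n):
    return 6 * n**3 + 10 * n**2 + 300

def g(n):
    return n**3

# One valid choice of witnesses for f(n) = Θ(n³): c1 = 6, c2 = 7, n0 = 13.
c1, c2, n0 = 6, 7, 13
assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))

# The definition genuinely needs a threshold: at n = 12 the upper bound fails.
assert f(12) > c2 * g(12)
```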

## 2. Big O notation:

The Big O notation defines an upper bound of an algorithm.

For example, in the case of insertion sort, it takes linear time in the best case (*when the array is already sorted*) and quadratic time in the worst case. We can safely state that the time complexity of insertion sort is **O(n²)**, since that upper bound also covers the linear best case.

However, in the case of Θ notation, the worst-case time complexity of insertion sort is Θ(n²) and the best case is Θ(n). Hence, Θ gives a tight bound.
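The gap between the two cases can be observed directly by counting comparisons in a plain insertion sort (a minimal sketch, not the only way to implement it):

```python
def insertion_sort_comparisons(arr):
    """Sort a copy of arr with insertion sort; return the number of comparisons."""
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1        # compare key against a[j]
            if a[j] <= key:
                break
            a[j + 1] = a[j]         # shift the larger element right
            j -= 1
        a[j + 1] = key
    return comparisons

n = 1000
best = insertion_sort_comparisons(range(n))          # already sorted: ~n
worst = insertion_sort_comparisons(range(n, 0, -1))  # reverse sorted: ~n²/2
print(best, worst)  # 999 vs 499500
```

On the sorted input each element costs one comparison (linear), while on the reverse-sorted input element *i* costs *i* comparisons, summing to n(n−1)/2 (quadratic).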

The Big O notation is particularly useful when we only have an upper bound on the time complexity of an algorithm, which is why Big O is used more often than the other two notations.

For a given function g(n), we denote by O(g(n)) the following set of functions:

```
O(g(n)) = { f(n): there exist positive constants c and n0
            such that 0 <= f(n) <= c*g(n) for all n >= n0 }
```
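To connect the definition back to insertion sort: on a reverse-sorted array of size n, insertion sort makes n(n−1)/2 comparisons, and c = 1 with n0 = 1 witness that this count is O(n²). A minimal numeric check over a finite range:

```python
def worst_case_comparisons(n):
    # Insertion sort on a reverse-sorted array makes n*(n-1)/2 comparisons.
    return n * (n - 1) // 2

# Witnesses for worst_case_comparisons(n) = O(n²): c = 1, n0 = 1.
c, n0 = 1, 1
assert all(0 <= worst_case_comparisons(n) <= c * n**2
           for n in range(n0, 10_000))
```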

## 3. Ω notation:


The Omega notation provides an asymptotic lower bound.

The Ω notation can be useful when we have a lower bound on the time complexity of an algorithm.

For a given function g(n), we denote by Ω(g(n)) the set of functions:

```
Ω(g(n)) = { f(n): there exist positive constants c and n0
            such that 0 <= c*g(n) <= f(n) for all n >= n0 }
```

# Worst, average, and best case time complexity analysis

It is important to analyze an algorithm in terms of time complexity to improve it if possible.

As we discussed, asymptotic analysis appears to be the best way to do so, because it analyzes algorithms in terms of the input size: it checks how the running time grows as the input size grows.

There are three cases to analyze an algorithm:

**Worst case analysis (Mostly Performed):**

In the worst case analysis, we calculate the **upper bound** on the run time of an algorithm. We do that by figuring out the case that maximizes the number of operations to be executed.

Example: for insertion sort, the worst case happens when the input array is sorted in reverse order. Hence, the worst-case complexity of insertion sort is O(n²), where n is the input size. Since this bound covers all the other possible cases of insertion sort, it is the **most performed** analysis of algorithms.

**Average Case Analysis (Sometimes Performed):**

In the average case analysis, we consider all possible inputs and calculate computing time for all of the inputs. Then, sum all the calculated values and divide the sum by the total number of inputs. For computing average case analysis, we must know (or predict) the distribution of cases.
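A classic illustration (not from the original text) is linear search, under the common assumption that the target is equally likely to be at any of the n positions. Averaging the comparison counts over all positions gives (n + 1) / 2 comparisons, i.e. Θ(n) on average:

```python
def linear_search_comparisons(arr, target):
    """Return how many comparisons linear search makes before finding target."""
    for i, value in enumerate(arr):
        if value == target:
            return i + 1
    return len(arr)  # target absent: every element was compared

n = 1000
arr = list(range(n))
# Assume the target is equally likely to be at each of the n positions.
total = sum(linear_search_comparisons(arr, t) for t in arr)
average = total / n
print(average)  # (n + 1) / 2 = 500.5
```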

**Best Case Analysis (Delusive):**

In the best case analysis, we calculate the **lower bound** on the run time of an algorithm. To compute the best case of an algorithm, we must know (or predict) the exact case that causes the minimum number of operations to be executed.

**Upcoming articles:** Complexity Analysis of Loops, Complexity Analysis of Recursion, Space Complexity Analysis.

**References:** Introduction to Algorithms, GeeksForGeeks