Asymptotic Notation

Whenever we want to analyze an algorithm, we need to calculate its complexity. But the complexity we calculate does not tell us the exact amount of resources the algorithm requires. So instead of stating an exact resource count, we represent that complexity in a general form (a notation) that captures the basic behavior of the algorithm.

Asymptotic notation of an algorithm is a mathematical representation of its complexity.

In asymptotic notation, when we represent the complexity of an algorithm, we keep only the most significant terms and ignore the least significant ones (here, complexity may be space complexity or time complexity).

For example, consider the following time complexities of two algorithms…

  • Algorithm 1 : 5n² + (2n + 1)
  • Algorithm 2 : 10n² + (8n + 3)

Generally, when we analyze an algorithm, we consider its time complexity for large values of the input size (the ’n’ value). In the two time complexities above, for large ’n’ the term ‘2n + 1’ in algorithm 1 is far less significant than the term ‘5n²’, and the term ‘8n + 3’ in algorithm 2 is far less significant than the term ‘10n²’. For example, at n = 1000, 5n² is 5,000,000 while 2n + 1 is only 2,001.

We mainly use three types of asymptotic notations, and those are as follows…

  • Big — Oh (O)
  • Big — Omega (Ω)
  • Big — Theta (Θ)

Big — Oh Notation (O)

Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.

Big O notation is used to define the upper bound of an algorithm in terms of Time Complexity.

Some Basic Rules:

1. Nested loops are multiplied together.
2. Sequential loops are added.
3. Only the largest term is kept, all others are dropped.
4. Constants are dropped.
5. Conditional checks are constant (i.e. O(1)).
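
To see these rules in combination, here is a minimal sketch (the function and its contents are illustrative, not from the original article):

func analyzeRules(elements: [Int]) {
    // Sequential loop: n iterations → O(n).
    var sum = 0
    for element in elements {
        sum += element
    }

    // Nested loops: n * n iterations → O(n²).
    var pairCount = 0
    for _ in elements {
        for _ in elements {
            pairCount += 1
        }
    }

    // O(n) + O(n²): keep only the largest term and drop constants → O(n²).
    print(sum, pairCount)
}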

Linear Complexity:

In a simple loop over the input we iterate ’n’ times. Since nothing else is going on inside the loop (other than constant-time printing), the algorithm is said to be O(n), as in the sketch below.
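
A minimal sketch (the function name and the print body are illustrative):

func printAll(elements: [String]) {
    // One pass over the input: n iterations of constant-time work → O(n).
    for element in elements {
        print(element)
    }
}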

Quadratic Complexity:
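
A sketch of a doubly nested loop (the pair printing is an illustrative body):

func printAllPairs(elements: [String]) {
    // For each of the n outer iterations, the inner loop also runs n times.
    for first in elements {
        for second in elements {
            print(first, second)
        }
    }
}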

Each loop runs ’n’ times. Since the inner loop is nested inside the outer one, the work is n * n, thus O(n²). Hardly efficient. We can make it a bit better by doing the following:
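
A sketch of the improved version, where the inner loop only runs up to the outer index (names again illustrative):

func printUniquePairs(elements: [String]) {
    // The inner loop runs i times for outer index i, so the total
    // work is 0 + 1 + … + (n - 1) = n(n - 1)/2 iterations.
    for i in 0..<elements.count {
        for j in 0..<i {
            print(elements[j], elements[i])
        }
    }
}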

The outer loop is still ’n’. The inner loop now executes ‘i’ times, with ‘i’ ending at (n - 1), so the total number of inner iterations is 0 + 1 + … + (n - 1) = n(n - 1)/2. Dropping the constant factor and the lower-order term, this is still within the bound of O(n²).

Inner loops with a fixed iteration count are common as well; an example:
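
A sketch with a constant-size inner loop (printing each element twice is the illustrative body):

func printTwice(elements: [String]) {
    // The inner loop always runs exactly 2 times, no matter how large n is.
    for element in elements {
        for _ in 0..<2 {
            print(element)
        }
    }
}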

The outer loop is ’n’, the inner loop is 2, thus we have 2n; dropping the constant gives us O(n).

Imagine a classroom of 100 students in which you gave your pen to one of them. Now you want that pen back. Here is how the different complexities play out as you search for it:

O(n²): You go and ask the first student of the class, “Do you have the pen?”, and you also ask them about the other 99 students: “Does Jeff have the pen? Does Mark have the pen?”, and so on. If you don’t get the answer, you move on to the next student and repeat. In the worst case you need to ask n² questions, questioning each student about each other student.

O(n): You ask each student individually if they have the pen. If not, you move on to the next one. In the worst case you need to ask n questions.

O(log n): Now, divide the class into two groups and ask which group has the pen; say it is group A. Divide group A further into two parts, B and C, and again ask which of the two has the pen. Take whichever group has the pen and divide it further. Repeat the process until you are left with the one student who has your pen. This is what we mean by O(log n).
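
The code counterpart of this halving strategy is binary search on a sorted array. A minimal sketch (the function name is illustrative):

func binarySearch(sortedElements: [Int], target: Int) -> Int? {
    var low = 0
    var high = sortedElements.count - 1

    // Each pass discards half of the remaining range → O(log n).
    while low <= high {
        let middle = (low + high) / 2
        if sortedElements[middle] == target {
            return middle
        } else if sortedElements[middle] < target {
            low = middle + 1
        } else {
            high = middle - 1
        }
    }
    return nil // target is not present
}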

O(n log n): The best example of O(n log n) is merge sort, a divide-and-conquer algorithm. Imagine you have a list of integers. We divide the list in two again and again until we are left with a number of lists with one item in each; each of these lists is therefore sorted. We then merge each list with its neighbor (comparing the first elements of each every time). We repeat this with the new composite lists until we have our sorted result.

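A minimal merge sort sketch (a straightforward recursive version; the helper names are illustrative):

func mergeSort(_ list: [Int]) -> [Int] {
    // Base case: a list of 0 or 1 elements is already sorted.
    guard list.count > 1 else { return list }

    // Divide: split the list in half → O(log n) levels of recursion.
    let middle = list.count / 2
    let left = mergeSort(Array(list[..<middle]))
    let right = mergeSort(Array(list[middle...]))

    // Conquer: merging does O(n) work per level → O(n log n) overall.
    return merge(left, right)
}

func merge(_ left: [Int], _ right: [Int]) -> [Int] {
    var result: [Int] = []
    var i = 0, j = 0

    // Repeatedly take the smaller of the two front elements.
    while i < left.count && j < right.count {
        if left[i] <= right[j] {
            result.append(left[i]); i += 1
        } else {
            result.append(right[j]); j += 1
        }
    }

    // One side is exhausted; append whatever remains of the other.
    result.append(contentsOf: left[i...])
    result.append(contentsOf: right[j...])
    return result
}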

O(2ⁿ): denotes an algorithm whose growth doubles with each addition to the input data set. The growth curve of an O(2ⁿ) function is exponential. The classic example is the recursive calculation of Fibonacci numbers:

func fibonacci(number: Int) -> Int {
    // Base case: fib(0) = 0, fib(1) = 1.
    if number <= 1 { return number }
    // Each call branches into two more calls, so the call tree
    // roughly doubles in size as `number` grows → O(2ⁿ).
    return fibonacci(number: number - 2) + fibonacci(number: number - 1)
}

O(1): describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.

func isFirstElementNull(elements: [String?]) -> Bool {
    // A single index lookup takes the same time regardless of the
    // array's size → O(1). (Assumes a non-empty array.)
    return elements[0] == nil
}

Big — Omega Notation (Ω)

Big — Omega notation is used to define the lower bound of an algorithm in terms of Time Complexity.

That means Big — Omega notation always indicates the minimum time required by an algorithm over all input values; in other words, it describes the best case of an algorithm’s time complexity.
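
For example, linear search is Ω(1), because in the best case the very first element is already the target, even though the worst case is O(n). A minimal sketch (the function name is illustrative):

func linearSearch(elements: [Int], target: Int) -> Int? {
    // Best case: the target sits at index 0 and we stop after one
    // comparison → Ω(1). Worst case: we scan all n elements → O(n).
    for (index, element) in elements.enumerated() {
        if element == target { return index }
    }
    return nil
}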

Big — Theta Notation (Θ)

Big — Theta notation is used to define a tight bound of an algorithm in terms of Time Complexity.

That means Big — Theta notation indicates that the time required by an algorithm is bounded both above and below by the same function of ’n’, up to constant factors. It is often used informally to describe the average case of an algorithm’s time complexity.
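
For example, summing an array touches each of the ’n’ elements exactly once on every input, no more and no less, so it is Θ(n). A minimal sketch:

func sum(elements: [Int]) -> Int {
    // Every input of size n costs exactly n additions: at least n (Ω(n))
    // and at most n (O(n)), which together give Θ(n).
    var total = 0
    for element in elements {
        total += element
    }
    return total
}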