# Understanding Time Complexity and its Importance in Technology

If you’re studying computer science, you must have come across time-complexity notation one way or another, and if you haven’t, you might not be on the right track to becoming a programmer. There’s hardly a tech interview in the world that won’t ask you to identify the running-time complexity of a program, so you’d better get yourself familiar with it. Even setting interviews aside, a programmer needs to know which algorithm has the lowest running time while writing code, unless, of course, they want their program to lag to the extent that the user has to take a power nap before it actually reacts to any form of input.

**The time complexity of an algorithm is the total amount of time required by the algorithm to complete its execution.** In simple words, every piece of code we write takes time to execute, and the time taken by a piece of code to run is known as its time complexity. The lower the time complexity, the faster the execution.

If you’ve programmed a bit before, you’re probably wondering how any of this can be of use to you, because your programs ran fine even when you didn’t know this time-complexity stuff, right? Fair enough, but there’s a catch. The time a **program** takes to run does not depend solely on the efficiency of the code; it also depends on the processing power of the machine. In a small program, the choice of algorithm hardly matters because there’s very little work being carried out by the processor. Professional code, however, isn’t 200 or 300 lines. It’s usually longer than a thesis written by a professor, and in cases like that a lot of processing power is being used. If your code is not efficient in terms of algorithms and data structures, you might find yourself in a rather sticky situation.

Time complexity is often measured in terms of:

**Big Oh (O): worst-case running time**

In algorithm analysis, Big Oh is often used to describe the worst case of an algorithm by taking the highest-order term of its polynomial running time and ignoring all the constants, since they aren’t influential for sufficiently large input. So, if an algorithm has a running time like f(n) = 3n + 100, we can simply state that the algorithm has complexity O(n), which means it executes at most on the order of n operations (ignoring the constant 100 added at the end and the constant 3 multiplying n). Thus, we can guarantee that the algorithm will not perform worse than this bound.

**Example**: prove that f(n) = 5n² + 3n + 2 has Big Oh O(n²)

Since we know that the highest order of f(n) is 2, f(n) cannot have a growth rate greater than that of n², which means n² is an upper bound on its worst-case running time.
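To make the claim precise, we can exhibit an explicit constant. For every n ≥ 1, each lower-order term is at most n², so:

```latex
f(n) = 5n^2 + 3n + 2 \;\le\; 5n^2 + 3n^2 + 2n^2 \;=\; 10n^2 \qquad (n \ge 1)
```

So f(n) ≤ c·n² with c = 10, which is exactly what f(n) = O(n²) means.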

**Big Omega (Ω): best-case running time**

Big Omega is often used to describe the best-case running time of an algorithm, a lower bound, by choosing the lowest-order term of the polynomial function and ignoring all the constants.

**Example**: prove that f(n) = 5n² + 3n has Big Ω(n)

We know that the lowest-order term of the polynomial function f(n) (i.e. 3n, of order 1) grows no faster than f(n) itself, so f(n) is bounded below by a multiple of n, and thus f(n) has a big Ω(n).

**Big Theta (Θ): both best- and worst-case running time**

Big Theta describes a running time that is simultaneously a best-case and a worst-case bound. This idea might be a bit tricky to grasp at first, but don’t worry; it’s not that different from Big Oh or Big Omega, though if you don’t completely get those two concepts yet, you might want to go through them again before moving on. Saying an algorithm has both best- and worst-case running time of the same function means the algorithm is both Big Oh and Big Omega of that function at the same time. In simpler words, if we consider a polynomial function f(n) = 3n³ + 4n, the Big Theta of the function is going to be neither greater nor smaller than the highest-order term of the function. It’s going to be exactly equal to it, which in this case is n³.

**Example**: prove that f(n) = 3n has Big Θ(n)

Since the highest- and the lowest-order terms of the polynomial are both of order 1, the big Θ of f(n) is going to be n.
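Formally, Θ(n) requires constants c₁ and c₂ that sandwich f(n) between two multiples of n; for f(n) = 3n both constants can simply be 3:

```latex
c_1 \cdot n \;\le\; 3n \;\le\; c_2 \cdot n \qquad \text{with } c_1 = c_2 = 3,\; n \ge 1
```

So f(n) is bounded above and below by a multiple of n, giving f(n) = Θ(n).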

Even though we study all the variations of time complexity, the most commonly used is Big Oh.

### Common Running Times

Now that we have the basics out of the way, let’s dive into understanding the time complexities of algorithms. In order to do that, we need to define a model machine. Let us assume a machine with the following configuration:

- A single-processor machine
- A 32-bit operating system
- Sequential (one instruction at a time) execution
- 1 unit of time for arithmetic and logical operations
- 1 unit of time for assignments and return values
- 1 unit of time for read and write operations

The configuration defined above is not universal; we’re simply assuming that the machine we’re using matches it. With that model in mind, there are some common running times you should keep in mind when analyzing an algorithm:
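Under this cost model you can tally the units of time for a simple loop that sums an array. A rough sketch in Python (the exact constants depend on how you count individual operations, but the linear growth doesn’t):

```python
def units_to_sum(n):
    """Total time units to sum an n-element array under the model above
    (1 unit per assignment, comparison, arithmetic op, read, and return).

    total = 0        -> 1 unit       (assignment)
    loop test        -> n + 1 units  (comparisons, including the failing one)
    total += a[i]    -> 2n units     (n reads + n add-and-assigns)
    return total     -> 1 unit
    """
    return 1 + (n + 1) + 2 * n + 1  # = 3n + 3, i.e. O(n)
```

The formula 3n + 3 illustrates why we drop constants: for large n, only the n term matters.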

**1)** O(1) — Constant Time

Constant time means the running time does not grow with the input size. Basic operations (arithmetic, comparisons, accessing an array element, assignment) take constant time.

Example:

```c
read(x);                  // O(1)
a = 10;                   // O(1)
a = 1000000000000000000;  // O(1)
```

**2)** O(n) — Linear Time

When an algorithm receives an input of size n, it performs on the order of n operations.

Consider the following example: a linear search for an element. It has a time complexity of O(n) because, in the worst case, it goes through the loop n times.

```csharp
int find = 66;
var numbers = new int[] { 33, 435, 36, 37, 43, 45, 66, 656, 2232 };

for (int i = 0; i < numbers.Length; i++) {  // Length - 1 would skip the last element
    if (find == numbers[i]) {
        return;
    }
}
```

**3)** O(log n) — Logarithmic Time

An algorithm with running time O(log n) is considerably faster than one with O(n). Commonly, such an algorithm repeatedly divides the problem into a subproblem of a constant fraction of the size. Examples: the binary search algorithm, the binary conversion algorithm.
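A minimal binary search sketch in Python (the function name and sample data are my own illustration) shows where the log n comes from: each iteration halves the remaining range.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1.

    Each iteration halves the search range, so the loop runs
    at most about log2(n) times: O(log n).
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1
```

Note that this only works because the input is sorted; that is what lets half the elements be discarded per comparison.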

**4)** O(n log n) — Linearithmic Time

This running time is often found in “divide and conquer” algorithms, which divide the problem into subproblems recursively and then merge the results in linear time per level. Example: the Merge Sort algorithm.
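A minimal Python sketch of merge sort shows both halves of the analysis: about log n levels of splitting, and an O(n) merge at each level.

```python
def merge_sort(items):
    """Sort a list in O(n log n): log n levels of halving,
    with an O(n) merge of the sorted halves at each level."""
    if len(items) <= 1:
        return items
    # Divide: split into two halves and sort each recursively.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Conquer: merge the two sorted halves in linear time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```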

**5)** O(n²) — Quadratic Time

The Bubble Sort algorithm is a classic example. In general, quadratic time means a loop inside a loop:

```c
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
        statement;  // executed N * N times
    }
}
```

**6)** O(n³) — Cubic Time

It follows the same principle as O(n²), with one more level of nesting.
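A typical cubic-time example is naive multiplication of two n × n matrices: three nested loops over n. A minimal Python sketch (my own illustration, not from the article):

```python
def mat_mul(a, b):
    """Naive multiplication of two n x n matrices.

    Three nested loops over n give n * n * n multiplications: O(n^3).
    """
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result
```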

**7)** O(2^n) — Exponential Time

It gets very slow as the input grows: if n = 1,000,000, T(n) would be 2^1,000,000. Brute-force algorithms that try every subset of the input have this running time.
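For instance, a brute-force answer to the subset-sum question checks all 2^n subsets. A minimal Python sketch (the problem choice is mine, as an illustration of exponential blow-up):

```python
from itertools import combinations

def subset_sum_exists(numbers, target):
    """Brute force: try every subset of numbers.

    An n-element list has 2^n subsets, so this is O(2^n).
    """
    n = len(numbers)
    for size in range(n + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return True
    return False
```

Doubling the input length doubles nothing linear; it squares the number of subsets to check, which is why such algorithms become unusable beyond small n.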

**8)** O(n!) — Factorial Time

It’s the slowest of them all; algorithms that try every possible ordering of n items run in factorial time.
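A classic factorial-time example is the brute-force travelling salesman: trying all orderings of the cities. A minimal Python sketch (the example and names are mine):

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force travelling salesman on a distance matrix.

    Fixing the start city and trying all orderings of the rest
    examines (n-1)! routes, so the running time is O(n!).
    """
    cities = range(1, len(dist))  # fix city 0 as the start
    best = float("inf")
    for order in permutations(cities):
        route = (0,) + order + (0,)  # round trip back to the start
        length = sum(dist[a][b] for a, b in zip(route, route[1:]))
        best = min(best, length)
    return best
```

Even at n = 20 this means examining over 10^17 routes, which is why n! growth is treated as hopeless for all but tiny inputs.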

The graph above plots the number of operations performed by the processor against the number of input elements, which shows at a glance which running time is fastest. The curve with the steepest gradient takes the most time to execute, while the one with the shallowest gradient is the fastest: as the input to a shallow-gradient algorithm grows, the number of operations performed barely increases, whereas with a steep gradient even a small input produces an enormous number of operations.