Time complexity and judging by constraints

Sathian
FACE | Amrita Bangalore
4 min read · Jun 6, 2021

An algorithm’s effectiveness is mostly decided by the time and space it uses, so let’s take a closer look at time complexity.

So, what really is time complexity? Is it just the time taken to run the program? If it were, the terminal already tells us something like “This program finished in 0.0005 seconds”, so why would we need to learn this separately?

Every one of us may work on a different computer with different properties, and running time also depends on things like the processor and the OS. So we need a machine-independent notion of “time taken” that works at various levels, such as comparing algorithms. Time complexity therefore measures how the number of operations an algorithm performs grows with the length of the input, rather than wall-clock time.

To be more precise on the definition: time complexity is the amount of time taken by an algorithm to run, expressed as a function of the length of the input. It counts how many times each statement of the algorithm executes.

If you are into competitive programming, then time and space complexity are topics that cannot be ignored. While solving problems on various sites, the most difficult task is often writing the code within the required complexity; otherwise the program gets a TLE (Time Limit Exceeded) verdict.
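A useful rule of thumb on most judges (an approximation, not a guarantee) is that roughly 10⁸ simple operations run per second. So if the constraints say n ≤ 10⁵ with a one-second limit, an O(n²) solution needs about 10¹⁰ operations, around a hundred seconds, and is sure to TLE, while an O(n log n) solution needs only about 1.7 × 10⁶ and fits easily. Judging the required complexity from the constraints before coding is exactly the skill this article builds toward.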

Asymptotic Notation

Before getting to finding the time complexity we need to learn the basics of asymptotic notations. Asymptotic notations let us examine an algorithm’s running time by describing how its performance grows as the input size, n, increases. Three of them are need-to-know: Big O, Big Theta, and Big Omega.

(i) Big O → an upper bound on the running time; informally, the worst-case scenario of the program.

(ii) Big Theta → a tight bound; informally, the average-case scenario of the program.

(iii) Big Omega → a lower bound; informally, the best-case scenario of the program.

Remembering this, for now, is sufficient for this article; the formal definitions of the three notations are a topic of their own.

The time complexity of an algorithm is most commonly expressed using big O notation, since performance varies across inputs of the same size. We usually quote the worst-case time complexity, because that is the maximum time the algorithm can take for any input of a given size.
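To see why the worst case is the usual yardstick, consider linear search. A minimal sketch (the function name and sample array are illustrative):

#include <stdio.h>

/* Returns the index of key in arr[0..n-1], or -1 if it is absent. */
int linear_search(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;   /* best case: key at index 0, one comparison */
    }
    return -1;          /* worst case: key absent, n comparisons */
}

int main() {
    int arr[] = {4, 8, 15, 16, 23, 42};
    printf("%d\n", linear_search(arr, 6, 16)); /* prints 3 */
    return 0;
}

If the key happens to sit at the first index, the search finishes after one comparison, yet we still call the algorithm O(n): the bound has to hold for every possible input.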

How to find the time complexity of a program?

Finding time complexity comes down to counting how many operations the algorithm performs with respect to the input size n and deriving an equation for that count. From that equation we derive the best, worst, and average cases. With practice, you can often read the time complexity straight off the algorithm.

T(n) represents the operation count as a function of the input size. To derive Big O from T(n), follow these steps:

Let’s work through an example: T(n) = 4n² + 3n + 6.

(i) Ignore the coefficients. → T(n) = n² + n + 1

(ii) Ignore every term except the one with the highest growth rate, according to the ordering below. → T(n) = n²

n! > kⁿ > n³ > n² > n log n > n > log n > 1

Big O is O(n²) for this equation.
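As a concrete illustration, here is a sketch of a function whose statement count has roughly this shape (the demo function is hypothetical, and the exact constants depend on how you count statements):

#include <stdio.h>

void demo(int n) {
    int count = 0;                   /* constant work */
    for (int i = 0; i < n; i++)      /* outer loop: n iterations */
        for (int j = 0; j < n; j++)  /* inner loop: n iterations each */
            count++;                 /* executed n * n times */
    for (int i = 0; i < n; i++)      /* a separate linear pass */
        count++;                     /* executed n times */
    printf("%d\n", count);           /* prints n² + n, e.g. 110 for n = 10 */
}

int main() {
    demo(10);
    return 0;
}

Counting statements gives T(n) ≈ n² + n + c; dropping coefficients and lower-order terms leaves O(n²), exactly as in the worked equation above.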

Let’s take an example:

#include <stdio.h>

int main() {
    printf("Hello World!");
    return 0;
}

T(n) = 1; therefore, it’s O(1).

This program is an example of constant time. An algorithm is said to run in constant time, with order O(1), when its running time does not depend on the input size n. Irrespective of n, the runtime is always the same.

#include <stdio.h>

int main() {
    int n;
    scanf("%d", &n);            /* read the input size */
    for (int i = 0; i < n; i++) {
        printf("%d", i);        /* executed once per iteration, n times */
    }
    return 0;
}

The for loop runs n times, executing every statement inside it on each of those n iterations.
Hence T(n) = 1 + n, and the worst-case time complexity is O(n).

This program is an example of linear time complexity. An algorithm is said to have linear time complexity when its running time increases linearly with the length of the input. When a function has to touch every value in the input data, it has time complexity of order O(n).

#include <stdio.h>

int main() {
    int n;
    scanf("%d", &n);        /* read the starting value */
    while (n > 0) {
        n = n / 2;          /* halve n on every iteration */
        printf("%d", n);
    }
    return 0;
}

T(n) = 1 + log(n); therefore, the time complexity is O(log n).

This program is an example of logarithmic time. An algorithm is said to have logarithmic time complexity when it reduces the size of the input data in each step. The number of operations is therefore not proportional to the input size; it grows far more slowly, because each step cuts the remaining work down by a constant factor.
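Binary search over a sorted array is the classic algorithm with this behaviour: every comparison throws away half of the remaining range. A minimal sketch (names are illustrative):

#include <stdio.h>

/* Returns the index of key in the sorted array arr[0..n-1], or -1 if absent. */
int binary_search(const int arr[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* midpoint without overflow */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            lo = mid + 1;              /* discard the left half */
        else
            hi = mid - 1;              /* discard the right half */
    }
    return -1;                         /* at most about log₂(n) + 1 iterations */
}

int main() {
    int arr[] = {1, 3, 5, 7, 9, 11};
    printf("%d\n", binary_search(arr, 6, 7)); /* prints 3 */
    return 0;
}

Next, let’s look at what happens when loops are nested: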

#include <stdio.h>

int main() {
    int n;
    scanf("%d", &n);                /* read the input size */
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            printf("%d", i);        /* executed n * n times */
        }
    }
    return 0;
}

T(n) = 1 + n²; therefore, it’s O(n²).

This program is an example of quadratic time. An algorithm is said to have quadratic time complexity when its running time grows non-linearly, as n², with the length of the input. Nested loops typically fall under this order: a single loop takes O(n), and a loop within that loop gives O(n) · O(n) = O(n²).

Graph of Big O time complexities w.r.t. input size and run time

Similarly, if there are m nested loops in the function, the order is O(nᵐ); complexities of this family are called polynomial time complexities.
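For instance, with m = 3 nested loops the count is n · n · n, i.e. O(n³). A minimal sketch:

#include <stdio.h>

int main() {
    int n = 4;
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                count++;          /* executes n * n * n times */
    printf("%ld\n", count);       /* prints 64 for n = 4 */
    return 0;
}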

While there exist many other types of time complexities, these are the important ones that need to be kept in mind while deriving efficient solutions.
