Algorithmic Complexity

Pooja Nanda
Published in CS through my eyes
Feb 26, 2018

The Setting

  1. When we write a piece of code, a program, or software, we care about its efficiency. We want it to be fast and to consume fewer resources, because faster is obviously better here.
  2. We never compute the actual running time of an algorithm, because it differs across machine configurations. For example, one computer might have more cores or a faster processor, so the same algorithm will run faster on it.
  3. To solve the above problems, we do asymptotic analysis of algorithms. The idea is to describe the performance of an algorithm in terms of its input: we calculate how the time taken grows with the input size, rather than the actual running time.
  4. We use Big O notation for the worst case, Theta for the average case, and Omega for the best case (more on this here: https://www.geeksforgeeks.org/analysis-of-algorithms-set-2-asymptotic-analysis/).

Important Points To Remember

  1. Let’s define a machine with a 32-bit architecture and a single processor. It takes 1 unit of time for arithmetic and logical operations, and 1 unit of time for assignment and return.

Ex.

int sum(int a, int b) {
    return a + b;
}

Here a + b, being an arithmetic operation, takes 1 unit of time, and the return takes 1 unit of time.

So the program takes 2 units of time. Hence this is a constant-time algorithm: whatever the input, it takes a constant number of time units.

Tsum = 1 + 1 = 2 units

2. We usually calculate the worst-case complexity (Big O) of an algorithm. There are also average and best cases, but the worst case tells us the longest the algorithm can run.

eg. Let’s take the example of finding a number in a given list of numbers. The worst case is when the number is not present in the list, because then we have to compare every number in the list with the number we are searching for.

Suppose we want to find 5 in the list [1, 3, 7, 9, 0] of size 5. We compare 5 with every number to check whether it is equal. Since we make 5 comparisons, the cost is 5. Applied to a list of size n, we need n comparisons, so the complexity is n.

The best case is when we find the number at the first position; then it takes only 1 comparison.
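
To make this concrete, here is a minimal linear search sketch in C (the list values and the helper name linear_search are just for illustration):

#include <stdio.h>

/* Returns the index of target in arr, or -1 if it is not present.
   Worst case: target is absent, so all n elements are compared -> n comparisons.
   Best case: target is the first element -> 1 comparison. */
int linear_search(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target)
            return i;
    }
    return -1;
}

int main(void) {
    int list[] = {1, 3, 7, 9, 0};
    printf("%d\n", linear_search(list, 5, 5)); /* -1: worst case, 5 comparisons */
    printf("%d\n", linear_search(list, 5, 1)); /*  0: best case, 1 comparison */
    return 0;
}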

3. As the input grows large, we drop the constants, because adding, subtracting, or dividing by a small constant makes little difference to a very large number.

For example, O(1 + n/2 + 100) is simply called O(n) after dropping the constants.

4. Also, we drop the less significant terms, because they become insignificant for larger inputs.

int a = 0, b = 0;
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        a = a + j;
    }
}
for (int k = 0; k < N; k++) {
    b = b + k;
}

Here the complexity of the first, nested loop is N², and the complexity of the second loop is N.

So O(N² + N) = O(N²), since we drop the less significant term N.

Examples

1.

int j = 0;
for (int i = 0; i < n; ++i) {
    while (j < n && arr[i] < arr[j]) {
        j++;
    }
}

We always take the worst-case scenario, so the worst case is when i runs till n. Note that j is never reset between iterations of the outer loop, so the while loop increments j at most n times in total. Each check of the while condition involves two comparisons, j < n and arr[i] < arr[j]. Hence the total work is about n(1 + 1) = 2n => O(n).

2. When we have a nested loop, we calculate how many times the inner loop runs based on the value of the outer loop variable.

int a = 0;
for (int i = 0; i < N; i++) {
    for (int j = N; j > i; j--) {
        a = a + i + j;
    }
}

For a given value of i, the inner loop runs N - i times (j goes from N down to i + 1).

Total number of runs of j = N + (N - 1) + (N - 2) + … + 1

= N * (N + 1) / 2

= 1/2 * N² + 1/2 * N

which is O(N²).

3. Find the time complexity of the following code.

int count = 0;
for (int i = N; i > 0; i /= 2) {
    for (int j = 0; j < i; j++) {
        count += 1;
    }
}

In the first iteration of the outer loop, i = N, so the j loop runs N times.

In the second iteration, i = N / 2, so the j loop runs N / 2 times.

In general, after i has been halved k times, the j loop runs N / 2^k times.

So, the total number of runs of the inner loop = N + N / 2 + N / 4 + … + 1

= N * (1 + 1/2 + 1/4 + 1/8 + …) < 2 * N, since the geometric series in the brackets sums to less than 2

= O(N)

4. Find the time complexity of the following code.

int i, j, k = 0;
for (i = n/2; i <= n; i++) {
    for (j = 2; j <= n; j = j * 2) {
        k = k + n/2;
    }
}

Approach: see how fast j grows each time i increases by 1.

Let’s assume n = 8 for now and look at the values j takes for each value of i.

i = 4, j = 2, 4, 8

i = 5, j = 2, 4, 8

i = 6, j = 2, 4, 8

i = 7, j = 2, 4, 8

i = 8, j = 2, 4, 8

For each value of i, j grows by doubling, so the inner loop runs log(N) times.

i grows linearly from n/2 to n, so the outer loop runs O(N) times.

Therefore, the complexity is O(N log N).
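
As a quick sanity check (my own sketch, not part of the original analysis), we can count how many times the inner statement runs for a few values of n and compare it against (n/2 + 1) * log2(n), which grows as O(N log N):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Count the inner-loop iterations of the code above for a few
       power-of-two input sizes and compare with (n/2 + 1) * log2(n). */
    for (int n = 8; n <= 1024; n *= 2) {
        long count = 0;
        for (int i = n / 2; i <= n; i++) {
            for (int j = 2; j <= n; j = j * 2) {
                count++;
            }
        }
        printf("n = %4d  iterations = %6ld  (n/2 + 1) * log2(n) = %.0f\n",
               n, count, (n / 2.0 + 1) * log2(n));
    }
    return 0;
}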
