Know your algorithm’s complexity

Nelson Rodrigues
3 min read · Sep 2, 2018


“gray and white optical illusion” by Brandon Mowinkel on Unsplash

Why bother with performance these days, when hardware keeps getting better and better?

Lots of reasons …

  • Quick response: we live in a busy era; nobody likes to wait for a task to finish
  • Energy consumption: vital for mobile devices and large computing grids… Mother Earth says thanks!
  • Resource management: today multiple applications run in parallel and in the background; if our application wastes all the resources, we are doomed

Where can we improve our applications so they are smarter about resource consumption?

Produce better algorithms!

Algorithms are everywhere: firmware, device drivers, operating systems, databases, network protocols… cars, planes, artificial satellites…

How can we do that?

  • make it simple!
  • choose adequate data structures so the algorithm runs faster
  • do the algorithm’s complexity math to see where it potentially wastes the most resources, e.g. using Big-Oh

Big-Oh

Big-Oh notation describes how an algorithm’s running time or space requirements grow as the input size grows.

How do we calculate it?

Constant

O(1) — constant time: executes in the same amount of time regardless of input size.

Growth rate: 1

i++
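To make the idea concrete, here is a minimal sketch (the class, method, and array values are illustrative, not from the article): indexing into an array is O(1) because it is a single lookup, no matter how large the array is.

```java
public class ConstantTime {
    // Accessing an element by index is O(1): one memory lookup,
    // regardless of how large the array is.
    static int first(int[] values) {
        return values[0];
    }

    public static void main(String[] args) {
        int[] small = {7, 2, 9};
        int[] large = new int[1_000_000];
        large[0] = 42;
        // Both calls do the same constant amount of work.
        System.out.println(first(small)); // 7
        System.out.println(first(large)); // 42
    }
}
```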

Linear

O(n) — performance grows linearly with the size of the input data.

Growth rate: n

for (int i = 0; i < n; i++)
{
    r += 1;
}

The loop executes N times, so the sequence of statements also executes N times. If we assume the statements are O(1), the total time for the for loop is N * O(1), which is O(N) overall.

Logarithmic

O(log(n)) — the running time is proportional to the number of times n can be divided by 2, e.g. the typical binary search.

Growth rate: log(n)

for (int i = n; i > 1; i = i / 2)
{
    // sequence of statements
    r += 1;
}
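The binary search mentioned above is the classic O(log(n)) example: each comparison halves the remaining range, so a sorted array of n elements needs at most about log2(n) + 1 comparisons. A minimal sketch (class and method names are mine, not from the article):

```java
public class BinarySearchDemo {
    // Returns the index of target in a sorted array, or -1 if absent.
    // Each iteration halves the search range: O(log n).
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(data, 7)); // 3
        System.out.println(binarySearch(data, 4)); // -1
    }
}
```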

Quadratic

O(n²) — performance grows proportionally to the square of the input size.

// Double loop (nested loops)
for (int c = 0; c < n; c++)
{
    for (int i = 0; i < n; i++)
    {
        // sequence of statements
        a += 1;
    }
}

Growth rate: n²

The outer loop executes N times. Every time the outer loop executes, the inner loop executes M times. As a result, the statements in the inner loop execute a total of N * M times. Thus, the complexity is O(N * M). In a common special case where the stopping condition of the inner loop is J < N instead of J < M (i.e., the inner loop also executes N times), the total complexity for the two loops is O(N²).

from: Data Structures and Algorithms in Java

e.g. Quicksort in its worst case is O(n²); on average, Quicksort and Mergesort run in O(n log n).

Exponential

O( 2^n )

The running time doubles with each additional element of input.

e.g. generating all possible candidates to find a value, such as brute-forcing a key.
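As a sketch of that brute-force idea (the class and numbers below are illustrative, not from the article): trying every possible n-bit key means up to 2^n candidates, so adding one bit to the key doubles the search space.

```java
public class BruteForceKey {
    // Tries every value in [0, 2^n) until one matches `secret`.
    // The candidate space is 2^n, so this is O(2^n).
    static long crack(long secret, int nBits) {
        long limit = 1L << nBits; // 2^n candidates
        for (long guess = 0; guess < limit; guess++) {
            if (guess == secret) return guess;
        }
        return -1; // secret was not an n-bit value
    }

    public static void main(String[] args) {
        System.out.println(crack(5, 4));   // 5   (at most 16 candidates tried)
        System.out.println(crack(200, 8)); // 200 (at most 256 candidates tried)
    }
}
```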

How do we calculate the complexity of code that calls other functions?

When a statement involves a function call, the complexity of the statement includes the complexity of the function. Assume that you know that function f takes constant time, and that function g takes time proportional to (linear in) the value of its parameter k. Then the statements below have the time complexities indicated.

f(k); // O(1)
g(k); // O(k)
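Following that rule, a loop that calls a linear function multiplies the costs: n iterations, each doing O(k) work, give O(n * k) in total. A minimal sketch with hypothetical functions (the operation counters are mine, added to make the cost visible):

```java
public class CallCost {
    // g is O(k): returns the number of basic operations it performed.
    static long g(int k) {
        long ops = 0;
        for (int i = 0; i < k; i++) ops++;
        return ops;
    }

    // A loop that calls g(k) n times performs n * k operations: O(n * k).
    static long loopCallingG(int n, int k) {
        long total = 0;
        for (int i = 0; i < n; i++) total += g(k);
        return total;
    }

    public static void main(String[] args) {
        System.out.println(g(5));                // 5 operations: O(k)
        System.out.println(loopCallingG(10, 5)); // 50 operations: O(n * k)
    }
}
```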

from: Data Structures and Algorithms in Java

Big Theta and Big Omega

“… Big O, big Theta and big Omega are sets of functions. Big O is giving upper asymptotic bound, while big Omega is giving a lower bound. Big Theta gives both.”

From: What exactly does big Ө notation represent?
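For reference, the standard formal definitions behind that quote can be written as:

```latex
f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c \cdot g(n)

f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c \cdot g(n)

f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n))
```

In words: Big O bounds the growth of f from above by a constant multiple of g, big Omega bounds it from below, and big Theta holds when both bounds apply.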

References

Algorithms Complexity — Github repository

Big O notation — wikipedia

Data Structures and Algorithms in Java

Understanding the formal definition of Big-O

Big O Cheatsheet — Data structures and algorithms with their complexities
