Algorithm Analysis — Big O Notation

Nemanja Žunić
Published in Java Vault
6 min read · May 25, 2018

In software development, one problem can have multiple solutions. For example, sorting an array of numerical values:

int[] numbers = {5, 1, 4, 2, 3};

There are many algorithms that solve this problem, meaning they all return a sorted collection of numbers (I’ll talk about sorting algorithms in future posts).

But while all sorting algorithms do the same thing correctly (they sort), some of them are more efficient than others: they sort in less time, or use less memory while doing so.

So how can we measure this? Wouldn’t it be great if we had a way to calculate the efficiency of any two algorithms? We could compare them and pick the better solution to use in our application.

Time Measuring

One way we could calculate the efficiency of an algorithm would be to track the time it takes to execute.

Let’s for a moment imagine we have two sorting algorithms:

sort1(int[] array) and sort2(int[] array).

We could calculate their execution time like this:
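A minimal sketch using System.nanoTime() (the bodies of sort1 and sort2 are assumed to be implemented elsewhere):

int[] numbers1 = {5, 1, 4, 2, 3};
int[] numbers2 = {5, 1, 4, 2, 3};

long start1 = System.nanoTime();
sort1(numbers1);
long result1 = System.nanoTime() - start1;   // execution time of sort1, in nanoseconds

long start2 = System.nanoTime();
sort2(numbers2);
long result2 = System.nanoTime() - start2;   // execution time of sort2, in nanoseconds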

result1 and result2 would tell us which algorithm took less time to execute and we would know which one to use.

Time Measuring Drawbacks

One major drawback of this method is that we have to implement and execute both algorithms in order to measure their execution time and compare them.

The second major drawback is that this method doesn’t tell us how our algorithm behaves for different parameter sizes. In our example, we had an array with 5 elements as the sole parameter. But how would the algorithm behave if we passed it an array with 1 element? How would it behave if we passed it an array with 1 000 000 elements? Would sorting an array with 1 000 000 elements using the sort1 algorithm be 1 000 000 times slower than sorting an array with 1 element?

Another drawback is that this method is very hardware-dependent. Running our sorting algorithms on a new iMac Pro and on an 8-year-old smartphone will give us different execution times. And we don’t want to take hardware into consideration when analyzing the efficiency of our algorithms.

Instruction Counting

The solution to all the drawbacks of Time Measuring is a change of method. Instead of measuring the execution time of an algorithm, we can count the number of instructions the algorithm has to execute, and say that each instruction takes some constant amount of time to execute.

Here is how we do the instruction counting on this example algorithm:
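Something like the following sketch (the method name printNumbers is just illustrative), with the simplified instruction count in the comments:

void printNumbers(int[] array) {
    System.out.println("Printing the array:");    // 1 instruction
    int n = array.length;                          // 1 instruction
    for (int number : array) {
        System.out.println(number);                // n * 1 instructions
    }
    System.out.println("Done.");                   // 1 instruction
}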

We can see that the algorithm executes 1 + 1 + n*1 + 1 = 3 + n instructions, where n is the number of elements in the array.

The number of instructions that this algorithm executes depends only on n, the size of the array parameter. If we were to display this dependency as a graph, we would get a straight line: the instruction count grows linearly with n.

We didn’t need to execute any code to calculate the number of instructions, we can clearly see how the execution time depends on the input array size, and hardware has no effect on the instruction count.

Big O Notation

Big O Notation is a method we use to describe the worst-case scenario of an algorithm’s execution.

As developers, we want to plan for the worst case. We don’t want to assume that things will always go smoothly and that the input parameters will be ‘ideal’.

Say we have some sort1(int[] array) algorithm. When we pass it an already sorted array, sort1(new int[] {1, 2, 3, 4, 5});, it might execute fairly quickly because no sorting needs to take place. But that’s not the case we want to count on. We are more interested in the worst possible execution time, for example when we pass an array that is sorted in descending order: sort1(new int[] {5, 4, 3, 2, 1});. It might take a while to sort it into {1, 2, 3, 4, 5}.

Calculating Big O

To calculate the Big O of an algorithm, we use the Instruction Counting method with a few additional rules.

Rule 1. When input size doesn’t matter: O(1)

When the input size has no effect on the number of instructions an algorithm executes, we say that the algorithm has Constant Complexity and write it as O(1).

Here is an example algorithm:
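A sketch of what printSize might look like (the exact body is an assumption, using java.util.List), with the instruction count in the comments:

void printSize(List<Integer> list) {
    int size = list.size();          // 1 instruction
    System.out.println(size);        // 1 instruction
}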

We can see that the number of elements in the list parameter doesn’t affect the number of instructions the algorithm has to execute: no matter the size of the list parameter, the printSize function will always execute 2 instructions.

So function printSize has Constant Complexity or O(1).

Rule 2. Constants don’t matter

When calculating Big O complexity for an algorithm, we don’t care about constants.

Let’s count the instructions for the next example:
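A sketch that matches the count below (the method name printAllPairs is hypothetical):

void printAllPairs(int[] numbers) {
    int n = numbers.length;                         // 1 instruction
    System.out.println("Printing pairs, twice:");   // 1 instruction

    for (int a : numbers) {
        for (int b : numbers) {
            System.out.println(a + ", " + b);       // n * n * 1 instructions
        }
    }

    for (int a : numbers) {
        for (int b : numbers) {
            System.out.println(b + ", " + a);       // n * n * 1 instructions
        }
    }
}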

We counted: 2 + n*n*1 + n*n*1 = 2 + n^2 + n^2 = 2 + 2*(n^2) instructions.

If we use the rule that constants don’t matter, we can drop the 2 + 2* part, and we are left with O(n^2) complexity.

Rule 3. Count Instructions in the Worst-Case Scenario

As I already explained, when calculating Big O, we are interested in the worst-case scenario.

Let’s calculate instructions of this example code:
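A sketch of printBiggerThanFive (the exact body is an assumption based on the description below):

void printBiggerThanFive(int[] numbers) {
    for (int number : numbers) {
        if (number > 5) {
            System.out.println(number);   // executed between 0 and n times
        }
    }
}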

In this example, the if (number > 5) conditional affects the number of instructions the algorithm executes.

If we called printBiggerThanFive with an array that contains all zeros, printBiggerThanFive(new int[] {0, 0, 0, 0, 0});, our algorithm would execute System.out.println(number) zero times.

If all the numbers were > 5, our algorithm would execute System.out.println(number) n times. That is the worst case we are looking for when calculating Big O.

We can conclude that this example algorithm has O(n) complexity.

Rule 4. Only keep the n of the highest order

Since we are assuming the worst-case scenario, we can expect that the size of the input array will be really really really large. In that case, all non-dominant terms in the number of instructions lose their significance.

Let’s look at this example:
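A sketch that produces the count below (the method name printPairsThenSingles is hypothetical):

void printPairsThenSingles(int[] numbers) {
    for (int a : numbers) {
        for (int b : numbers) {
            System.out.println(a + ", " + b);   // n * n = n^2 instructions
        }
    }

    for (int a : numbers) {
        System.out.println(a);                  // n instructions
    }
}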

We counted: n^2 + n instructions.

If numbers array had: n = 10 elements, our algorithm would execute: 10^2 + 10 = 100 + 10 instructions.

If numbers array had: n = 100 elements, our algorithm would execute: 100^2 + 100 = 10 000 + 100 instructions.

We can see that n starts to lose significance compared to n^2 as n grows.

So we can drop n altogether and keep only n^2, because it is of the higher order: this algorithm has O(n^2) complexity.

Conclusion

First, I wanted to share this website that lists the complexities of the most common algorithms, so that you guys have a quick reference when you need it: http://bigocheatsheet.com/

Second, my stories passed the 500 reads mark, so I wanted to thank all of you guys who follow and read my stuff, it’s all greatly appreciated! 🍺🎊
