Big O Notation — Time & Space Complexity in Ruby!

Mendoza Bridget
7 min read · Oct 21, 2019


Big O Cheat Sheet

In this post, we will look at Big O notation in terms of both time and space complexity. All our examples will use Ruby.

What is Big O Notation?

Big O notation is just a fancy way of describing how your code's performance depends on the amount of data it's processing. It lets us express the worst possible runtime of an algorithm, that is, how long the algorithm takes to complete. In other words, it tells us how much the algorithm will slow down as the input size grows.
It's usually discussed in two ways: time complexity (how long an algorithm takes) and space complexity (how much space an algorithm uses).
Let me put this graph here so we can refer to it as we go; it shows the complexities compared against each other.

Big O complexity chart

Time Complexity

Before we try wrapping our heads around the different Big O notations, here is a nice table that I am borrowing to show how speed slows down as the dataset grows. I found the emojis a good visual aid.

Big O algorithms breakdown

O(1)

O(1) is referred to as constant time. Algorithms with constant time do not scale with the input size; they take the same number of steps no matter how large the input. Here are a couple of functions that run in constant time.

Examples of Constant Time Functions

add simply returns the sum of its two arguments. Whether it's 1 + 1 or 100 + 500, this is constant time because it always takes the same number of steps to complete.
value_at takes an array and a position and returns the value at that position; if there is no value there, it returns an error. Regardless of the size of the array, since all we are doing is picking out a single value, this is constant time.
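A minimal sketch of what these two functions might look like (the exact error value returned by value_at is my assumption):

```ruby
# Constant time: always one addition, regardless of the operands.
def add(a, b)
  a + b
end

# Constant time: a single array index lookup, regardless of array size.
# Returning an error string for an empty slot is an assumption here.
def value_at(array, position)
  return "Error: no value at that position" if array[position].nil?
  array[position]
end
```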

We usually write constant time as O(1), but any constant number would still be constant time. We normalise the number to 1 because it doesn't matter how many steps it takes; what matters is that it takes a constant number of steps.

O(log n)

O(log n) is referred to as logarithmic time. For this let’s assume we have an array with ordered elements and we are trying to find one element. How do we find it?
1. Loop through all the elements until we find it. This works, but can take a while depending on where the element is. Finding 1 in an array of the numbers 1–100 is quick, but finding 65 takes a while.
2. Binary search, the search is narrowed down by repeatedly halving the dataset until you find the target value.
Binary Search has logarithmic time. Here is a gif to show what happens in a binary search.

Binary Search

This is what a binary search looks like in Ruby. If you want to run this code to see what it does, here you go.

Binary Search
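One way the binary search above might be written in Ruby, assuming a sorted array and returning the index of the target (or nil if it isn't there):

```ruby
# Repeatedly halve the search range until the target is found.
# Assumes the input array is sorted in ascending order.
def binary_search(array, target)
  low = 0
  high = array.length - 1
  while low <= high
    mid = (low + high) / 2
    return mid if array[mid] == target
    if array[mid] < target
      low = mid + 1   # target is in the upper half
    else
      high = mid - 1  # target is in the lower half
    end
  end
  nil # target not present
end
```

Because the range halves on every step, an array of 100 elements needs at most about 7 comparisons.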

O(n)

O(n) is described as linear time. This means the running time increases in proportion to the dataset size.
Let's take the example of an array whose items we have to sum. That would look something like this.

sum function
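A sketch of a linear-time sum function along these lines:

```ruby
# Linear time: one pass over the array, so the work grows with its length.
def sum(array)
  total = 0
  array.each { |n| total += n }
  puts total
  total
end
```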

sum simply takes an array, loops through each element adding everything up, then prints the sum. An array of 10 items would definitely be faster than an array of 1 million items. I have an example set up to show the timing, right here.
A lot of code that iterates over arrays is O(n).

O(n log n)

O(n log n) is linearithmic time. An example of this is merge sort. Say we have an unsorted array that we want organised in ascending order. A merge sort recursively breaks this large array down into small arrays of two elements or fewer, sorts those, and then merges them so that the elements end up in ascending order.
Here is a gif to show what happens in a merge sort

Merge Sort

The function below shows how a merge sort can be implemented.
To run this, here is the link
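A minimal sketch of how that merge sort might be implemented:

```ruby
# Recursively split the array in half (the log n part),
# then merge the sorted halves back together (the n part).
def merge_sort(array)
  return array if array.length <= 1
  mid = array.length / 2
  left = merge_sort(array[0...mid])
  right = merge_sort(array[mid..-1])
  merge(left, right)
end

# Merge two sorted arrays into one sorted array.
def merge(left, right)
  result = []
  until left.empty? || right.empty?
    result << (left.first <= right.first ? left.shift : right.shift)
  end
  result + left + right # append whatever remains
end
```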

O(n²)

O(n²) is quadratic time: 10 times the data takes roughly 100 times longer. A bit of math here: n² is the same as n*n, and since we just saw that a for loop is O(n), n² simply means you have a nested for loop. A good example is bubble sort; here is a gif to show what happens in a bubble sort.

Bubble Sort

Here is what a bubble sort looks like in Ruby.
If you want to run the example, here is the link.

bubble_sort
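A sketch of the bubble sort, with the nested iteration that makes it O(n²):

```ruby
# Quadratic time: the inner pass over the array runs up to n times,
# swapping adjacent out-of-order pairs until a full pass makes no swaps.
def bubble_sort(array)
  arr = array.dup
  loop do
    swapped = false
    (arr.length - 1).times do |i|
      if arr[i] > arr[i + 1]
        arr[i], arr[i + 1] = arr[i + 1], arr[i]
        swapped = true
      end
    end
    break unless swapped
  end
  arr
end
```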

This also means that the deeper the nesting of for loops, the higher the power of n: a for loop with two nested for loops (three loops in total) gives O(n³), one with three nested for loops gives O(n⁴), etc.!
So be very careful when nesting those for loops!

O(2^n)

This is exponential time and it is the slowest of them all.
One example of exponential time is finding all the subsets of a set.
Let's assume we have a function called subsets that takes an array (which we can liken to our set) as an argument and returns all possible subsets: different combinations of the values, none the same as any other.

subsets

Here is what our subset function looks like in ruby
If you want to run the function, here is the link

subset function
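A compact sketch of how such a subsets function might look:

```ruby
# The output doubles with every extra element: for each existing subset
# we create a copy that also contains the new element. n elements give
# 2^n subsets, so the output grows exponentially.
def subsets(array)
  result = [[]] # the empty set is a subset of every set
  array.each do |element|
    result += result.map { |subset| subset + [element] }
  end
  result
end
```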

When the array has 2 elements we get an output of 4 subsets, which is 2²; when there are 3 elements at the input we get 8 at the output, 2³.
The output is growing exponentially.

How to calculate time complexity

We are going to use our existing knowledge to try and figure out what the complexity is for each of these cases. Remember we need to think, worst-case scenario!

Case 1
We have a sequential search function that loops through the array to find a specific element; Ruby's .each method is similar to a for loop.

sequential_search
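A sketch of the sequential search, returning the index of the target (or nil if it is missing, an assumption on my part):

```ruby
# Linear scan: in the worst case we visit every element once.
def sequential_search(array, target)
  array.each_with_index do |element, index|
    return index if element == target
  end
  nil # not found
end
```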

The best scenario would be
sequential_search(['orange'], 'orange')
But remember the worst case, meaning we could have
sequential_search(['here', 'a', 'b', 'c', 'd', 'e', 'orange', 'f', 'g'], 'orange')
We have to go through almost every element to find what we are looking for, so this is O(n).

Case 2
We have a list of movies, each with a rating attached. We want to find a movie, then use its rating to find other movies with a similar rating.
We could use a binary search to find the movie, as this is faster: O(log n). But that is just one part; we still need to find the other movies, so we go through the list again, maybe using our sequential function: O(n).
Doing one after the other gives O(n + log n), which simplifies to O(n). If instead we had to repeat the O(log n) search once for each of the n movies, that would be O(n log n). Not too bad.

We can now try to figure out our algorithms' complexities at a high level, now that we know:
halving => O(log n)
for loops => O(n)
nested for loops => O(n²)…

Here are a couple of links for a better understanding of how to calculate time complexity
- With discrete maths
- Some exercises with solutions

Space Complexity

Space complexity is the total amount of memory space used by an algorithm/program including the space of input values for execution.
Space complexity is often confused with auxiliary space. Auxiliary space is the temporary or extra space used by an algorithm.
In simple terms
Space Complexity = Auxiliary space + space used by the input values

Space complexity uses the same notation as time complexity so expect O(n), O(1) etc.
Let's look at this simple function in Ruby and try to figure out its space complexity.

simple sum function
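A sketch of the function being analysed, written with an explicit counter so the array and the two integers (sum and i) are visible:

```ruby
# Uses the input array plus two integer variables: sum and i.
def sum(array)
  sum = 0
  i = 0
  while i < array.length
    sum += array[i]
    i += 1
  end
  puts sum
  sum
end
```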

In the sum function, we have an array of 3 elements that we loop through, adding up the elements as we go, and when we are done we print the sum of the elements.
Now to figure out its space complexity. Before we dive in, it's important to know that a single integer takes O(1) space. Looking at the function, we have an array and 2 integers (sum and i).
An array can have any number of elements, n. Assuming 4 bytes per integer, the space occupied by the array is 4n bytes and our two integers occupy 8 bytes. The total space is then 4n + 8. Dropping the constant term and the coefficient leaves n, so the space complexity of our function is O(n): linear space.

Here is a Big O cheat sheet showing the different space and time complexities for different data structures and array sorting algorithms.
