Intro to Big O Notation

Denali Balser
Published in Geek Culture
Jul 28, 2021

Warp 1 or O(1)..?

Big O notation is used to categorize the run-time or space efficiency of algorithms as the input grows in size. It is a way to describe the performance and/or complexity of an algorithmic function. The Big O categorization for an algorithm is based upon the worst-case scenario. It is the ‘language’ (or perhaps terminology) that developers use when comparing the efficiency of different approaches to a problem. Personally, math has never been my strongest suit, so I will be using some coded examples to illustrate common Big O categories.

Background on Time & Space Complexity

Because the processing speed of individual computers can differ so much, it was clear that the efficiency of an algorithm could not be measured by the time it takes a computer to run it. Instead, we measure how quickly the runtime grows. To measure time complexity, we count the number of operations a computer needs to perform in order to complete the algorithm/function. To deduce space complexity, we measure the amount of memory required for the successful completion of the algorithm/function.
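To make “counting operations” concrete, here is a small illustrative sketch (the sumArray function is my own example, not from any library) that tallies the work a single linear pass performs:

```javascript
function sumArray(arr) {
  let total = 0            // 1 operation
  for (let i = 0; i < arr.length; i++) {
    total += arr[i]        // 1 operation per item, so n operations
  }
  return total             // 1 operation
}

sumArray([5, 6, 7, 1, 8, 2]) // roughly n + 2 operations in total
```

Big O ignores the constant 2 and keeps only the term that grows with the input, so this function is O(n) regardless of which machine runs it.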

Optimizing both time and space complexity requires knowledge of data structures and their attributes/characteristics — some are more efficient for search functionality while others are more memory-efficient for insertion purposes. I have written a post covering common data structures and will link it below:

To understand Big O notation, there is a key definition that you first must understand:

  • n — represents the size of the input to an algorithm within Big O.
Graphical Representation of Big O Notation

You can refer back to the above graph as you read further.

O(1)

O(1) is the gold standard. If an algorithm is scored O(1) time or space complexity, it means that regardless of the size of the input data set the execution time or memory requirement will stay the same — one operation or memory allocation. It is constant in both time and space.

function printFirst(arr) {
  return arr[0]
}

const nums = [5, 6, 7, 1, 8, 2]
printFirst(nums) // => 5

Regardless of the size of the array passed to printFirst, the function will only execute one operation, making it O(1).

O(n)

O(n) describes when the number of operations is the same as the number of items in the input. It is linear time.

function printAll(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i])
  }
}

const nums = [5, 6, 7, 1, 8, 2]
printAll(nums) // =>
// 5
// 6
// 7
// 1
// 8
// 2

The number of times the function prints a value depends upon the number of items in the input, i.e. the number of operations grows proportionally with n. Like the loop above, simple loops are O(n) time complexity.

Above, the number of operations performed to complete the function depends upon the input (n): on each iteration of the loop, the value of i is compared to the length of the input, i is incremented, and the value at index i is printed. This results in a variable number of operations based upon the number of loops the function has to perform to reach the end of the input (n).

O(n^2)

With O(n²), the number of operations performed to complete the function grows quadratically with the input. This occurs when an O(n) operation is nested within another O(n) operation (e.g. a nested loop), in which case the runtime will grow at a rate of n². O(n²) operations are quadratic time complexity.

function bubbleSort(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i; j < arr.length; j++) {
      if (arr[i] > arr[j]) {
        let temp = arr[i]   // swap the two values (switch is a reserved word)
        arr[i] = arr[j]
        arr[j] = temp
      }
    }
  }
  return arr
}

Here we have a representation of the bubble sort algorithm with O(n²) time complexity, as we have nested two loops. If arr has n items, the outer loop runs n times and the inner loop runs up to n times for each iteration of the outer loop, giving us roughly n² total operations.

If arr has 10 items, we have to loop roughly 100 times. If it has 1,000 items, we have to loop roughly 1,000,000 times.
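Because the inner loop starts at i, the exact count is closer to n²/2 than n², but constant factors are dropped in Big O. A small counter sketch (the countComparisons helper is my own illustration, not part of the sort) shows the quadratic growth:

```javascript
// Count how many comparisons the nested loops above perform for n items.
function countComparisons(n) {
  let count = 0
  for (let i = 0; i < n; i++) {
    for (let j = i; j < n; j++) {
      count++
    }
  }
  return count
}

console.log(countComparisons(10))  // => 55
console.log(countComparisons(100)) // => 5050
```

Growing the input 10x grew the work roughly 100x, which is exactly the n² behavior Big O captures.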

O(log n)

Logarithmic runtime, or O(log n), means that the runtime grows in proportion to the logarithm of the input size. A common example you can use to visualize O(log n) is looking someone up in a phone book. In order to do so you do not need to check every person in the book until you find the one you are looking for. Instead, you can divide-and-conquer: open the phone book in the middle, see what letter is on the page you land on, then search only the half of the book that contains the first letter of the last name you are searching for. You continue this pattern of dividing the remaining section in half until you have one page left and search through the names on that page.

This divide-and-conquer method is O(log n) because the size of the section you are searching through is halved with each division. The above phone book explanation is an example of a binary search. O(log n) is logarithmic time.
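The phone book process can be sketched as a binary search over a sorted array. This is a minimal illustrative implementation (the binarySearch function is my addition, assuming the array is sorted in ascending order):

```javascript
// Binary search: repeatedly halve the search range, like flipping
// to the middle of the phone book and discarding the wrong half.
function binarySearch(arr, target) {
  let low = 0
  let high = arr.length - 1
  while (low <= high) {
    const mid = Math.floor((low + high) / 2)
    if (arr[mid] === target) return mid  // found it
    if (arr[mid] < target) low = mid + 1 // search the right half
    else high = mid - 1                  // search the left half
  }
  return -1 // not found
}

binarySearch([1, 2, 5, 6, 7, 8], 7) // => 4
```

Each pass eliminates half of the remaining items, so even a million-item array takes at most about 20 comparisons.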

function logarithmic(n) {
  for (let i = 1; i < n; i *= 2) {
    let result = i // i doubles each pass, so the loop runs about log2(n) times
  }
}

In the above example, i doubles on every iteration, so the number of iterations will always be at most the log of the input size, and as such the worst-case time complexity is O(log n).

Full-stack software engineer experienced in Ruby on Rails, React, Redux, and JavaScript based programming.