Big O Notation

When I started to study web development, I was not concerned with efficiency; I was strictly focused on getting something to work. As my understanding of the web, and of computer science in general, grew, I realized that my applications could be quite terrible if I did not pay attention to their processing time, even if they worked! That processing time was later clarified for me, and now I am more aware of the time implications my code has on my application and the overall user experience. Now, I’d like to share what I’ve learned.

When someone asks you what the time complexity of a function is, they most likely want to know how the running time changes as the size of the input changes. The following five notations describe the most common growth rates:

  • O(1): Constant Time
  • O(n): Linear Time
  • O(log n): Logarithmic Time
  • O(n²): Quadratic Time
  • O(2^n): Exponential Time

Note: There are several more Big O notations that are not discussed here.

O(1): Constant Time

Constant time may be the easiest to conceptualize, so we will start here. See example code below:

var removeLastThreeElements = function(array) {
  var number = 3;
  for (var i = 0; i < number; i++) {
    array.pop();
  }
  return array;
};

Notice that the length of the array passed into the function will never affect how long the function takes to execute. The change in input has no effect on the change in time, as the loop is hard-coded to iterate three times. Therefore, this function runs in constant time.
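Another everyday example of a constant-time operation, as a quick sketch (the function name here is my own, not from any library): accessing an array element by index.

```javascript
// Indexing into an array is a single step, regardless of array length.
// The lookup takes the same time whether the array has 4 items or 4 million.
var getMiddleElement = function(array) {
  return array[Math.floor(array.length / 2)];
};
```

However large the array grows, the function still performs the same fixed amount of work.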

O(n): Linear Time

Linear time is where we first see the relationship between additional inputs and the additional time they add to an operation. See below:

var makeRange = function(array) {
  array.forEach(function(item) {
    console.log(item);
  });
};

The operation takes n steps for an array of n items. If you increase the array’s length by any number, the time it takes to execute the function increases at the same rate. Let’s look at the same function, with a small twist.

var makeRange = function(array) {
  array.forEach(function(item) {
    for (var i = 1; i < 10; i++) {
      console.log(item + i);
    }
  });
};

Notice that the inner loop’s condition is hard-coded to less than ten, and not related to the length of the array. That means that no matter the size of the array, each index will be looped over the same number of times. The inner loop contributes only a constant amount of work per element, just like the constant-time example shown previously, so the operation as a whole maintains its linear time complexity.

O(log n): Logarithmic Time

A logarithmic time complexity is the coolest to break down and understand. Let’s look at the code below to see how it works.

var increasingStep = function(number) {
  for (var i = 1; i < number; i = i * 2) {
    console.log(i);
  }
};

Let’s say that number equals five. That means we can expect the function to log 1, 2, and 4. So, with an input of five, the function looped three times. How many logs can we expect with the number six being passed in?

Yep, the same three numbers as when we passed in five. What about passing in seven, or even eight? Still the same three numbers. Once we pass in nine, however, then the function will log a fourth number, 8. You can follow this pattern, incrementing number by one, until you get to seventeen, when the log finally gives you the next number, 16.

The ratio here is that, as number increases, time increases at an ever-smaller rate. Eventually you can increase number by a huge amount (even double it) and add only a single extra iteration to the operation.
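To make that pattern concrete, here is a small sketch (the helper name is my own) that counts how many times a loop like the one above actually runs for a given input:

```javascript
// Count the iterations of a doubling loop: i = 1, 2, 4, 8, ...
// Doubling the input adds only one more iteration.
var countSteps = function(number) {
  var steps = 0;
  for (var i = 1; i < number; i = i * 2) {
    steps++;
  }
  return steps;
};
```

Calling it with 5 gives 3 steps, with 9 gives 4 steps, and with 17 gives 5 steps, matching the pattern described above.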

Another way to look at logarithmic time complexity is to consider the population you are searching over. In a binary search, for example, every time you decide whether your item lies to the left or the right, you cut the remaining population to search in half. The ratio of additional input to additional time is the same as above: the time it takes to search the collection grows far more slowly than the size of the collection.
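A sketch of an iterative binary search (my own example, not from the text above) shows this halving in action. Each pass through the loop discards half of the remaining elements, so a sorted array of n items takes roughly log n comparisons:

```javascript
// Binary search over a sorted array. Each comparison halves the
// remaining search space, giving O(log n) time.
var binarySearch = function(sortedArray, target) {
  var low = 0;
  var high = sortedArray.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) {
      return mid;        // found it
    } else if (sortedArray[mid] < target) {
      low = mid + 1;     // discard the left half
    } else {
      high = mid - 1;    // discard the right half
    }
  }
  return -1;             // not present
};
```

Doubling the array’s length adds only one more comparison in the worst case.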

O(n²): Quadratic Time

Quadratic time complexity is best illustrated with nested for loops, so let’s take a look at the code below to see what happens.

var terribleTime = function(array) {
  for (var i = 0; i < array.length; i++) {
    for (var j = 0; j < array.length; j++) {
      console.log(array[i] + array[j]);
    }
  }
};

Here, the outer loop runs once per element, and for each of those iterations the inner loop runs over the entire array again. For an array of n elements, the inner body executes n² times, so a single additional input adds 2n + 1 more iterations, not just one.

Please note, nested for loops do not always represent quadratic time complexity. You need to carefully consider whether the inner loop traverses the whole collection before you conclude that an operation is quadratic. Consider the second code example in the linear time section above, where a for loop runs inside a forEach. While the forEach does execute a function on each index in the array, the inner for loop’s iterations are not related to the length of the array; it is hard-coded to run a fixed number of times. Therefore, even with a for loop inside a forEach, the operation’s time complexity is not quadratic.
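For contrast, here is a sketch of a genuinely quadratic operation (the function is my own illustrative example): checking whether an array contains a duplicate by comparing every element against every other element.

```javascript
// Genuinely quadratic: both loops run to array.length, so the
// comparison in the inner body executes n * n times.
var hasDuplicate = function(array) {
  for (var i = 0; i < array.length; i++) {
    for (var j = 0; j < array.length; j++) {
      if (i !== j && array[i] === array[j]) {
        return true;
      }
    }
  }
  return false;
};
```

Because the inner loop’s bound is the array’s length, not a hard-coded constant, the work grows with the square of the input size.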

O(2^n): Exponential Time

The best way to describe exponential time is with really big numbers. Consider brute-forcing an eight-character, mixed alphanumeric password. You would need to check every letter and digit in each position, for every combination of eight characters. Based on one quick online calculator, it would take approximately three years and 128 days to crack with a computer running Mac OS X 10.7. You can imagine what adding just one more character to that password would do to any brute-force operation.
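A classic code illustration of exponential time (my example, not the password scenario above) is the naive recursive Fibonacci function. Each call spawns two more calls, so the number of calls roughly doubles every time n grows by one:

```javascript
// Naive recursive Fibonacci: each call branches into two more calls,
// giving roughly O(2^n) total calls as n grows.
var naiveFib = function(n) {
  if (n < 2) {
    return n;
  }
  return naiveFib(n - 1) + naiveFib(n - 2);
};
```

naiveFib(30) already makes over a million calls, and each increment of n roughly doubles the work, which is exactly the explosion the password example describes.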

To Summarize

Time complexity has a huge impact on our applications. Depending on how you implement a solution, your application could take unnecessary time and introduce inefficiencies. Make sure you take the time to consider what your operations are doing, and how long they may need in the worst case.