A Quick Guide on Big O Notation — Time Complexity

Mary Dick
Mar 23 · 5 min read

As I’ve continued to move along with my job search, I’ve gained a solid understanding of Big O notation and time complexity in algorithms. This is an important topic to study when looking for a tech job: most tech interviews contain at least one algorithm question, and a base knowledge of time complexity is fundamental to impressing any interviewer.

What is Time Complexity?

Time complexity is a measurement of the time it takes for a computer to run a program as a function of the length of the input. Although time is an important factor when writing an algorithm, time complexity is actually not measured in the milliseconds, seconds, or (god forbid) minutes it takes for the program to complete. This is because the amount of time a program takes to run depends on the computer itself. A program that takes 0.01 seconds on one computer might take 0.08 seconds on another. That difference seems insignificant, especially given that both computers ran the program at fairly high speeds, but the inconsistency is enough to make wall-clock time an inadequate measurement of an algorithm’s efficiency. This is why time complexity is measured by the number of operations O that an algorithm performs given a number of elements n, rather than by the amount of time it takes to run.

Constant Time — O(1)

Constant time implies that no matter the length of the input, the number of operations will always remain the same.

In this example, the function findThirdElement() takes in an array and returns the third element (the element at index 2). The time complexity of this function is O(1) because the number of operations that occur is independent of the length of the input. No matter how long array1 is, the function will always find and return the third element.
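The code for this example originally appeared as an image; a minimal JavaScript sketch of `findThirdElement()` as described might look like this (the sample array values are illustrative):

```javascript
// Returns the third element (index 2) of the given array.
// One operation, regardless of how long the array is: O(1).
function findThirdElement(array1) {
  return array1[2];
}

findThirdElement([7, 3, 9, 1, 5]); // 9
findThirdElement([7, 3, 9, 1, 5, 12, 8, 4]); // still 9, still one operation
```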

O(1) is the ideal time complexity because the number of operations is completely independent of the length of the input.

Linear Time — O(n)

Linear time implies that, as the input increases, the number of operations also increases proportionally.

In the above example, the function hasMatch() takes in an array and an integer and iterates through the array to determine whether it contains an element equal to the given integer. The time complexity of this function is O(n) because the number of operations is directly proportional to the length of the input array. The array currently has five elements, so in the worst case the function performs five operations as it iterates through the array. Because we can’t guarantee that the length of the array will always be five, we use the variable n to represent the number of elements, hence the O(n) notation.
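The original snippet was an image, so here is a minimal JavaScript sketch of `hasMatch()` as described, assuming a simple linear scan:

```javascript
// Scans the array for an element equal to num.
// Worst case: checks all n elements before returning — O(n).
function hasMatch(array1, num) {
  for (let i = 0; i < array1.length; i++) {
    if (array1[i] === num) {
      return true; // found a match, stop early
    }
  }
  return false; // checked every element, no match
}

hasMatch([2, 4, 6, 8, 10], 6); // true
hasMatch([2, 4, 6, 8, 10], 5); // false, after checking all five elements
```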

O(n) is an acceptable time complexity because the number of operations grows only in direct proportion to the length of the input.

Quadratic Time — O(n²)

Quadratic time implies that the number of operations is proportional to the square of the length of the input.

In the above function, containsDuplicate(), two arrays are passed in and compared to one another. The function loops through both arrays in a nested loop, comparing each element in the first array to each element in the second array until it finds a duplicate. The time complexity of this function is O(n²) because it contains a nested loop, with the outer and inner loops dependent on the lengths of array1 and array2 respectively: assuming both arrays have roughly n elements, that’s n × n comparisons in the worst case.
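The code for `containsDuplicate()` was also an image; a minimal sketch of the nested-loop approach described above might look like this:

```javascript
// Compares every element of array1 to every element of array2.
// With n elements in each array, that's n × n comparisons
// in the worst case — O(n²).
function containsDuplicate(array1, array2) {
  for (let i = 0; i < array1.length; i++) {
    for (let j = 0; j < array2.length; j++) {
      if (array1[i] === array2[j]) {
        return true; // found an element present in both arrays
      }
    }
  }
  return false; // no shared element after all n × n comparisons
}

containsDuplicate([1, 2, 3], [4, 5, 3]); // true
containsDuplicate([1, 2, 3], [4, 5, 6]); // false
```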

O(n²) is not an ideal time complexity because the number of operations grows quadratically: doubling the length of the input quadruples the amount of work.

Exponential Time — O(2^n)

Exponential time implies that the number of operations is proportional to two to the nth power, n being the length of the input.

The fibonacci() function takes in a number, an index into the Fibonacci sequence, and calculates and returns the value of the sequence at that index. This is a recursive function: each call above the base cases invokes the function twice more, and that doubling at every level is what produces the exponential growth.
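The original code was an image; a minimal sketch of the recursive `fibonacci()` described above, assuming the standard base cases of 0 and 1:

```javascript
// Recursively computes the value at index num of the Fibonacci sequence.
// Each call (past the base cases) spawns two more calls, so the number
// of calls grows roughly as 2^n — exponential time.
function fibonacci(num) {
  if (num < 2) {
    return num; // base cases: fibonacci(0) = 0, fibonacci(1) = 1
  }
  return fibonacci(num - 1) + fibonacci(num - 2);
}

fibonacci(10); // 55
```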

The time complexity of the function is O(2^n): each additional unit of input roughly doubles the number of recursive calls. This is okay on a small scale but quickly gets out of hand as the input grows, and it is not a time complexity to strive for when writing algorithms.

Factorial Time — O(n!)

Factorial time implies that the number of operations is proportional to n factorial.

The function possibleCombinations() takes in an array and returns the number of possible orderings of all its elements, which for n elements is exactly n!. Factorial is a mathematical function in which a number n is multiplied by every integer from n-1 down to 1. For example, 5! evaluates to 5 × 4 × 3 × 2 × 1 = 120.

The difference between 3! (6) and 6! (720) is enormous, which is why O(n!) is considered the absolute worst case when writing algorithms: the number of operations grows factorially as the input grows. There’s pretty much never going to be a case in which O(n!) is acceptable, but it’s important to know it either way.
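The code for this example was also an image. Here is a minimal JavaScript sketch, assuming the function counts orderings by recursively generating them, so the work it does genuinely grows as n! (a function that merely multiplied 1 × 2 × … × n would compute the same number in only O(n) steps):

```javascript
// Counts every possible ordering of the array's elements by
// recursively fixing one element at a time — n! orderings in total,
// so the number of recursive calls grows factorially.
function possibleCombinations(array1) {
  if (array1.length <= 1) {
    return 1; // a single element (or none) has exactly one ordering
  }
  let total = 0;
  for (let i = 0; i < array1.length; i++) {
    // Fix element i first, then count orderings of the remaining elements.
    const rest = array1.slice(0, i).concat(array1.slice(i + 1));
    total += possibleCombinations(rest);
  }
  return total;
}

possibleCombinations([1, 2, 3]); // 6, i.e. 3!
possibleCombinations([1, 2, 3, 4]); // 24, i.e. 4!
```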

Conclusion

I hope this article was helpful in understanding time complexity. I plan to write a corresponding article about space complexity, and the relationship between the two, sometime in the future. There’s so much more to learn about algorithms and data structures, but it sure feels good to have a strong foundation on which I can build my programming skills.

Weekly Webtips

Explore the world of web technologies through a series of tutorials
