The O in Big-O Notation

Patel Sunny
7 min read · Dec 16, 2021


Last week I attended a Computer Science vs Programming workshop where I learned about the difference between the two, but most importantly how we apply Computer Science topics in our day-to-day programming. As Software Engineers, our goal is to solve problems efficiently and elegantly, and this is where a tool known as Big-O Notation makes its entrance. Big-O Notation is a tool used in Computer Science to describe the efficiency of an algorithm with respect to the size of the input. As developers, we tend to write our own algorithms to solve problems or use built-in ones provided by our language of choice. For example, for my Python users, we have the sort() method.
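As a quick refresher, here is a minimal sketch of Python's built-in sorting, both the in-place sort() method and the sorted() function:

```python
# list.sort() sorts in place and returns None;
# sorted() returns a new sorted list and leaves the original untouched.
names = ["Charlie", "Alice", "Bob"]
names.sort()
print(names)            # ['Alice', 'Bob', 'Charlie']

numbers = [3, 1, 2]
print(sorted(numbers))  # [1, 2, 3]
print(numbers)          # [3, 1, 2] -- unchanged
```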

You wrote an algorithm and it solves the problem, so why care about its efficiency? After all, your job was to solve the problem, right? Sort of, it depends! God, I love that phrase. Aside from personal projects, as engineers we focus on building scalable applications: applications that can grow in size and handle a growing number of users, which translates to an increase in data within your system. Now, within such a system, you may have defined a function that implements an algorithm that, let's say, sorts a list of users by their first name. You have a working algorithm, and it ran really fast when the number of users was small. However, you noticed that as the number of users increased, so did the amount of time it took for your function to sort them.

In a highly scalable application, such an algorithm may not have a large enough impact on performance to warrant concern. However, let's say that this algorithm takes a few seconds to run, or, even worse, a whole minute. Yikes! An application that offers so many features, and one simple algorithm is dragging down its performance: that doesn't sound good! As a developer, you'd want your clients to have a seamless experience using your application, with minimal interruption.

So, it turns out that a Software Engineer’s job is not only to solve problems but depending on the requirements of the project, solve problems efficiently. The focus of this blog post is going to be on how we can describe the performance of an algorithm.

Oh Efficiency, How Do I Describe Thee

If only they had had Software Engineers in Shakespeare's time; the title of this section would have been an awesome name for a play, but alas, that is not the case. We could try to describe the efficiency of an algorithm in absolute units of time, but that wouldn't do us any good, because other factors can affect the performance of an algorithm. For example, the computer the algorithm was run on could have superior hardware, or you could have had minimal background processes running at the time, reducing the measured runtime.

So absolute units of time are out of the picture because other factors can affect the performance of an algorithm. Instead, let's just focus on how the performance of an algorithm is affected by increasing the size of the input. That is exactly what Big-O Notation does. The formal definition of Big-O Notation is as follows: "Big-O Notation is an asymptotic analysis of the behavior of an algorithm, where asymptotic analysis is the analysis of the behavior of a function as its input increases towards infinity".

When describing an algorithm in terms of Big-O Notation, there are a few things to keep in mind:

  • The function should be defined in terms of the size of the input.
  • A smaller Big-O Notation is more desirable than a larger one, since it indicates better performance in terms of time and space complexity. More on complexity later.
  • Big-O Notation describes the worst-case scenario, because the best-case scenario is a luxury that cannot be guaranteed.
  • A Big-O function should be simplified to show only its most dominant mathematical term.

Above I mentioned time and space complexity. The time complexity of an algorithm is the amount of time the algorithm takes to run with respect to the input size. The space complexity of an algorithm, as you may have guessed by now, is the total amount of space the algorithm uses with respect to the input size.
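To make the distinction concrete, here is a small sketch of my own (not from the workshop): two ways to sum the numbers 1 through n that take the same time but different amounts of space.

```python
def sum_with_list(n):
    # Builds a list of all n numbers first: O(n) time and O(n) extra space.
    values = list(range(1, n + 1))
    return sum(values)

def sum_running_total(n):
    # Keeps only a single running total: O(n) time but O(1) extra space.
    total = 0
    for value in range(1, n + 1):
        total += value
    return total

print(sum_with_list(100))      # 5050
print(sum_running_total(100))  # 5050
```

Both produce the same answer; the difference only shows up in how much memory they need as n grows.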

The Common Complexity Classes

The sections below present the common complexity classes you can use to describe the performance of your algorithm in terms of time and space complexity. The best way, in my opinion, to analyze the performance of an algorithm is to take the input, run it through the function, and see what type of pattern you notice. From there, compare that pattern to the pattern associated with each class's mathematical function.

Constant: O(1)

A constant complexity O(1) means the algorithm takes roughly the same number of steps regardless of the input size. A constant operation does not depend on the input; therefore, in the case of a constant-time algorithm, there will be no relationship between the size of the input and the number of steps required. The following are examples of algorithms that run in constant time.

def constant_1(n):
    return n * 2 + 1

def constant_2(n):
    for idx in range(100):
        print(idx)

The first function is considered to run in constant time because, regardless of the input size, it performs a single simple operation. The number of steps is always the same regardless of the input size. In the second example, we have a for loop that prints the values 0 to 99 every time the function is called. The for loop does not depend on the size of the input here. Every time you call this function, the for loop will execute its block of code exactly 100 times.

Logarithmic: O(log(n))

When first learning about this complexity, I had difficulty wrapping my mind around it. Algorithms that display logarithmic behavior typically halve the size of the input at each step, which means that every time we double the size of the input, we only require one additional step compared to before. An example might help here.

def logarithmic(n):
    if n <= 1:
        return
    logarithmic(n // 2)

Let’s say n = 2. When this function is first called, our base case isn’t true, so we move on to the recursive step, which halves the input and invokes the function again with n = 1. This time our base case is true and we return out of the function, and that’s all, folks. How many recursive steps was that? 1 step, to be exact. Now let’s double our input to n = 4. Try it out yourself. The answer should be 2.
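A classic real-world example of the same halving behavior is binary search over a sorted list: each comparison discards half of the remaining search range, so the number of steps grows with log(n). This is my own illustrative sketch, not code from the post above.

```python
def binary_search(sorted_items, target):
    # Each loop iteration halves the search range,
    # so the loop runs O(log n) times.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

Doubling the length of the list only adds one more comparison in the worst case, exactly the pattern described above.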

Linear: O(n)

You can think of algorithms that run in linear time as algorithms that touch each element of the input once. Algorithms that iterate through the input without nested loops, or recurse by reducing the size of the input by one each time, typically run in linear time. As the size of the input increases, so does the number of steps, in a linear fashion.

def linear_1(n):
    for idx in range(n):
        print(idx)

def linear_2(n):
    if n <= 1:
        return
    linear_2(n - 1)

Loglinear: O(n * log(n))

This complexity refers to an algorithm whose runtime is proportional to n times the logarithm of the input size. Popular algorithms such as Merge Sort and Quick Sort fall under this category, as they combine both linear and logarithmic behavior.

def loglinear(n):
    if n <= 1:
        return
    for idx in range(1, n):
        print(idx)
    loglinear(n // 2)
    loglinear(n // 2)
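To make the Merge Sort connection concrete, here is a minimal sketch of my own: the recursion halves the input (the logarithmic part), and the merge step touches every element at each level (the linear part).

```python
def merge_sort(items):
    # Halving the list gives O(log n) levels of recursion...
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # ...and merging touches each element once per level: O(n) work per level,
    # for O(n * log n) total.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```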

Polynomial: O(n ^ c)

The polynomial complexity represents an algorithm whose performance is proportional to a fixed power of the input size. So think linear, but squared or cubed. This complexity has the form O(n^c), where n is the size of the input and c is some fixed constant. Nested loops are usually an indicator of polynomial complexity.

def quadratic(n):
    for idx_1 in range(1, n):
        for idx_2 in range(1, n):
            print(idx_1, idx_2)

def cubic(n):
    for idx_1 in range(1, n):
        for idx_2 in range(1, n):
            for idx_3 in range(1, n):
                print(idx_1, idx_2, idx_3)
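For a quadratic example that actually does something useful, here is a sketch of my own: checking a list for duplicates by comparing every pair of elements, which makes roughly n * n comparisons in the worst case.

```python
def has_duplicate(items):
    # Nested loops over the same input: O(n^2) comparisons in the worst case.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate([1, 2, 3, 2]))  # True
print(has_duplicate([1, 2, 3, 4]))  # False
```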

Exponential: O(c^n)

An algorithm classified as exponential is one whose number of steps doubles (for c = 2) with each addition to the input size. This complexity has the form O(c^n), where n is the size of the input and c is some fixed constant. An indicator of exponential complexity is recursive code that makes a constant number of recursive calls, more than one, in each stack frame.

def exponential_2(n):
    if n <= 1:
        return
    exponential_2(n - 1)
    exponential_2(n - 1)
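The textbook example of this pattern is the naive recursive Fibonacci function, which makes two recursive calls per stack frame (my own sketch, not from the example above):

```python
def fib(n):
    # Two recursive calls per frame lead to roughly O(2^n) total calls.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Adding 1 to n roughly doubles the total number of calls, which is exactly the growth pattern described above.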

Factorial: O(n!)

Recall what a factorial is. The factorial of n is the product of all positive integers from 1 to n. You can probably figure out why an algorithm with complexity O(n!) is the worst of its kind. A typical indicator of this complexity is recursive code with a variable number of recursive calls in each stack frame.

def factorial(n):
    if n <= 1:
        return
    for i in range(1, n):
        factorial(n - 1)
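A concrete case of factorial growth is generating every permutation of a list: there are n! possible orderings, so any algorithm that produces them all does at least O(n!) work. A minimal recursive sketch of my own:

```python
def permutations(items):
    # n choices for the first position, n - 1 for the next, and so on:
    # n! results in total, from a variable number of recursive calls per frame.
    if len(items) <= 1:
        return [items]
    result = []
    for i in range(len(items)):
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([items[i]] + perm)
    return result

print(len(permutations([1, 2, 3])))  # 6, i.e. 3!
```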

Adieu!

Hope you’ve found this article to be helpful! Being able to understand the efficiency of your algorithm is an important tool to keep in your developer kit. By being able to describe the efficiency of your algorithm, you can then start thinking about how you can possibly improve its complexity. As I mentioned earlier, an inefficient algorithm can affect the performance of your application.
