Analysing an Algorithm - Big O Notation

John Anto
5 min read · Feb 6, 2024



Introduction

In the field of computer science, the efficiency of algorithms is an important aspect of solving problems. Time complexity analysis is a systematic approach to understanding and quantifying an algorithm’s computational efficiency. It evaluates how an algorithm’s runtime scales with input size, which helps us judge performance and choose the best method for a specific task.

An algorithm is a set of instructions for completing a task. A popular, easy example of an algorithm is looking up a word in a dictionary. Since the dictionary is organized alphabetically, we open it near the middle, compare the word we want with the page we land on, and keep halving the remaining range. This strategy is far more efficient than scanning from the beginning of the dictionary, and it is the basis for the binary search algorithm.
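To make the dictionary strategy concrete, here is a minimal binary search sketch in Python (an illustrative snippet; the names binary_search, items, and target are placeholders of my choosing):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2      # open the "dictionary" in the middle
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1            # target lies in the right half
        else:
            high = mid - 1           # target lies in the left half
    return -1

print(binary_search(["ant", "bee", "cat", "dog", "owl"], "cat"))  # prints 2
```

Each comparison halves the remaining search range, which is exactly why this strategy runs in O(log n) time.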

Running Time of an Algorithm

Whenever we discuss an algorithm, we will discuss its running time. Generally, we want to choose the most efficient algorithm, whether we are optimizing for time or space. The time-complexity analysis of an algorithm is done with the help of asymptotic curves from mathematics, and hence it is called asymptotic analysis.

What is the ideal way to compare the running times of algorithms? Wall-clock execution time cannot be used because it depends on the hardware: the better the hardware, the shorter the execution time. The number of statements executed also won’t give us a good measure, because it varies with the programming language. A better option is to express the running time of the algorithm as a mathematical function f(n) of the input size. This makes the comparison independent of machine speed, programming style, etc.
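For instance, counting the basic operations in a simple summing routine yields such a function (a rough, illustrative count; the exact constants do not matter):

```python
def total(numbers):
    s = 0              # 1 operation
    for x in numbers:  # the loop body executes n times
        s += x         # n operations in total
    return s           # 1 operation  ->  f(n) = n + 2
```

Whatever the machine or language, f(n) grows linearly with n, and that growth rate is what we compare.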

The running time of an algorithm will increase with the input size. The rate at which the running time increases as a function of the input is called the rate of growth. Below is a list of common rates of growth, as functions of the input size n.

Common rates of growth of the running time of an algorithm with input size n, arranged in increasing order:

  1. O(1): constant
  2. O(log n): logarithmic
  3. O(n): linear
  4. O(n log n): linearithmic
  5. O(n²): quadratic
  6. O(n³): cubic
  7. O(2ⁿ): exponential
  8. O(n!): factorial

Types of Analysis

By analysing an algorithm, we come to know which inputs make it take the least time and which inputs make it take the longest. The former is called the best case, and the latter the worst case for the algorithm. And thus we have 3 types of analysis:

  1. Worst-case analysis: identifies the inputs for which the algorithm takes the longest time.
  2. Best-case analysis: identifies the inputs for which the algorithm takes the least time.
  3. Average-case analysis: predicts the running time of the algorithm over typical or random inputs.

In computer algorithms, we are usually most interested in the worst-case analysis, because it guarantees an upper bound on the running time no matter which input arrives; the linear-search sketch below makes the three cases concrete.
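A minimal linear-search sketch in Python (an illustrative example of my own): the target at index 0 is the best case (1 comparison), a missing target is the worst case (n comparisons), and a uniformly random position takes about n/2 comparisons on average.

```python
def linear_search(items, target):
    """Return the index of target by scanning left to right, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:
            return i   # best case: found at index 0 after 1 comparison
    return -1          # worst case: target absent after n comparisons
```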

Big-O Notation

Big-O is the asymptotic notation for an upper bound on our rate-of-growth function f(n) of the input size n. It is written f(n) = O(g(n)), where g(n) is another function that bounds f(n) from above.

Mathematically, we can say that there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0; i.e., g(n) is an asymptotic upper bound for f(n).

Big-O notation analysis of the rate-of-growth function f(n) of input size n.

So in simple terms, for the rate-of-growth function f(n) of the algorithm, there is another function c * g(n) that always takes values at least as large as f(n) for the same inputs when n ≥ n0, keeping an upper bound on f(n). This is described in the graph above.

For example, let’s say we have a rate-of-growth function f(n) = n² + 1, for n > 0.

From the definition, there will be another function c * g(n) where f(n) ≤ c * g(n) for n ≥ n0.

Substituting f(n), we get: n² + 1 ≤ c * g(n), for n ≥ n0.

Here n² + 1 grows quadratically, and the n² term is the deciding factor. So we can choose g(n) = n² and first try c = 1:

n² + 1 ≤ n², which is wrong, because the left-hand side adds 1 to n² and will always be greater than n². So we increment c to the next integer, c = 2:

n² + 1 ≤ 2 * n² is a correct expression, because substituting values for n gives valid inequalities. The least value of n that satisfies the condition is 1, so we can say that n0 = 1 and c = 2.

So, 0 ≤ n² + 1 ≤ 2n² for all n ≥ 1, and 2n² is an asymptotic upper bound for n² + 1.
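As a quick sanity check, a few lines of Python (illustrative only) confirm the inequality for the first several values of n:

```python
# Verify 0 <= n^2 + 1 <= 2n^2 for n = 1..10, i.e. c = 2, g(n) = n^2, n0 = 1.
for n in range(1, 11):
    f = n**2 + 1       # the rate-of-growth function f(n)
    bound = 2 * n**2   # the candidate upper bound c * g(n)
    assert 0 <= f <= bound, f"bound fails at n = {n}"
    print(f"n = {n:2d}:  f(n) = {f:4d}  <=  2n^2 = {bound:4d}")
```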

Visualizing this in a graph gives a clearer picture.

Graphical plot for big-O notation analysis.

In the graph above, we can see that 2n² always takes higher values than f(n) for the same input size n, and represents an upper bound of the function f(n) = n² + 1 when n ≥ n0; here c = 2 and n0 = 1.

Generally, we discard the lower values of n; the rate of growth at small n is not important, since asymptotic analysis is concerned with large inputs. The Big-O notation of the above rate-of-growth function is O(n²), which means the algorithm has quadratic time complexity.
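To tie this back to code, here is a deliberately quadratic routine (a hypothetical example of my own) whose operation count is roughly f(n) = n² + 1, and which is therefore O(n²):

```python
def count_ordered_pairs(items):
    """Count pairs (a, b) with a <= b; intentionally quadratic."""
    count = 0                  # 1 operation
    for a in items:            # outer loop: n iterations
        for b in items:        # inner loop: n iterations per outer pass
            if a <= b:         # this comparison executes n * n times
                count += 1
    return count               # roughly f(n) = n^2 + 1  ->  O(n^2)
```

Doubling the input size roughly quadruples the running time, which is the practical meaning of quadratic growth.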

Conclusion

In conclusion, analysing time complexity is a critical part of computer science, since it allows us to measure and compare the efficiency of different algorithms. Understanding how algorithms scale with input size gives vital insight into their performance characteristics, allowing us to make informed decisions when choosing and designing algorithms for real-world applications. Time-complexity analysis, commonly expressed in Big-O notation, provides a common vocabulary for discussing algorithmic efficiency.

Follow me on YouTube for more insightful videos: https://youtube.com/@coderoly


John Anto

A tech enthusiast who talks about programming and works in the front-end engineering space.