The Big O Notation

AroicX
Aug 8, 2022


NOTE: This is not everything there is to know about Big O; it is just a step to paint a mental picture of what Big O is, in the easiest way I can explain it.

So, if you have ever wondered what the Big O notation is, or you are like me, someone who studied Computer Science in school but didn’t pay attention in class to understand what Big O is (or didn’t deem it fit to learn about it), then you are in the right spot. Today I will be sharing, from the little knowledge I have acquired:

  • What the Big O Notation is and why it is used
  • Time and Space Complexity

“Big O notation is a mathematical notation that describes the limiting behaviour of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.” — Wikipedia’s definition of Big O notation.

The basic concept is the time taken for a particular task to be completed, given a particular input.

(Image: CSE 373 slides, University of Washington)

Why is the Big O Notation used?

In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. In other words, it measures a function’s time or space complexity. This means, we can know in advance how well an algorithm will perform in a specific situation. — Nicholas Obert
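
For instance (a small sketch of my own, not from the quote above), two functions can return the same answer while the work they do grows very differently as the input grows:

// Summing the numbers 1..n in two ways
function sumLoop(n) {    // does n additions - Big O(n)
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

function sumFormula(n) { // does one calculation no matter how big n is - Big O(1)
  return (n * (n + 1)) / 2;
}

console.log(sumLoop(1000));    // 500500, after 1000 loop iterations
console.log(sumFormula(1000)); // 500500, in a single step

Big O lets us describe that difference, O(n) versus O(1), without timing anything on a specific machine.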

Let’s talk about the order of growth in Big O with respect to time complexity.

Big O Notation orders of growth, ranging from good to bad:

O(1) — Constant Time

O(1) is said to be constant time because the number of steps does not scale with the input being passed to the function.

// EXAMPLE OF A CONSTANT FUNCTION - BIG O(1)
function constant(arr) {
  let result = 100 * 1000; // constant step - Big O(1)
  return result;           // constant step - Big O(1)
  // it is constant because the input does not affect the operation being performed
}

let arr = [1, 2, 3, 4, 5];
console.log(constant(arr));

--- Output ---
100000

In the example above, we can see that the input arr passed to the function does not affect the operation being performed, which means that any time this function is called, the amount of work stays constant.

O(n) — Linear Time

O(n) is said to be linear time because the number of steps grows in direct proportion to the size n of the input passed to the function.

// EXAMPLE OF A LINEAR FUNCTION - BIG O(n)
function linear(arr) {
  for (let n = 0; n < arr.length; n++) { // linear loop - Big O(n)
    let result = 100 * 1000;             // constant step - Big O(1)
    console.log(arr[n], result);         // constant step - Big O(1)
  }
}

let arr = [1, 2, 3, 4, 5];
linear(arr);

--- Output ---
1 100000
2 100000
3 100000
4 100000
5 100000

In the example above, the for loop is linear because the number of iterations is equal to the number of elements in the input; the constant-time work inside the loop does not change that when considering the efficiency of the function.

O(log n) — Logarithmic Time

A logarithm is the power that a base needs to be raised to in order to get some other number. For example, log₂(8) = 3, because 2³ = 2 × 2 × 2 = 8.
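
As a quick check (a small snippet of my own), JavaScript’s built-in Math.log2 answers exactly that question:

console.log(Math.log2(8));  // 3, because 2 ** 3 === 8
console.log(Math.log2(16)); // 4, because 2 ** 4 === 16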

To understand O(log n), we first have to understand logarithms. When an algorithm has O(log n) running time, it means that as the input size grows, the number of operations grows very slowly.

// -- EXAMPLE OF A LOGARITHMIC FUNCTION - BIG O(log n) --

// This is a recursive function because it calls itself
function logFuncA(n) {
  if (n === 0) return "Done"; // base case - Big O(1)
  n = Math.floor(n / 2);      // halves the input on every call
  return logFuncA(n);
}

// This is an iterative (non-recursive) function
function logFuncB(n) {
  while (n > 1) {
    n = Math.floor(n / 2); // halves the input on every iteration
  }
  return n;
}

In the example above, logFuncA is a recursive algorithm that halves n on every call until it reaches 0, and logFuncB does the same thing iteratively; in both cases the number of steps grows logarithmically with n.
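
A classic real-world O(log n) algorithm is binary search on a sorted array. Here is a minimal sketch of my own (not from the original examples), where each comparison discards half of the remaining elements:

// -- BINARY SEARCH - a classic Big O(log n) algorithm --
function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {                         // runs at most O(log n) times
    const mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) return mid;  // found it - Big O(1) per step
    if (sortedArr[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                        // discard the right half
  }
  return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3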

O(n log n) — Linearithmic Time

Linearithmic time O(n log n) is the Muddy Mudskipper of time complexities — the worst of the best (although, less grizzled and duplicitous). It is a moderate complexity that floats around linear time O(n) until input reaches advanced size, taking up time proportional to O(n log n) to run on inputs of size n.

(Image: graphical representation of Big O time complexities, from an article by Adrian Mejia)
// -- EXAMPLE OF A LINEARITHMIC FUNCTION - BIG O(n log n) --
function nLogNFunc(n) {
  let y = n;                       // constant step - Big O(1)
  while (n > 1) {                  // this loop runs O(log n) times
    n = Math.floor(n / 2);
    for (let i = 1; i <= y; i++) { // this loop is linear - Big O(n)
      console.log(i);
    }
  }
}

let n = 8;
nLogNFunc(n);
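
The best-known real-world O(n log n) algorithm is merge sort. Here is a minimal sketch of my own (not from the original article) that splits the input in half O(log n) times and does O(n) merging work at each level:

// -- MERGE SORT - a classic Big O(n log n) algorithm --
function mergeSort(arr) {
  if (arr.length <= 1) return arr;           // base case - Big O(1)
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid)); // halving gives O(log n) levels
  const right = mergeSort(arr.slice(mid));
  return merge(left, right);                 // O(n) merging work per level
}

function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  return result.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([5, 3, 8, 1, 9, 2])); // [ 1, 2, 3, 5, 8, 9 ]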

O(n²) — Quadratic Time

O(n²) describes an algorithm whose performance is directly proportional to the square of the input size. A typical case is a pair of nested loops over the same input, like walking through a square matrix with the same number of rows and columns.

// EXAMPLE OF A QUADRATIC FUNCTION - BIG O(n2)
function quadratic(n) {           // quadratic function - Big O(n²)
  for (let i = 0; i < n; i++) {   // linear loop - Big O(n)
    for (let j = 0; j < n; j++) { // linear loop - Big O(n)
      console.log(i, j);          // constant step - Big O(1)
    }
  }
}

let n = 4;
quadratic(n);

--- Output ---
16 pairs of (i, j) are printed, where i is the row and j is the column

In the example above, we have two for loops, with j nested inside i; for each of the n values of i, the inner loop runs through all n values of j.

when i = 0, j = 0
i = 0, j = 1
i = 0, j = 2
i = 0, j = 3
...
4 x 4 = 16
4^2 = 16
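
A more practical O(n²) pattern is comparing every element of an array with every other element; hasDuplicate below is a hypothetical helper I wrote just for this illustration:

// -- ALL-PAIRS COMPARISON - a common Big O(n²) pattern --
function hasDuplicate(arr) {
  for (let i = 0; i < arr.length; i++) {       // linear loop - Big O(n)
    for (let j = i + 1; j < arr.length; j++) { // linear loop - Big O(n)
      if (arr[i] === arr[j]) return true;      // constant comparison - Big O(1)
    }
  }
  return false;
}

console.log(hasDuplicate([1, 2, 3, 2])); // true
console.log(hasDuplicate([1, 2, 3, 4])); // false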

O(n³) — Cubic Time

In the case of cubic complexity, the processing time of an algorithm is proportional to the cube of the input elements. The complexity of the following algorithm is 10*10*10 = 1,000. The three loops have a maximum of 10. — Source

O(n³) is just like O(n²) but with an extra step: it does O(n) work in three nested places in order for the algorithm to do its job.

// EXAMPLE OF A CUBIC FUNCTION - BIG O(n3)
function cube(n) {                  // cubic function - Big O(n³)
  for (let i = 0; i < n; i++) {     // linear loop - Big O(n)
    for (let j = 0; j < n; j++) {   // linear loop - Big O(n)
      for (let k = 0; k < n; k++) { // linear loop - Big O(n)
        console.log(i, j, k);       // constant step - Big O(1)
      }
    }
  }
}

O(2ⁿ) — Fibonacci or Exponential Time

An exponential-time algorithm is one whose running time grows as an exponential function of the size of its input.

The Fibonacci sequence is a set of integers (the Fibonacci numbers) that starts with a zero, followed by a one, then by another one, and then by a series of steadily increasing numbers. The sequence follows the rule that each number is equal to the sum of the preceding two numbers. — Source

For example: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34

// -- EXAMPLE OF A FIBONACCI SEQUENCE - BIG O(2^n) --
function fib(n) {
  if (n === 0) {
    return 0;
  }
  if (n === 1) {
    return 1;
  }
  return fib(n - 1) + fib(n - 2); // two recursive calls per call - roughly Big O(2^n)
}

Here n denotes the input to the algorithm; each call to fib makes two more recursive calls, so the total number of calls roughly doubles every time n increases by one.
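
Counting the calls makes that growth visible (a small experiment of my own, using a hypothetical fibCounted wrapper):

// Count how many calls the naive recursive Fibonacci makes
let calls = 0;
function fibCounted(n) {
  calls++;
  if (n === 0) return 0;
  if (n === 1) return 1;
  return fibCounted(n - 1) + fibCounted(n - 2);
}

calls = 0; fibCounted(10); console.log(calls); // 177 calls
calls = 0; fibCounted(20); console.log(calls); // 21891 calls

Doubling n from 10 to 20 multiplies the number of calls by more than a hundred, which is the hallmark of exponential time.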

O(n!) — Factorial Time

A factorial takes a number and multiplies every positive integer up to and including that number to get a result. An O(n!) algorithm is one whose running time grows in the same way as the input grows.

For example: 5! = 5 * 4 * 3 * 2 * 1 = 120

// -- EXAMPLE OF A FACTORIAL-TIME FUNCTION - BIG O(n!) --
function f(n) {
  if (n === 0) {
    console.log("*************");
    return;
  }
  for (let i = 0; i < n; i++) { // n recursive calls, each making (n - 1) more - Big O(n!)
    f(n - 1);
  }
}
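
The classic task that genuinely needs factorial time is generating every ordering (permutation) of a list, because there are n! of them; permutations below is a hypothetical helper written just for this illustration:

// -- GENERATING PERMUTATIONS - a classic Big O(n!) task --
function permutations(arr) {
  if (arr.length <= 1) return [arr];                        // base case - Big O(1)
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    const rest = arr.slice(0, i).concat(arr.slice(i + 1));
    for (const perm of permutations(rest)) {                // n branches, each with (n - 1)! results
      result.push([arr[i], ...perm]);
    }
  }
  return result;
}

console.log(permutations([1, 2, 3]).length); // 6, because 3! = 6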

Addendum

freeCodeCamp: Big O Notation

Well, this sums up everything I currently know about the Big O notation. I will drop more posts as I learn.

P.S. Please make sure you follow.
