Time Complexity & JavaScript

Rachel Andersen
Published in The Startup · 5 min read · May 18, 2020


What is time complexity?

When creating a computer program, it is important to consider how much time the algorithms you write will take to run, so that you can save computing time and power and write efficient programs. The time required to perform an algorithm is its time complexity. Time complexity is described using Big O notation, where n is the size of the input and O represents the worst-case growth rate of the running time. Though there are many types of time complexity, in this post I will go through the most commonly seen types:

  • Constant —O(1)
  • Linear — O(n)
  • Logarithmic — O(log n)
  • Linearithmic — O(n log n)
  • Quadratic — O(n²)
  • Polynomial — O(n^c)
  • Exponential — O(2^n)
  • Factorial — O(n!)

Constant Time — O(1)

Red line denotes Constant O(1) time complexity

Constant time is denoted by O(1): the algorithm takes the same amount of time to run regardless of the size of the input n. This means that whether n is 5 or 7,000, the time to process the algorithm will be the same.

Examples: finding whether a number is even or odd, printing the first item of a list, checking whether an item in an array is equal to a certain value

Function that prints the first element of an array to the console
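
A minimal sketch of such a function (the name printFirstElement is my own):

    function printFirstElement(arr) {
      // One array access; the cost is the same whether arr holds 5 items or 7,000.
      console.log(arr[0]);
    }

    printFirstElement([3, 1, 4, 1, 5]); // logs 3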

Linear Time — O(n)

Linear time complexity occurs when the time an algorithm takes to run grows in direct proportion to the size of its input n. In our example below, we will find the smallest number in an unsorted array (in a sorted array, the smallest element would simply be the first one, which takes constant time).

Yellow line denotes linear O(n) time complexity

Example:

finding the smallest element in an unsorted array
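
A minimal sketch (findSmallest is an illustrative name):

    function findSmallest(arr) {
      let smallest = arr[0];
      // Every element must be visited once, so the work grows in
      // direct proportion to arr.length.
      for (let i = 1; i < arr.length; i++) {
        if (arr[i] < smallest) smallest = arr[i];
      }
      return smallest;
    }

    findSmallest([9, 2, 7, 4]); // 2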

Logarithmic Time — O(log n)

Red line denotes logarithmic O(log n) time complexity

Logarithmic time complexity results when an algorithm reduces the size of the input it works on at each step, e.g., by halving it at each iteration of a loop. On the graph, the curve rises steeply while n is small, but it flattens out and grows less and less steeply as n increases.

Examples:

finding the log of n, finding the index of an element in a sorted array with a binary search

Function finding the index of an element in an array using a binary search (code courtesy of adrianmejia.com — see references below)
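
A sketch along the same lines (not adrianmejia.com's exact code):

    function binarySearch(arr, target) {
      let low = 0;
      let high = arr.length - 1;
      while (low <= high) {
        // Each pass halves the portion of the array still under
        // consideration, which is what produces O(log n).
        const mid = Math.floor((low + high) / 2);
        if (arr[mid] === target) return mid;
        if (arr[mid] < target) low = mid + 1;
        else high = mid - 1;
      }
      return -1; // target not present
    }

    binarySearch([1, 3, 5, 7, 9, 11], 9); // 4
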
Function finding the log of n
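
And a sketch of the log function, assuming it counts how many times n can be halved rather than calling Math.log2:

    function log2(n) {
      // Count how many times n can be halved before it reaches 1;
      // the loop body runs roughly log2(n) times.
      let count = 0;
      while (n > 1) {
        n = Math.floor(n / 2);
        count++;
      }
      return count;
    }

    log2(8); // 3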

Linearithmic Time — O(n log n)

Purple line denotes linearithmic O(n log n) time complexity

Linearithmic time complexity, denoted by the purple line in the graph, is almost linear, though it grows slightly faster than linear as n increases. Algorithms that create a linearithmic time complexity pattern have a growth rate of O(n log n).

Examples:

sorting elements in an array using a merge sort

Merge Sort Function (code courtesy of adrianmejia.com — see references below)
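
A sketch of the algorithm (again, not the original adrianmejia.com code):

    function mergeSort(arr) {
      if (arr.length <= 1) return arr;
      // Splitting in half gives log n levels; merging each level back
      // together costs linear time, hence O(n log n) overall.
      const mid = Math.floor(arr.length / 2);
      return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
    }

    function merge(left, right) {
      const result = [];
      let i = 0;
      let j = 0;
      while (i < left.length && j < right.length) {
        result.push(left[i] <= right[j] ? left[i++] : right[j++]);
      }
      // One side is exhausted; append whatever remains on the other.
      return result.concat(left.slice(i), right.slice(j));
    }

    mergeSort([5, 2, 9, 1, 7]); // [1, 2, 5, 7, 9]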

Quadratic Time — O(n²)

Red line denotes quadratic O(n²) time complexity

A quadratic time complexity pattern is created when the running time of an algorithm grows in proportion to n². This effect is often created by nested for loops. In the example below, the for loop contains an if statement that calls indexOf on the array. Since the indexOf method implements a loop internally, the example below is essentially a nested for loop. When determining time complexity, therefore, remember that built-in array methods like indexOf also implement loops internally; don't just check to see whether two for loops are present.

Example:

finding duplicate elements in an array using a for loop and indexOf

Function finding whether there are duplicate elements in an array
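
A sketch of that pattern (hasDuplicates is my own name):

    function hasDuplicates(arr) {
      for (let i = 0; i < arr.length; i++) {
        // indexOf scans the array from the start, a hidden inner loop,
        // so the total work is on the order of n * n.
        if (arr.indexOf(arr[i]) !== i) return true; // an earlier copy exists
      }
      return false;
    }

    hasDuplicates([1, 2, 3, 2]); // true
    hasDuplicates([1, 2, 3]);    // false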

Polynomial Time — O(n^c)

Cubic time complexity denoted by yellow line

While quadratic time falls under the umbrella of polynomial time (its c value is 2), polynomial time complexity refers to any algorithm whose running time grows at a rate of n^c for some constant c. In the example below, we will consider cubic time complexity, O(n³), as it is more common than n raised to any higher power. The example below contains a triple-nested loop.

Example:

3 variable equation solver — triple nested for loops

Function using a triple nested for loop
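
A sketch; the specific equation x + y + z = n is just an illustrative stand-in:

    function findSolutions(n) {
      // Brute-force every combination of x, y, and z between 0 and n:
      // three nested loops, so roughly n * n * n iterations.
      const solutions = [];
      for (let x = 0; x <= n; x++) {
        for (let y = 0; y <= n; y++) {
          for (let z = 0; z <= n; z++) {
            if (x + y + z === n) solutions.push([x, y, z]);
          }
        }
      }
      return solutions;
    }

    findSolutions(2); // [[0, 0, 2], [0, 1, 1], [0, 2, 0], [1, 0, 1], [1, 1, 0], [2, 0, 0]]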

Exponential Time — O(2^n)

Orange line denotes exponential O(2^n) time complexity

Algorithms that create an exponential time complexity pattern have running times that grow at a rate of 2^n. Many examples I found involve recursive functions, so keep an eye out for recursion when you are determining time complexity patterns.

Example:

Using recursion to generate the nth number in a Fibonacci sequence, finding all subsets in a set

Function using recursion to generate the nth number in a Fibonacci sequence
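
A minimal sketch:

    function fibonacci(n) {
      if (n < 2) return n;
      // Each call spawns two more calls, so the call tree roughly
      // doubles at every level, bounded by O(2^n).
      return fibonacci(n - 1) + fibonacci(n - 2);
    }

    fibonacci(10); // 55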

Factorial Time — O(n!)

Algorithms that create a factorial time complexity pattern have running times that grow at a rate of n!. A factorial is the product of all positive integers from 1 up to a given number (e.g., 5! is 5*4*3*2*1 = 120). Note that computing n! itself takes only O(n) time; it is algorithms whose number of steps grows like n!, such as generating every permutation of a set, that fall into this class.

Example:

finding all permutations of a given set/string

Function finding all permutations of a string
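
A sketch using recursion (permutations is my own name):

    function permutations(str) {
      if (str.length <= 1) return [str];
      const result = [];
      for (let i = 0; i < str.length; i++) {
        // Fix one character, then permute the rest: n choices here,
        // n - 1 at the next level, and so on, for n! results in total.
        const rest = str.slice(0, i) + str.slice(i + 1);
        for (const perm of permutations(rest)) {
          result.push(str[i] + perm);
        }
      }
      return result;
    }

    permutations('abc'); // ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']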

Conclusion

Time complexity is important to consider when working as a software engineer. How you build your algorithms heavily impacts the processing time your program needs. In the chart below, each time complexity we discussed is rated from "horrible" to "excellent" in terms of processing time.

Time Complexity Efficiency
Time Complexity for Common Data Structure Operations
Time Complexity for Array Sorting Algorithms

References:
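
Adrian Mejia — adrianmejia.com (credited above for the binary search and merge sort examples)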
