Time Complexity

Ethan Hess
4 min read · Oct 8, 2018


Time complexity is a concept in computer science that quantifies the amount of time a piece of code or an algorithm takes to run as a function of the size of its input. In other words, time complexity is essentially a measure of efficiency: how long a function takes to process a given input. The time complexity of an algorithm is most commonly expressed using Big O notation.

To give a quick recap of the most common Big O notations, each has a name corresponding to the notation itself: constant is O(1), logarithmic is O(log n), linear is O(n), quadratic is O(n²), and exponential is O(2ⁿ). Big O strips away constant factors so that the running time can be estimated in relation to N as N approaches infinity.

statement;

In general, you can think of it like this: say we have this single statement. Its time complexity will always be constant, because the running time of the statement does not change in relation to N. Arrays are a good example: looking up or assigning an element is constant time, because all we need to do is supply an index and the array gives us whatever is at that position, without having to look through the entire array.
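To make the array example concrete, here is a minimal sketch (the array contents and variable names are just an illustration):

```javascript
const temperatures = [68, 72, 75, 71, 69];

// Index lookup is O(1): the engine jumps straight to the slot,
// no matter how many elements the array holds.
const third = temperatures[2]; // 75

// Assignment by index is constant time as well.
temperatures[4] = 70;
```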

for (let i = 0; i < N; i++) {
   statement;
}

Next, let's say we have this loop with a statement inside it. The time complexity of this loop is linear, because its running time is directly proportional to N: if N doubles, so does the running time. Back to arrays: if we want to find an element by value, or insert or remove one at an arbitrary position, we may have to walk through the entire length of the array to do so.
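The "find" case can be sketched as a simple linear search (the helper name is mine, not from the article):

```javascript
// O(n): in the worst case we inspect every element before
// finding the target, or before concluding it's absent.
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i; // found: return the index
    }
  }
  return -1; // scanned all N elements without a match
}
```

Doubling the array length doubles the worst-case number of comparisons, which is exactly the linear behavior described above.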

for (let i = 0; i < N; i++) {
   for (let j = 0; j < N; j++) {
      statement;
   }
}

Now we have two nested loops. This time the time complexity is quadratic, because the running time is proportional to the square of N: when N doubles, the running time roughly quadruples.

function insertionSort(items) {
   for (var i = 0; i < items.length; i++) {
      let value = items[i];
      for (var j = i - 1; j > -1 && items[j] > value; j--) {
         items[j + 1] = items[j];
      }
      items[j + 1] = value;
   }
   return items;
}

const list = [54, 26, 93, 17, 77, 31, 44, 55, 20];
console.log(insertionSort(list)); // [17, 20, 26, 31, 44, 54, 55, 77, 93]

A perfect example of an algorithm with quadratic time complexity is insertion sort. As we can see, we have an outer for loop and an inner for loop. On each pass of the outer loop, the inner loop scans and shifts elements of the already-sorted subsection of the array before inserting the next element; in the worst case it scans that entire subsection. This is what gives insertion sort its quadratic running time, O(n²).
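One way to see the quadratic growth is to count the shifts insertion sort performs on its worst-case input, a reverse-sorted array. This instrumented variant is my own sketch, not part of the article:

```javascript
// Counts the element shifts insertion sort makes on a
// reverse-sorted array of length n. The worst case performs
// n * (n - 1) / 2 shifts, which is O(n^2).
function worstCaseShifts(n) {
  const items = [];
  for (let i = n; i > 0; i--) items.push(i); // [n, n-1, ..., 1]

  let shifts = 0;
  for (let i = 1; i < items.length; i++) {
    const value = items[i];
    let j = i - 1;
    while (j > -1 && items[j] > value) {
      items[j + 1] = items[j]; // shift one element right
      shifts++;
      j--;
    }
    items[j + 1] = value;
  }
  return shifts;
}
```

For n = 10 this does 45 shifts; for n = 20 it does 190, roughly four times as many, matching the "double N, quadruple the work" rule above.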

function binarySearch(arr, target) {
   let left = 0;
   let right = arr.length - 1;
   while (left <= right) {
      const mid = left + Math.floor((right - left) / 2);
      if (arr[mid] === target) {
         return mid;
      }
      if (arr[mid] < target) {
         left = mid + 1;
      } else {
         right = mid - 1;
      }
   }
   return -1;
}

Here is a code snippet for a binary search algorithm. Right off the bat we can see that this algorithm has logarithmic time complexity, meaning its running time is proportional to the number of times N can be divided by 2. To go a little more in depth: binary search begins by comparing the middle element of a sorted array with the target value. If the target matches the middle element, its position in the array is returned. If the target is less than the middle element, the search continues in the lower half of the array; if it is greater, the search continues in the upper half. Each iteration eliminates the half in which the target cannot lie, giving us logarithmic time complexity.
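The phrase "the number of times N can be divided by 2" can be demonstrated directly with a small counter (my own sketch, not from the article):

```javascript
// How many times can n be halved before reaching 1? That count is
// the worst-case number of probes binary search makes: O(log n).
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}
```

An array of 1,024 elements needs at most about 10 probes, and a million-element array only about 20; doubling the input adds just one more step.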

var bruteForcePassword = function (max) {
   var alphabet = 'abcdefghijklmnopqrstuvwxyz';
   var findPassword = function (attempt) {
      if (attempt.length > 0) {
         console.log('Trying ' + attempt);
      }
      if (attempt.length < max) {
         for (var i = 0; i < alphabet.length; i++) {
            findPassword(attempt.concat(alphabet[i]));
         }
      }
   };
   findPassword('');
};

Next is an example of an algorithm with exponential time complexity. Exponential time usually shows up in situations where you don't know much about the structure of the problem and have to try every possible combination or permutation. In this case we work through the entire alphabet, trying every possible string of letters to guess the correct password. Each additional character multiplies the number of attempts by 26, so the running time grows exponentially with the password length.
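The growth rate is easy to tally. Assuming the brute-forcer tries every string of length 1 through max, the attempt count is a geometric sum (this counting helper is mine, not from the article):

```javascript
// Total strings tried over a 26-letter alphabet for lengths
// 1..max: 26 + 26^2 + ... + 26^max — growth on the order of 26^n.
function totalAttempts(max) {
  let total = 0;
  for (let len = 1; len <= max; len++) {
    total += Math.pow(26, len);
  }
  return total;
}
```

One character means 26 attempts, two characters 702, three already 18,278: adding a single character multiplies the work by roughly 26, the signature of exponential growth.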

In conclusion, remember: when you have several routes to solving a problem, it is wise to build a solution that simply works first. In the long run, though, you'll want a solution that runs as quickly and efficiently as possible. And now that we have an idea of how to figure out time complexities, here are some guidelines for doing just that:

1. Does this solve the problem? Yes → Step 2; No → keep trying.
2. Does it cover all edge cases? Yes → Step 3; No → keep trying.
3. Is my worst-case time complexity as low as possible? Yes → Step 4; No → back to Step 1.
4. Rejoice!
