# Time Complexities Of Common Array Operations In JavaScript

Often we perceive JavaScript as just a lightweight programming language that runs in the browser, and so we neglect performance optimisations. While in many cases that works just fine, it can become very expensive in several scenarios. The takeaway from this post should not be just memorising some Time Complexities but also thinking about performance in general when writing JavaScript. Before we start, if you do not have at least a basic understanding of Time Complexity and Big O Notation, I highly suggest that you look them up and learn about them a bit before continuing with this post.

Let’s make a simple array first.

`const arr = ['A', 'B', 'C', 'D', 'E', 'F'];`

# Accessing Or Editing Elements

Now let’s say we want to access ‘C’, which is the 3rd value in the array. Since we already know the index of the value, we can just do

`arr[2]`

and we will get what we need. What do you think happens under the hood when we do that? Does JavaScript keep going through the array element after element until it finds the value that has an index of 2? Nope! Think of the indices as addresses of these elements in memory. Since we have the address of ‘C’, which is index 2, we can retrieve it directly without having to go through anything else. So that means accessing a value of an array has **Constant Time Complexity**, which we can write as **O(1)**. It doesn’t matter how many values an array has, it will take us the same time (approximately) to access an element if we know its index. Editing an element, like

`arr[2] = 'G';`

is also **O(1)**, since we do not need to modify any element other than the concerned element.

# Searching For An Element Linearly

We now want to do the exact opposite of what we did above. We already know the value of an element but we want to find the index of it. We can use the ES6 *Array.findIndex()* method to do this but for now we’ll stick to *Array.indexOf()*.

`arr.indexOf('C'); // returns 2`

What do you think happens there? You might think that we know the address of ‘C’ and so we can just go there and find its index. But that’s not true, because the index itself is the address of the element. So the only way to find the index of ‘C’ is to go through the array, starting from the first element, until we find an element with the value ‘C’. This operation therefore has a **Linear Time Complexity** and can be written as **O(n)**. Now you may argue that we don’t necessarily have to go through the entire array but only until the 3rd element. So shouldn’t it be **O(n/2)** instead? Well, no: in Big O Notation we drop constant factors like the 1/2 and keep only the dominant term, so **O(n/2)** is simply **O(n)**. When n gets big enough, the impact of constant factors and lower-order terms becomes insignificant.

# Adding Or Removing An Element

Let’s start with adding. The most common ways I can think of to add an element to an existing array are the *Array.push()* method, which adds an element at the end of an array, and the *Array.unshift()* method, which adds an element at the beginning. You may think they work the same way and so should have the same Time Complexity. That’s however not true. *Array.push()* has a **Constant Time Complexity** and so is **O(1)**. All it does is add an element and give it an index that’s 1 greater than the index of the last element in the array. So it doesn’t matter whether the array has 10 elements or 1000; the number of operations that needs to be performed won’t change.

The same however cannot be said about *Array.unshift()*. Adding an element at the beginning of an array means the new element will have an index of 0, which means the index of every other element must be incremented by 1. So *Array.unshift()* has a **Linear Time Complexity** and is **O(n)**.

The *Array.pop()* and *Array.shift()* methods, which are used to remove an element from the end and the beginning of an array respectively, work similarly: *Array.pop()* is **O(1)** while *Array.shift()* is **O(n)**. We can also use the *Array.splice()* method to remove and/or insert elements at any position in an array. When we use this method, the number of indices that need to change depends on which index you splice at, but the worst case, splicing at the very start, is **O(n)**.

# Some Other Common Array Methods

Methods like *Array.filter()*, *Array.map()*, *Array.reduce()* and *Array.forEach()* always go through the entire array and so have **Linear Time Complexity O(n)**. *Array.find()* and *Array.findIndex()* can stop as soon as they hit a match, but in the worst case they too visit every element, so they are also **O(n)**. Lastly, I want to talk a little bit about the *Array.concat()* method.

`const arr1 = ['A', 'B', 'C', 'D', 'E', 'F'];`

`const arr2 = arr1.concat('G');`

In the case above, the arr1 array gets copied with an additional element with value ‘G’. So the Big O for this would be **O(n).**

`const arr3 = arr1.concat(arr2);`

In this case however, the JavaScript interpreter has to go through both arr1 and arr2 entirely to return a new array with all their values combined. Since the lengths of the two arrays aren’t equal, the Big O for this is technically **O(n + m)**, where n is arr1’s length and m is arr2’s.

# Conclusion

Obviously I didn’t cover every single Array method, but I think that after reading this post, you will be able to figure out the **Time Complexities** of most Array methods. More importantly, I want you to consider performance more often when writing JavaScript. Sometimes we tend to sacrifice performance to make our code look a little cleaner without realizing it. I ran into such a scenario myself not too long ago, working on an Uber-like app where I had to make a map display the locations of various cars in realtime.