Understanding time and space complexity with Big O
One of the most foundational concepts in programming is time and space complexity, and we express these ideas with Big O notation. Let’s get the fancy words out of the way so we can start to unwrap this mystery. Time complexity simply refers to the time a program takes to complete its task, space complexity refers to the amount of memory the program uses to complete its task, and Big O notation is nothing more than a scale we use to tell which program has the better time or space complexity.
Big O
The most intimidating part of this article is probably the Big O notation itself, so let’s tackle that first. As I mentioned, Big O notation is nothing more than a scale or benchmark used to label the time and space complexity of a program or algorithm. You might be wondering what’s the deal with it being called “Big O”. It is just a clever way of saying the order of complexity of a function, program, or algorithm, which in layman’s terms means it’s a notation used to label complexity.
The next logical question to ask would be: how does Big O notation compare the complexities of different algorithms? Big O relates the input size to the number of operations the algorithm needs to complete its task. So you can probably guess that, for a fixed input size, if algorithm 1 does fewer operations than algorithm 2 (for the same task), then algorithm 1 is faster and more efficient than algorithm 2. In the real world, though, input sizes are rarely fixed, so it is more important to understand how an algorithm behaves as the input size grows: two algorithms might perform the same for an input size of 100 but have drastically different efficiency when the input size is 100,000,000. This is exactly what the graph above represents. There you can see some of the most common growth trends that algorithms follow, where the function inside O(“function”) represents the y-axis and “n” represents the x-axis.
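To make that growth difference concrete, here is a minimal illustrative sketch (the function names are my own, not from any library) that counts the operations a linear and a quadratic algorithm would perform at a small and a large input size:

```python
def linear_ops(n):
    """An O(n) algorithm does roughly n operations for input size n."""
    return n

def quadratic_ops(n):
    """An O(n^2) algorithm does roughly n * n operations."""
    return n * n

# At n = 100 the gap is a factor of 100; at n = 100,000,000
# the quadratic algorithm needs 100 million times more work.
for n in (100, 100_000_000):
    print(f"n={n}: linear={linear_ops(n)}, quadratic={quadratic_ops(n)}")
```

The point is not the exact counts but the trend: the quadratic curve pulls away from the linear one faster and faster as n grows.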
Time Complexity
The time complexity of an algorithm describes how its running time grows as the input size increases. Once again, when we write programs our main concern is how they will scale with a large input size, as this directly impacts the time our program takes to complete a given task. Poorly designed algorithms can cost a company millions due to unsatisfied customers and computing costs. So as developers, respecting the time complexity of our programs is one of the most important parts of the job, and finding the most efficient solution to the problem is our goal!
Let us take a quick look at an example of time complexity in an algorithm. Loops are a very useful tool when programming: they help us iterate over data structures and access data. The time complexity of a single loop over an array is O(n), which means that to visit all n elements we perform one pass of the loop per element, so for n elements we do n operations. Now let’s think about a situation where there is a loop inside a loop. In this case the time complexity is O(n²), meaning that for n elements we need to perform roughly n squared operations (definitely not a scalable solution).
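Here is a small sketch of both cases (the functions are illustrative examples I chose, not part of the article): a single loop that is O(n), and a nested loop that is O(n²).

```python
def contains(items, target):
    """O(n): a single loop visits each of the n elements at most once."""
    for item in items:            # up to n iterations
        if item == target:
            return True
    return False

def has_duplicate(items):
    """O(n^2): a loop inside a loop makes roughly n * n comparisons."""
    for i in range(len(items)):           # n iterations
        for j in range(i + 1, len(items)):  # up to n iterations each
            if items[i] == items[j]:
                return True
    return False
```

For a list of 10 elements the difference is negligible, but `has_duplicate` on a list of 100,000 elements can make on the order of billions of comparisons, which is exactly the scaling problem Big O warns us about.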
Space Complexity
The space complexity of an algorithm is the amount of memory required to complete the given task. An important point to realize as a programmer is that we don’t have infinite memory, so just like an algorithm's time complexity, we also need to respect its space complexity. Let's continue our earlier example with the single loop: here we are using the loop to fill an array with “n” elements, so our space complexity is O(n), as the newly created array is going to take up n slots in memory.
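That loop-filling example can be sketched like this (again an illustrative function of my own, assuming a plain list as the "array"):

```python
def build_array(n):
    """O(n) space: the result list grows to hold n elements."""
    result = []                   # new list; its memory grows with n
    for i in range(n):
        result.append(i)          # one stored element per iteration
    return result
```

Note that the loop itself only needs O(1) extra space (the counter `i`); it is the array we are filling that accounts for the O(n) memory.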