Time and Space Complexity (Big O)

ezgi akça
7 min read · Apr 10, 2022


Hello everyone, today I am going to talk about the time and space complexity of algorithms: what Big O notation represents, and how it can help us compare different algorithms and the quality of their code. Grab your coffee and meet me here :)

Complexity analysis is a core topic of data structures and algorithms. In programming, one problem can have multiple solutions. Consider the problem of searching for a value in an array, which can be solved in many different ways: there are various searching algorithms, for example linear search, binary search, B-tree search, etc. Do these methods perform the same? No. Each of these approaches can solve the same problem, but one can be better than the others because each has a different time and space complexity. While time complexity refers to how fast an algorithm completes its job, space complexity concerns how much memory it allocates. So we can say that good code is fast and requires little memory to complete its operation. If your code deals with thousands or millions of user records, time complexity might become an issue that needs attention.

So, what is the metric to measure the complexity of different algorithms?

When we measure how fast a program is, we cannot rely on wall-clock time, because a given code snippet can run fast on a supercomputer while the same operation takes much longer on old hardware, so the metric shouldn't depend on the underlying hardware. That is why we use the number of steps an operation takes as the measure of a code's speed. If code snippet X takes 10 steps and code snippet Y takes 100 steps, we can assume X will be faster than Y under all circumstances.

Big O Notation

The complexity analysis of an algorithm is specified by Big O notation in computer science. We use Big O to describe the time or memory required for the execution of an operation, so we can compare how efficient algorithms and data structures are relative to one another. There are different complexity levels for a program, as you can see in Figure 1.

Figure 1. Big O Complexity Levels

These complexities of an algorithm or data structure depend only on its input size n. For instance, think of a program that calculates the sum of n numbers. When n is 10, this program runs 10 steps. If n is 1000, this snippet performs 1000 operations to complete its job. It is really important to see how an algorithm's complexity grows when n changes. The growth can be constant, logarithmic, linear, quadratic, exponential, factorial, etc. To determine the Big O notation, we count the number of operations relative to the number of inputs.
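As a small sketch of this idea (my own illustration, not from the article's figures), here is a summation whose step count grows linearly with the input size:

```python
def sum_n(numbers):
    """Adds up every element: one addition per element, so n steps for n elements."""
    total = 0
    for value in numbers:
        total += value  # executed once per element
    return total

print(sum_n(list(range(1, 11))))    # 10 elements -> 10 additions
print(sum_n(list(range(1, 1001))))  # 1000 elements -> 1000 additions
```

Doubling the list doubles the number of additions, which is exactly the linear growth described above.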

O(N) Complexity

This is the most common complexity type; we typically see it in programs that search, print, or compute over lists. To analyze the complexity of a code snippet, we have to count the operations it performs relative to the number of inputs it takes.

Figure 2. Searching a fruits list
Figure 3. Searching a fruits list (element not found)

In Figures 2 and 3, we have a fruits list and we are trying to find a value in this array. Each item in the list is compared to the fruit variable, which is "apple". Even if "apple" is the first element, this for loop keeps running until it has checked whether every element in the list is "apple" or not, so the loop runs as many times as there are items in the array. As a result, the complexity of this snippet is O(n), where n refers to the number of elements. When n grows, the complexity grows linearly.
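The figures are screenshots, so here is a minimal sketch of the kind of loop they likely show (the list contents and variable names are my assumptions):

```python
def contains_fruit(fruits, target):
    """Linear search without an early exit: the loop always checks all n elements."""
    found = False
    for fruit in fruits:
        if fruit == target:  # this comparison runs once per element: O(n)
            found = True
    return found

fruits = ["apple", "banana", "cherry", "mango"]
print(contains_fruit(fruits, "apple"))  # True, but only after checking all 4 items
```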

O(1) Complexity

This type of complexity represents constant time. For instance, in Figure 4, we print the first element of the fruits list.

Figure 4. Big O(1)

Whether the fruits list contains 1 element or 100, the displayFruit function prints just the very first element, so it does not depend on the number of elements (n) in the list. Even as n grows, the number of steps stays at 1. This is the most atomic, and the best, time complexity.
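Figure 4 is an image, so here is a hypothetical sketch of what a displayFruit function like that might look like (the name comes from the text; the body is my guess):

```python
def display_fruit(fruits):
    """Prints only the first element: a single step no matter how long the list is, O(1)."""
    first = fruits[0]  # index access is one constant-time step
    print(first)
    return first

display_fruit(["apple", "banana"])        # one step
display_fruit(["apple"] + ["kiwi"] * 99)  # 100 elements, still one step
```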

Figure 5. Big O(1)

If we print two fruits in the function, each operation takes one atomic step, so the operation takes O(2) time in raw steps. But when we state the time complexity, we don't write O(1), O(2), O(15), etc., because all of these are constant; instead, we use O(1) for any constant time complexity. As a result, the complexity of the code in Figure 5 is O(1).
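A sketch of the two-print version (again my reconstruction, since Figure 5 is a screenshot):

```python
def display_two_fruits(fruits):
    """Two constant-time prints: 2 raw steps, which still simplifies to O(1)."""
    print(fruits[0])  # 1 step
    print(fruits[1])  # 1 step -> 2 steps total, independent of len(fruits)
    return fruits[0], fruits[1]

display_two_fruits(["apple", "banana", "cherry"])
```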

Calculating Time Complexity

In this section, we will follow the code snippet line by line and try to calculate its complexity.

Figure 6. Calculating Time Complexity

In the first line, we create a fruits list; this is an assignment performed once, so its complexity is O(1). In the following lines we assign the first element to first_fruit, 0 to total, and an empty list to addition_list; all of these are also assignments, each O(1). Next comes a for loop, which iterates once per item in the fruits array, so the first and second lines inside the loop each contribute O(n). The remaining print and return statements take O(1) each. Consequently, the raw step count of this code comes out as O(2n+6).
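Figure 6 is an image; a hypothetical reconstruction of that walkthrough (the names first_fruit, total, and addition_list come from the text, the rest is assumed), with the step counts as comments:

```python
def analyze():
    fruits = ["apple", "banana", "cherry"]   # O(1): one assignment
    first_fruit = fruits[0]                  # O(1)
    total = 0                                # O(1)
    addition_list = []                       # O(1)
    for fruit in fruits:                     # body runs n times
        total += 1                           # n steps over the whole loop
        addition_list.append(fruit)          # n steps over the whole loop
    print(first_fruit)                       # O(1)
    return addition_list                     # O(1)
    # raw step count: 4 + 2n + 2 = 2n + 6 -> O(n) after simplification

print(analyze())
```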

Simplifying Big O Notation

O(2n+6) was the raw step count for the example above, but the final result is O(n) for this scenario. The complexity of a program can be expressed by different equations like n^2 + 2n + 5, n^3 + n^2, etc. Therefore we apply some rules to simplify the result and state the complexity clearly. Here are three rules for simplifying Big O complexity.

1-) Scalability: We should focus on how the complexity changes as n (the number of inputs) tends toward ∞. Consider the complexity O(2n+6): when the number of inputs n is a million, the total is 2 million + 6, and the 6 has a truly insignificant effect compared to the millions, so we can drop the constants from the equation. Eventually, the complexity is O(n) for this example.
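A quick numeric check of this rule (my own illustration): as n grows, the ratio of 2n + 6 to n settles at 2, so the +6 contributes effectively nothing.

```python
# Compare the raw step count 2n + 6 with plain n as n grows:
# the constant term's share of the total shrinks toward zero.
for n in (10, 1_000, 1_000_000):
    exact = 2 * n + 6
    print(n, exact, exact / n)  # the ratio approaches 2; the 6 fades out
```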

2-) Worst Case Scenario: We should focus on the worst-case scenario for a code snippet. For instance, in Figure 2, the element we search for ("apple") is the very first element, so an if condition with an early return could terminate the loop immediately, and the complexity might look like O(1). But what if the element we search for is the last one, or the list doesn't contain it at all, as in Figure 3? In the worst case, every element in the array must be visited, and that is why the time complexity is stated as O(n).
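To make the best-vs-worst-case contrast concrete, here is a sketch (my own example) of a search with an early return:

```python
def find_fruit(fruits, target):
    """Returns the index of target, stopping as soon as it is found."""
    for index, fruit in enumerate(fruits):
        if fruit == target:
            return index  # best case: target is first, one comparison, O(1)
    return -1             # worst case: all n elements visited, O(n)

fruits = ["apple", "banana", "cherry"]
print(find_fruit(fruits, "apple"))  # 0: found immediately (best case)
print(find_fruit(fruits, "kiwi"))   # -1: the whole list was scanned (worst case)
```

Big O describes the worst case, so this function is still stated as O(n) despite the lucky O(1) path.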

3-) Remove All Constants: We should remove all constant terms from the equation, in line with rule number 1, scalability.

O(N^2) Complexity

If the code contains nested loops, it exhibits quadratic time complexity. If you take a look at Figure 7, the inner loop runs n times (7 in this example) for each item in the list, so print(num1, num2) and total += 1 each run n * n times (49 times here). If you calculate the complexity of the numbers function, the raw step count can be expressed as O(1 + n^2 + n^2 + 1); applying the simplification rules gives O(2n^2 + 2) -> O(2n^2) -> O(n^2). The time complexity is O(n^2), which is called quadratic time complexity.

Figure 7. O(N^2) Quadratic Time Complexity
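Since Figure 7 is a screenshot, here is a hypothetical reconstruction of the numbers function it describes (names num1, num2, and total come from the text; the rest is assumed):

```python
def numbers(nums):
    total = 0                 # O(1)
    for num1 in nums:         # outer loop: n iterations
        for num2 in nums:     # inner loop: n iterations per outer pass
            print(num1, num2) # runs n * n times
            total += 1        # runs n * n times
    return total              # O(1)
    # raw step count: 2n^2 + 2 -> O(n^2)

print(numbers(list(range(1, 8))))  # 7 elements -> 49 inner iterations
```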

Last of all

Time complexity reflects your code's quality: whether the code you develop is clunky, or a clean and scalable snippet. The importance of complexity analysis becomes clear when you manage millions or trillions of records. We should keep the time complexity rules in mind and choose the most efficient algorithm for each particular problem.

Thank you very much, happy readings :).
