The “Big O Notation” In Simple Terms.
Introduction To Time Complexity In Algorithms
Time complexity measures how efficiently an algorithm solves a problem. We can use the “Big O Notation” to describe that efficiency: it is based on the runtime of an algorithm and expresses its complexity and performance. Big O describes how an algorithm scales as the input grows, giving us a way to classify algorithms by their response to changes in input size. By knowing the efficiency of an algorithm, developers can build better applications.
We are going to look at how runtime scales as the data size grows. You will get the same or similar result no matter what programming language or hardware is used. Below are some of the more common Big O classes. The coding examples are written in Python.
Steps to determine the time complexity:
- Identify the fastest-growing term and drop the less significant terms
- Drop the constants, also called the coefficients
For example, T(n) = 3n² + 5n + 2 simplifies to O(n²): the fastest-growing term is 3n², the lower-order terms 5n + 2 are dropped, and so is the coefficient 3.
4 Common “O” Notations:
Constant (“Order of 1”)
O(1) — Executes in the same amount of time no matter the size of the data or the list. It requires only one operation to complete. The 1 is not literally one; it stands for any fixed constant.
T = c * 1 = O(1)
X = ['A', 'B', 'C', 'D']
X[3] = 'D'
In the given example, we have a list with n = 4 elements. Suppose we want to find the 4th element, which is indexed as X[3]; all that needs to be done is read index 3, which stores the data value 'D'.
The search time does not change with the size of the data set. When a defined list with values is already set, it requires just one operation to read a data value. It takes constant time no matter what you retrieve, because we know exactly where to look.
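As a minimal sketch of constant-time access (the million-element list is a made-up illustration, not from the original example):

```python
# Indexing a Python list is O(1): one lookup, regardless of length.
X = ['A', 'B', 'C', 'D']
print(X[3])  # reads 'D' in a single step

# The list can be a million elements long; a lookup is still one step.
Y = list(range(1_000_000))
print(Y[999_999])
```

Whether the list has 4 elements or a million, reading one index costs the same.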
A practical example of a constant is a vending machine. The machine provides you with options, each with a value assigned to it. These values are constants that do not change and do not require processing. You simply choose the option you want and the machine dispenses the product you need.
print("Please select from the following items:")
print("1 - Potato Chips")
print("2 - Pretzels")
print("3 - Nachos")
print("4 - Popcorn")

optNum = input("Type number: ")
var = int(optNum)

if var == 1:
    print("You have chosen Potato Chips")
elif var == 2:
    print("You have chosen Pretzels")
elif var == 3:
    print("You have chosen Nachos")
elif var == 4:
    print("You have chosen Popcorn")
else:
    print("Invalid Option, try again")
If you follow the code above closely, you will see that it implements a simple option switch. The value 1 will always map to the constant “Potato Chips” as specified in the options. All it takes is to make that selection and the operation is complete.
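The same switch could be sketched as a dictionary lookup, which is a single constant-time operation (a hypothetical variant, not part of the original code):

```python
# A hypothetical menu mapping; dict lookup by key is O(1) on average.
menu = {1: "Potato Chips", 2: "Pretzels", 3: "Nachos", 4: "Popcorn"}

choice = 3  # assume the user typed 3
print("You have chosen", menu.get(choice, "Invalid Option, try again"))
```

The chain of elif branches and the dict lookup both take a fixed number of steps no matter how many options exist in the menu.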
Linear (“Order of N”)
O(N) — The time to complete is directly proportional to the number of items: N elements require N iterations to complete.
T = aN + b = O(N)
X = ['A', 'B', 'D', 'E', 'F']
X = ['A', 'B', 'C', 'D', 'E', 'F']
As you can see, when we add the letter C in its proper position in the list, we had to move D, E and F one location to the right. Each shifted element moves from X[n] to X[n+1]: D moves from X[2] to X[3], E from X[3] to X[4], and F from X[4] to X[5]. C then takes position X[2]. This required 3 additional operations for moving D, E and F.
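The shift described above can be sketched with Python's list.insert, which moves every later element one slot to the right (an O(n) operation in the worst case):

```python
X = ['A', 'B', 'D', 'E', 'F']
X.insert(2, 'C')  # D, E and F each shift one position to the right
print(X)  # ['A', 'B', 'C', 'D', 'E', 'F']
```

The longer the list, the more elements may need to shift, which is why insertion into an arbitrary position grows linearly with the list size.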
Y = ['apple', 'banana', 'mango', 'grapes', 'pineapple', 'durian']
In this example we want to search for a target in a list of 6 elements. If target = “pineapple”, the search takes a total of 5 tries, from apple to pineapple, before it finds the target in the list.
The linear search example can be summarized as:
- Set the index i to 0 and let n be the total length of the list.
- If Y[i] equals x (the target), the search terminates successfully; return i.
- Otherwise, increase i by 1.
- If i < n, go to step 2. Otherwise, the search terminates unsuccessfully: the target is not in the list and the loop stops once i = n.
def search(Y, n, x):
    for i in range(0, n):
        if Y[i] == x:
            return i
    return -1

Y = ['apple', 'banana', 'mango', 'grapes', 'pineapple', 'durian']
x = "pineapple"
n = len(Y)

result = search(Y, n, x)
if result == -1:
    print("Element is not present in the list")
else:
    print("Element", x, "is present at index", result)
In the code above, when it finishes executing it prints the element's position by index along with the value of our target x: “Element pineapple is present at index 4”. If we change the target to “strawberry”, the code still goes over every element in the list and prints “Element is not present in the list”.
Quadratic (“Order of N Squared”)
O(N²) — Time grows as the square of the number of N elements. It is quadratic in nature, meaning the run time is proportional to the square of the amount of data: doubling the input roughly quadruples the time.
T = n*(n+1)/2 = (n² + n)/2 = O(N²)
We want to sort an array in numerical order.
X = [21, 5, 3, 18, 30, 2, 8] -> X = [2, 3, 5, 8, 18, 21, 30]
First, X[0] is compared with X[1] and the two are swapped if they are out of order; then X[1] is compared with X[2], and so on up to X[n-1], where n is the number of elements in the list. After one full pass, the largest element has moved to the end. The process then repeats until the entire list is in numerical order. The algorithm must make multiple passes, comparing adjacent indices and swapping those that are out of order. This is rather inefficient when dealing with large sets of data, and it serves as our worst-case time complexity example in O notation.
X = [21, 5, 3, 18, 30, 2, 8]

def bubbleSort(X):
    n = len(X)
    for i in range(n):
        for j in range(0, n - i - 1):
            if X[j] > X[j + 1]:
                X[j], X[j + 1] = X[j + 1], X[j]

bubbleSort(X)
print("This is the sorted list: ")
for i in range(len(X)):
    print(X[i])
You can also just sort the list using the built-in sort() method. This requires much less code and achieves the same result.
X.sort()
print(X)  # [2, 3, 5, 8, 18, 21, 30]
One thing to remember is that as the number of elements in your list increases, bubble sort's performance only gets worse.
Logarithmic (“Order of Log N”)
O(Log N) — On a logarithmic time scale, doubling the size of the input data set has little effect on runtime. Each iteration halves the remaining input, so it is efficient for handling large sets.
T = Log₂ N = O(Log N)
X = [10, 15, 35, 42, 60, 70, 82, 94]
If we want to set 60 as the target, the search goes to the middle of the list and begins there. The middle is position X[3], which is 42. If the target 60 > X[3], eliminate the numbers below it in the list; if the target 60 < X[3], eliminate searches on the last part of the list. In this example everything up to X[3] is eliminated, leaving the search to continue in the upper half, where X[4] holds 60, the target we are looking for.
def binary_search(X, item):
    first = 0
    last = len(X) - 1
    found = False
    while first <= last and not found:
        mid = (first + last) // 2
        if X[mid] == item:
            found = True
            print("The element", item, "was found at index", mid)
        elif item < X[mid]:
            last = mid - 1
        else:
            first = mid + 1
    return found

print(binary_search([10, 15, 35, 42, 60, 70, 82, 94], 60))
This technique has obvious advantages. If we had to inspect each element one at a time starting from index 0, as in a linear search, it would take much longer. Likewise, bubble sorting the list and then running a linear search afterwards to find the target would be the most inefficient approach. Logarithmic or binary search saves time and is much faster when dealing with a large list of data.
These different approaches let us measure the efficiency of an algorithm. We can observe that as data gets larger and larger, execution time grows. Even a faster processor won't make inefficient code fast. Writing efficient code still matters, especially when performance can improve productivity and reduce costs.
From the graph we can see that the complexity increases with more elements added since it increases the number of operations to run.
O(1) is constant, no matter how many elements are added the resulting operation will always be the same time.
O(N) is linear, it increases operations as the number of elements increases in direct proportion.
O(Log N) is logarithmic: each step halves the remaining input, so it requires far fewer operations than O(N) for the same number of elements.
O(N²), or quadratic time, shows the worst case here: operations increase steeply as the number of elements grows.
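To make the comparison concrete, here is a small sketch that prints rough operation counts for each class at a few made-up input sizes:

```python
import math

# Rough operation counts for each complexity class at sample sizes n.
for n in [8, 64, 1024]:
    print(f"n={n:5d}  O(1)=1  O(log n)={int(math.log2(n)):3d}  "
          f"O(n)={n:5d}  O(n^2)={n * n:8d}")
```

Even at n = 1024, the logarithmic column is only 10, while the quadratic column has already passed a million operations.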
Coding examples can be found on my GitHub page at