Time Complexity: A Simple Explanation (with Code Examples)

Brahim Guaali
4 min read · Jun 1, 2023

Time complexity in computer science refers to a way of measuring how the execution time of an algorithm changes as the size of its input grows. It provides insights into the efficiency and scalability of algorithms. In this article, we will delve into various time complexities and their significance, using easy-to-understand explanations and Dart code examples.

Constant Time Complexity (O(1))

In algorithms with constant time complexity, the running time remains the same, regardless of the input size. It means the algorithm is highly efficient and does not slow down as the input grows. A typical example is accessing an element from an array using its index.

void printFirstElement(List<int> array) {
  if (array.isNotEmpty) {
    print(array[0]);
  } else {
    print("Array is empty!");
  }
}
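Another operation that is constant time on average is a hash-map lookup. This extra example is not from the original article; it uses Dart's built-in Map, whose lookups do not slow down as the map grows (assuming a reasonable hash distribution):

```dart
void main() {
  // Dart's default Map is hash-based, so key lookups are O(1) on average:
  // the cost of ages['bob'] does not depend on how many entries the map holds.
  final ages = {'alice': 30, 'bob': 25, 'carol': 41};
  print(ages['bob']); // prints 25
}
```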

Linear Time Complexity (O(n))

Algorithms with linear time complexity have a running time that grows linearly with the input size. As the input size increases, the execution time increases proportionally. A common example is iterating over each element in an array.

void printArrayElements(List<int> array) {
  for (int i = 0; i < array.length; i++) {
    print(array[i]);
  }
}

Quadratic Time Complexity (O(n²))

Quadratic time complexity occurs when the running time grows proportionally to the square of the input size. It is often observed in algorithms with nested loops, and the execution time increases rapidly with larger inputs. An example is the selection sort algorithm.

void selectionSort(List<int> array) {
  final length = array.length;
  for (int i = 0; i < length - 1; i++) {
    int minIndex = i;
    for (int j = i + 1; j < length; j++) {
      if (array[j] < array[minIndex]) {
        minIndex = j;
      }
    }
    if (minIndex != i) {
      int temp = array[i];
      array[i] = array[minIndex];
      array[minIndex] = temp;
    }
  }
}
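To see the quadratic growth concretely, we can count the comparisons the nested loops perform. The instrumented variant below is a sketch added for illustration: regardless of the input's order, selection sort always makes n(n − 1)/2 comparisons, so doubling n roughly quadruples the work.

```dart
int countSelectionSortComparisons(List<int> array) {
  int comparisons = 0;
  for (int i = 0; i < array.length - 1; i++) {
    for (int j = i + 1; j < array.length; j++) {
      comparisons++; // one comparison of array[j] against the current minimum
    }
  }
  return comparisons;
}

void main() {
  // n = 10 gives 10 * 9 / 2 = 45 comparisons; n = 20 gives 190.
  print(countSelectionSortComparisons(List.generate(10, (i) => i))); // 45
  print(countSelectionSortComparisons(List.generate(20, (i) => i))); // 190
}
```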

Cubic Time Complexity (O(n³))

Cubic time complexity refers to algorithms where the running time grows proportionally to the cube of the input size. These algorithms often involve three levels of iteration or computations, and they become highly inefficient for larger inputs.

void cubicAlgorithm(int n) {
  for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
      for (int k = 0; k < n; k++) {
        // Perform some computations
        print('Computation at iteration $i, $j, $k');
      }
    }
  }
}
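The loop above is synthetic. A classic algorithm that genuinely runs in O(n³) time is naive multiplication of two n × n matrices; this sketch is an addition to the article, not part of the original:

```dart
// Naive multiplication of two n x n matrices: three nested loops over n,
// giving n * n * n = n^3 multiply-add operations in total.
List<List<int>> multiply(List<List<int>> a, List<List<int>> b) {
  final n = a.length;
  final result = List.generate(n, (_) => List.filled(n, 0));
  for (int i = 0; i < n; i++) {       // n rows of the result
    for (int j = 0; j < n; j++) {     // n columns of the result
      for (int k = 0; k < n; k++) {   // n multiply-adds per cell
        result[i][j] += a[i][k] * b[k][j];
      }
    }
  }
  return result;
}

void main() {
  // Multiplying by the identity matrix leaves the input unchanged.
  print(multiply([[1, 2], [3, 4]], [[1, 0], [0, 1]])); // [[1, 2], [3, 4]]
}
```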

Binary Tree Traversal (O(n))

The time complexity of traversing a binary tree depends on the traversal type. In-order, pre-order, and post-order traversals all have a time complexity of O(n), where n is the number of nodes in the tree. These traversals visit each node once.

1 — In-order Traversal

// A minimal binary-tree node; the parameter is nullable (TreeNode?)
// so that empty subtrees can be represented by null.
class TreeNode {
  final int val;
  TreeNode? left, right;
  TreeNode(this.val, [this.left, this.right]);
}

void inOrderTraversal(TreeNode? node) {
  if (node == null) return;
  inOrderTraversal(node.left);
  print(node.val);
  inOrderTraversal(node.right);
}

2 — Pre-order Traversal

void preOrderTraversal(TreeNode? node) {
  if (node == null) return;
  print(node.val);
  preOrderTraversal(node.left);
  preOrderTraversal(node.right);
}

3 — Post-order Traversal

void postOrderTraversal(TreeNode? node) {
  if (node == null) return;
  postOrderTraversal(node.left);
  postOrderTraversal(node.right);
  print(node.val);
}
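To make the three orders concrete, here is a small self-contained demo. It assumes a TreeNode shape with val, left, and right fields (the article itself does not define the class), and collects the in-order visit into a list instead of printing:

```dart
class TreeNode {
  final int val;
  final TreeNode? left, right;
  TreeNode(this.val, [this.left, this.right]);
}

// Collects values in in-order (left subtree, node, right subtree).
List<int> inOrder(TreeNode? node) => node == null
    ? []
    : [...inOrder(node.left), node.val, ...inOrder(node.right)];

void main() {
  // Tree:   2
  //        / \
  //       1   3
  final root = TreeNode(2, TreeNode(1), TreeNode(3));

  // For a binary search tree, in-order yields the values in sorted order.
  print(inOrder(root)); // [1, 2, 3]
}
```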

Loglinear Time Complexity (O(n log n))

Loglinear time complexity, often denoted as O(n log n), is commonly observed in efficient sorting algorithms such as Merge Sort and, on average, QuickSort. It combines elements of linear (O(n)) and logarithmic (O(log n)) time complexities. As the input size increases, the running time grows at a rate proportional to n multiplied by the logarithm of n. This complexity is more efficient than quadratic time complexity (O(n²)), but not as efficient as linear time complexity (O(n)).

The n log n complexity typically arises when an algorithm halves the input a logarithmic number of times and performs a linear amount of work at each level of the division. Examples include divide-and-conquer algorithms that recursively split the input into halves, such as Merge Sort.

List<int> mergeSort(List<int> array) {
  if (array.length <= 1) {
    return array;
  }

  final mid = array.length ~/ 2;
  final left = array.sublist(0, mid);
  final right = array.sublist(mid);

  final sortedLeft = mergeSort(left);
  final sortedRight = mergeSort(right);

  return merge(sortedLeft, sortedRight);
}

// Merges two already-sorted lists into one sorted list in linear time.
List<int> merge(List<int> a, List<int> b) {
  final result = <int>[];
  int i = 0, j = 0;
  while (i < a.length && j < b.length) {
    result.add(a[i] <= b[j] ? a[i++] : b[j++]);
  }
  return result..addAll(a.sublist(i))..addAll(b.sublist(j));
}

In the above code snippet, the Merge Sort algorithm is used as an example. It recursively divides the input array into smaller halves until the base case of an array with one element is reached. Then, it merges and sorts the smaller arrays back together to produce the final sorted array. The divide-and-conquer approach in Merge Sort results in a time complexity of O(n log n).

Logarithmic Time Complexity (O(log n))

Logarithmic time complexity is often seen in algorithms that divide the input in half at each step, such as binary search. In these algorithms, the input size is effectively reduced by half with each iteration. As a result, the running time grows very slowly even for large inputs.

int binarySearch(List<int> sortedArray, int target) {
  int left = 0;
  int right = sortedArray.length - 1;

  while (left <= right) {
    int mid = left + ((right - left) ~/ 2);

    if (sortedArray[mid] == target) {
      return mid;
    } else if (sortedArray[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }

  return -1;
}

In the example above, the binary search algorithm divides the search space in half repeatedly until the target element is found or the search space is exhausted. This results in a time complexity of O(log n), where n is the size of the input array.
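One way to appreciate how slowly O(log n) grows is to count loop iterations. The instrumented sketch below (added for illustration, same algorithm as above) returns the number of halvings instead of the index; for one million elements it never needs more than about 20:

```dart
// Same binary search as above, but returns how many loop iterations
// (halvings of the search space) were needed.
int binarySearchSteps(List<int> sortedArray, int target) {
  int left = 0, right = sortedArray.length - 1, steps = 0;
  while (left <= right) {
    steps++;
    final mid = left + ((right - left) ~/ 2);
    if (sortedArray[mid] == target) return steps;
    if (sortedArray[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return steps;
}

void main() {
  // 2^20 > 1,000,000, so at most ~20 halvings are ever needed.
  final data = List.generate(1000000, (i) => i);
  print(binarySearchSteps(data, 0)); // far fewer than 1,000,000 iterations
}
```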
