What are these notations “O(n), O(n log n), O(n²)” in the time complexity of an algorithm?

Burak KOCAK
5 min read · Feb 1, 2023


[Figure: time complexity chart]

1. Constant time — O(1)

2. Linear time — O(n)

3. Logarithmic time — O(log n)

4. Quadratic time — O(n²)

5. Cubic time — O(n³)

The notation O(1) is used to describe the time complexity of an algorithm, which is a measure of how the running time of the algorithm grows with the size of the input. The letter “O” stands for “order”, referring to the order of growth of the running time.

For example, if an algorithm has a time complexity of O(1), it means that the running time of the algorithm does not grow with the size of the input. Instead, the running time remains constant regardless of the size of the input.

Algorithms with a time complexity of O(1) are often referred to as constant-time algorithms and are the most efficient in terms of running time: they take the same amount of time to execute no matter how big or small the input is. Examples include basic arithmetic operations, accessing an element of an array at a fixed index, and checking whether an element is present in a hash table (constant time on average).
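These constant-time examples can be sketched in Python (the function names here are illustrative):

```python
def get_first(items):
    # Indexing a list at a fixed position is O(1):
    # the cost does not depend on len(items).
    return items[0]

def is_present(table, key):
    # A dict (hash table) membership test is O(1) on average.
    return key in table

print(get_first([10, 20, 30]))             # 10
print(is_present({"a": 1, "b": 2}, "b"))   # True
```

Whether the list holds three elements or three million, each call does the same amount of work.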

It’s important to note that the exact running time of an algorithm can vary based on various factors such as the implementation details, the distribution of the input data, and the specific inputs used. The time complexity provides a rough upper bound on the running time of the algorithm and can be used to compare the efficiency of different algorithms.

The notation O(n) describes linear time. If an algorithm has a time complexity of O(n), its running time is proportional to the size of the input, represented by “n”: the running time grows linearly, so if the size of the input is doubled, the running time roughly doubles as well.
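A single pass over the input is the classic linear-time pattern; a minimal Python sketch (the function name is illustrative):

```python
def find_max(items):
    """Return the largest element with one scan over the list: O(n)."""
    largest = items[0]
    for value in items:   # visits each of the n elements exactly once
        if value > largest:
            largest = value
    return largest

print(find_max([3, 7, 2, 9, 4]))  # 9
```

Doubling the length of the list doubles the number of loop iterations, which is exactly what linear growth means.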

The time complexity of an algorithm is a crucial factor in determining the efficiency of the algorithm, and it is an important consideration when choosing an algorithm for a particular problem. Algorithms with a lower time complexity are generally considered to be more efficient and faster than algorithms with a higher time complexity.

The notation O(n log n) describes what is sometimes called linearithmic time. If an algorithm has a time complexity of O(n log n), its running time grows in proportion to the size of the input multiplied by the logarithm of that size. So doubling the input size slightly more than doubles the running time.

Algorithms with a time complexity of O(n log n) are generally considered efficient and fast, especially for large inputs. Well-known sorting algorithms such as merge sort and heapsort run in O(n log n); quicksort matches this on average, although its worst case is O(n²). (Binary search, by contrast, is even faster, at O(log n).)
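Merge sort shows where the n log n bound comes from: the list is halved about log n times, and each level of recursion does O(n) work merging. A self-contained Python sketch:

```python
def merge_sort(items):
    """Sort by splitting in half (~log n levels) and merging (O(n) per level)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # append whatever remains of either half
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

Multiplying O(n) merge work by the O(log n) depth of the recursion gives the O(n log n) total.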

The notation O(n²) describes quadratic time. If an algorithm has a time complexity of O(n²), its running time grows in proportion to the square of the size of the input: doubling the input size roughly quadruples the running time.

Algorithms with a time complexity of O(n²) can become very slow for large inputs and are generally considered to be less efficient than algorithms with a lower time complexity. For example, the simple algorithm of comparing each pair of elements in an array to check for duplicates has a time complexity of O(n²).
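The pairwise duplicate check described above can be sketched in Python:

```python
def has_duplicates(items):
    """Compare every pair of elements: n * (n - 1) / 2 comparisons, so O(n^2)."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):  # each element vs. every later one
            if items[i] == items[j]:
                return True
    return False

print(has_duplicates([1, 2, 3, 2]))  # True
print(has_duplicates([1, 2, 3]))     # False
```

The nested loops are the telltale sign: for each of the n elements, up to n − 1 others are inspected. (Using a hash set instead would bring this down to O(n) on average.)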

The notation O(n³) describes cubic time. If an algorithm has a time complexity of O(n³), its running time grows in proportion to the cube of the size of the input: doubling the input size increases the running time roughly eightfold.

Algorithms with a time complexity of O(n³) can become very slow for large inputs and are generally considered to be less efficient than algorithms with a lower time complexity. For example, the algorithm for solving a system of linear equations using Gaussian elimination has a time complexity of O(n³), where n is the number of variables in the system.
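A minimal sketch of Gaussian elimination in Python (assuming nonzero pivots and no partial pivoting) shows where the three nested loops, and hence the O(n³) cost, come from:

```python
def solve_linear_system(a, b):
    """Solve A x = b by Gaussian elimination plus back substitution.

    The elimination phase runs three nested loops over n, giving O(n^3).
    Sketch only: assumes every pivot is nonzero (no row swapping).
    """
    n = len(a)
    # Augmented matrix [A | b], copied so the inputs are not modified.
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):                   # loop 1: each pivot column
        for row in range(col + 1, n):      # loop 2: each row below the pivot
            factor = m[row][col] / m[col][col]
            for k in range(col, n + 1):    # loop 3: each remaining column
                m[row][k] -= factor * m[col][k]
    # Back substitution is O(n^2), dominated by the elimination above.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(m[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (m[row][n] - s) / m[row][row]
    return x

# 2x + y = 3 and x + 3y = 5  =>  x = 0.8, y = 1.4
print(solve_linear_system([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))
```

The three nested loops each range over up to n values, which is exactly the n · n · n structure that the O(n³) bound captures.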
