“A firework light painting shot with wire wool taken in front of a building marked in graffiti” by Jeff Fielitz on Unsplash

1.1 Big-O

Yohan Hyunsung Yi
Journey to Tech Company
4 min read · Apr 30, 2018

--

Big-O is a method for analyzing an algorithm's resource usage. The resources can be running time, memory, storage, network bandwidth, and so on, but in interviews the resource of interest is almost always running time. Actual running time depends on the execution environment, so instead of measuring it directly, we count how many times operations are executed and express that count as a function of the input size. As the amount of data, n, grows toward infinity, the time complexity is expressed by the growth rate of the running time. Big-O is neither the only method of analysis nor the best one, but it is relatively simple and independent of the execution environment, which is why it is the most widely used.

  • A technique for expressing how many times the operations in an algorithm are executed
  • Only the highest-order term is kept
  • It is therefore enough to count how many times the most frequently executed operation or statement runs

§ O(1) -Constant Time Complexity

Constant time is required regardless of n; in this case the time complexity is O(1).

func sample(_ data: [Int], _ n: Int) -> Int {
    // Array indexing takes the same time no matter how large n is.
    let k = n / 2
    return data[k]
}

§ O(logN) -Logarithmic Time Complexity

In logN, the base of the logarithm is usually written as a subscript; here it means log₂N. An algorithm with O(logN) time complexity solves the problem by repeatedly cutting the amount of data in half. For example, if N is initially 16, the data shrinks to 8, 4, 2, and 1 on successive steps; the number of steps, 4, is exactly log₂16. Problems solved entirely by an O(logN) algorithm are uncommon, but the pattern appears frequently as a building block inside larger algorithms.
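The halving pattern is easiest to see in binary search over a sorted array. This example is not from the original article; it is a minimal sketch of the technique:

```swift
// Binary search sketch: each comparison halves the remaining range,
// so the loop runs at most about log2(N) + 1 times.
func binarySearch(_ data: [Int], _ target: Int) -> Int? {
    var low = 0
    var high = data.count - 1
    while low <= high {
        let mid = (low + high) / 2
        if data[mid] == target {
            return mid            // found: return the index
        } else if data[mid] < target {
            low = mid + 1         // discard the lower half
        } else {
            high = mid - 1        // discard the upper half
        }
    }
    return nil                    // not found
}
```

For N = 16 the loop body runs at most 5 times, matching the 16 → 8 → 4 → 2 → 1 halving described above.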

§ O(N) -Linear Time Complexity

When N pieces of data are given, simply reading them once already takes O(N) time. Conversely, if an algorithm's time complexity is O(N), it solves the problem in roughly the time needed to read the input once. Such an algorithm is called a "scanning algorithm," and its execution time is "linear." In most cases an O(N) algorithm is the ideal we aim for: unless the problem can be solved without reading all of the input, as with O(1) or O(logN), no algorithm can be faster than O(N).

The most frequently executed statement in this code is the body of the for loop, which always runs n times. When the most frequently executed statement runs n times, the total execution count of the whole function, and therefore its running time, is linearly proportional to n.

func sum(_ data: [Int], _ n: Int) -> Int {
    var sum = 0
    // The loop body runs exactly n times.
    for i in 0..<n {
        sum = sum + data[i]
    }
    return sum
}

§ O(NlogN) -Log-Linear Time Complexity

This running time typically comes from algorithms that break the problem down into smaller subproblems and do O(N) work at each of the logN levels. You will see this time complexity in many efficient algorithms. Because logN is much smaller than N, O(NlogN) is not much slower than O(N).
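Merge sort is a classic instance of this divide-and-conquer pattern. The sketch below is not from the original article; it splits the array logN times and does O(N) merging work per level:

```swift
// Merge sort sketch: the array is halved log2(N) times, and each
// level merges in O(N) time, giving O(N log N) overall.
func mergeSort(_ data: [Int]) -> [Int] {
    guard data.count > 1 else { return data }
    let mid = data.count / 2
    let left = mergeSort(Array(data[..<mid]))
    let right = mergeSort(Array(data[mid...]))
    // Merge the two sorted halves in linear time.
    var merged: [Int] = []
    var i = 0, j = 0
    while i < left.count && j < right.count {
        if left[i] <= right[j] {
            merged.append(left[i]); i += 1
        } else {
            merged.append(right[j]); j += 1
        }
    }
    merged.append(contentsOf: left[i...])
    merged.append(contentsOf: right[j...])
    return merged
}
```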

§ O(N²) -Quadratic Time Complexity

O(N²) is one of the most common time complexities we encounter. Pairing N items with each other produces N × (N−1) = N² − N cases, so O(N²) algorithms appear often. The most typical example is using a double for loop to compare every pair of numbers, as in a simple sort.

The most frequently executed statement in this algorithm is inside the double for loop; in the worst case it runs n(n−1)/2 times, so the worst-case time complexity is O(n²).

func isDistinct(_ data: [Int], _ n: Int) -> Bool {
    // Compare every pair (i, j) with i < j:
    // n(n-1)/2 comparisons in the worst case.
    for i in 0..<n-1 {
        for j in i+1..<n {
            if data[i] == data[j] {
                return false
            }
        }
    }
    return true
}

§ O(N³) -Cubic Time Complexity

An algorithm that uses a triple nested loop shows "cubic" execution time, O(N³).
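A standard example of a triple loop is naive multiplication of two N × N matrices. This sketch is an illustration added here, not from the original article:

```swift
// Naive N x N matrix multiplication: three nested loops over N,
// so the innermost statement runs N * N * N = N^3 times.
func multiply(_ a: [[Int]], _ b: [[Int]], _ n: Int) -> [[Int]] {
    var c = Array(repeating: Array(repeating: 0, count: n), count: n)
    for i in 0..<n {
        for j in 0..<n {
            for k in 0..<n {
                c[i][j] += a[i][k] * b[k][j]
            }
        }
    }
    return c
}
```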

§ O(2^N) -Exponential Time Complexity

If log₂X = N, then X = 2^N; in this sense O(2^N) is the opposite of O(logN). Where an O(logN) algorithm halves the number of cases at each step, an O(2^N) algorithm doubles them every time N increases by one. The most common example is coins: the number of ways to line up N coins, each showing heads or tails, is 2^N.

When we devise an O(2^N) algorithm, we typically have to consider every possible situation by choosing, for each of the N pieces of data, whether or not to select it.
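That choose-or-skip pattern can be sketched as a subset enumerator. This example is an illustration added here, not from the original article:

```swift
// Enumerate all subsets by deciding, for each element in turn,
// whether to include it or not: 2 choices per element -> 2^N subsets.
func subsets(_ data: [Int]) -> [[Int]] {
    guard let first = data.first else { return [[]] }
    let rest = subsets(Array(data.dropFirst()))
    // Every subset of the remaining elements appears twice:
    // once without `first` and once with it.
    return rest + rest.map { [first] + $0 }
}
```

For N = 3 this produces 2³ = 8 subsets; adding one more element doubles the count, which is exactly the exponential growth described above.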

Exponential time complexities such as O(2^N) are extremely slow and inefficient. An O(2^N) algorithm usually cannot produce an answer within a typical time limit once N exceeds 20 to 30. Writing and running a small O(2^N) program makes it easy to appreciate how large the total count becomes each time the exponent increases by one.

§ O(N^N) -Factorial Time Complexity

O(N^N) is the worst time complexity we are likely to encounter. Both N^N and N! belong to O(N^N), since N! ≤ N^N. Even when N is only slightly over 10, the answer cannot be computed in time. This time complexity occurs when you need to examine every possible ordering, that is, all N! permutations, of N pieces of data.
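The factorial growth comes from having N choices for the first position, N−1 for the second, and so on. A minimal permutation generator, added here as an illustration and not from the original article:

```swift
// Generate all orderings of the input: N choices for the first slot,
// N-1 for the next, and so on -> N! permutations in total.
func permutations(_ data: [Int]) -> [[Int]] {
    guard data.count > 1 else { return [data] }
    var result: [[Int]] = []
    for (i, x) in data.enumerated() {
        var rest = data
        rest.remove(at: i)          // fix x in front, permute the rest
        for p in permutations(rest) {
            result.append([x] + p)
        }
    }
    return result
}
```

With N = 10 this already produces 3,628,800 orderings, which is why such algorithms fail even for small inputs.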
