Big O notation in Swift

What is big O notation?

Vladimir Buikliskii
4 min read · Jan 24, 2023

Big O notation is the language computer scientists use when comparing the performance of algorithms. In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.

Dominant operations.

We determine an algorithm's Big O by looking at the worst-case performance of its dominant operations, the operations that run the most times and therefore dominate the total cost.
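
For example, here is a minimal sketch (the function name is illustrative): the function below does both constant-time work and a loop, and it is the loop, the dominant operation, that determines its Big O.

func dominantOperations(_ A: [Int]) -> Int {
    var sum = 0          // O(1): a simple declaration
    for value in A {     // O(n): the dominant operation
        sum += value
    }
    return sum           // O(1) + O(n) + O(1) simplifies to O(n)
}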

Constant time — O(1)

func constTime(_ n: Int) -> Int {
    let result = n * n
    return result
}

Algorithms without a loop (such as for-in), or algorithms that simply return the result of a simple calculation, run in “constant time”, or O(1). This means the run time does not grow with the input size: the operation takes the same amount of time no matter how large n is.

Linear time — O(n)

func linTime(_ A: [Int]) -> Int {
    for i in 0..<A.count {
        if A[i] == 0 {
            return 0
        }
    }
    return 1
}

Once the performance of an algorithm depends on the size of the input being passed in, we can no longer say it runs in “constant time”. If the run time grows as a straight line with input size, we call it “linear time”: the time it takes is directly proportional to the size of the input.

Even though the loop can return immediately if the first element of the array is zero, when evaluating Big O we always consider worst-case performance. The algorithm is still O(n), even though its best case is O(1).

Logarithmic time — O(log n)

func logTime(_ N: Int) -> Int {
    var n = N
    var result = 0
    while n > 1 {
        n /= 2       // halve the remaining work on every pass
        result += 1
    }
    return result
}

Algorithms such as binary search are very fast because they halve the remaining search space at every step. This repeated halving is logarithmic, which we write as “O(log n)”.
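
As a sketch of how that halving works in practice (a minimal illustration, not production code), here is a classic binary search over a sorted array:

func binarySearch(_ sorted: [Int], for target: Int) -> Int? {
    var low = 0
    var high = sorted.count - 1
    while low <= high {
        let mid = low + (high - low) / 2 // midpoint, written to avoid overflow
        if sorted[mid] == target {
            return mid                   // found it
        } else if sorted[mid] < target {
            low = mid + 1                // discard the lower half
        } else {
            high = mid - 1               // discard the upper half
        }
    }
    return nil                           // not in the array
}
binarySearch([1, 3, 5, 7, 9], for: 7) // Optional(3)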

Quadratic time — O(n²)

func quadraticTime(_ n: Int) -> Int {
    var result = 0
    for i in 0..<n {
        for j in i..<n {
            result += 1
            print("\(i) \(j)")
        }
    }
    return result
}

When you nest one for-in loop inside another, the algorithm becomes quadratic, which can slow things down a lot. Quadratic algorithms still produce the right answer; they are just not the most efficient way to get it.
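
In this particular example the inner loop starts at i, so the body runs n + (n − 1) + … + 1 = n(n + 1)/2 times; dropping the constant factor and the lower-order term still leaves O(n²).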

A graph of run time against input size helps compare the performance of these algorithms:

Algorithms whose curves stay near the bottom of such a graph (O(1), O(log n)) are considered very good. Linear time, O(n), is not bad either. Above that come the quadratic and exponential curves: as we add more and more elements, the performance of those algorithms degrades rapidly, in the quadratic case as the square of the input size.
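
To put rough numbers on those curves (base-2 logarithms, rounded), here is approximately how many operations each class performs as the input grows:

n            O(log n)    O(n)         O(n²)
10           ~3          10           100
1,000        ~10         1,000        1,000,000
1,000,000    ~20         1,000,000    1,000,000,000,000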

Memory costs.

So far, everything we have considered has had to do with time. And when we talk about Big O, we usually mean time complexity. But there is another side of the coin: memory. We evaluate memory cost the same way we evaluate time, in the sense that when estimating the memory cost of an algorithm, we look at how many variables it declares and how their cost grows. Simple variable declarations are O(1), whereas arrays and other data structures take space proportional to their size, or O(n).
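
As a minimal sketch (the function names are illustrative), compare a function that uses a fixed amount of memory with one whose memory grows with its input:

func constantSpace(_ n: Int) -> Int {
    var sum = 0              // one variable, regardless of n: O(1) space
    for i in 0..<n {
        sum += i
    }
    return sum
}

func linearSpace(_ n: Int) -> [Int] {
    var result = [Int]()     // grows to n elements: O(n) space
    for i in 0..<n {
        result.append(i)
    }
    return result
}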

Trading off time for space.

One of the biggest improvements we can make to an algorithm is to trade time for space. Take, for example, a typical job interview question:

Given two arrays, create a function that lets a user know whether these two arrays contain any common items.

// Brute force O(n^2)
func commonItemsBrute(_ A: [Int], _ B: [Int]) -> Bool {
    for i in 0..<A.count {
        for j in 0..<B.count {
            if A[i] == B[j] {
                return true
            }
        }
    }
    return false
}
commonItemsBrute([1, 2, 3], [4, 5, 6]) // false
commonItemsBrute([1, 2, 3], [3, 5, 6]) // true

On the other hand, if we spend a little memory, we can improve the time: build a hash map from one array, then use it to look up elements of the other quickly.

// Convert to hash and look up via the other array
func commonItemsHash(_ A: [Int], _ B: [Int]) -> Bool {

    // Still looping... but not nested: O(n) + O(n) = O(2n), which is O(n), vs O(n^2)
    var hashA = [Int: Bool]()
    for a in A { // O(n)
        hashA[a] = true
    }
    // Now look up in the hash to see if any element of B exists there
    for b in B { // O(n)
        if hashA[b] == true {
            return true
        }
    }
    return false
}
commonItemsHash([1, 2, 3], [4, 5, 6]) // false
commonItemsHash([1, 2, 3], [3, 5, 6]) // true

This is a textbook example of trading memory for time. The first version required no extra space but was very slow. By spending a bit of memory (an extra hash map), we gained a lot of time and ended up with a much faster algorithm, O(n).
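
Incidentally, Swift’s standard library lets us express the same trade more compactly. This sketch builds a Set (O(n) extra space, O(1) average-time lookups) instead of a dictionary:

// Same trade-off using Set: building the set is O(n),
// and isDisjoint(with:) then scans B with O(1) average lookups
func commonItemsSet(_ A: [Int], _ B: [Int]) -> Bool {
    return !Set(A).isDisjoint(with: B)
}
commonItemsSet([1, 2, 3], [4, 5, 6]) // false
commonItemsSet([1, 2, 3], [3, 5, 6]) // true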
