Time Complexity, Part Two

Prasenjit D Banik
Aug 22, 2017 · 4 min read

In Part 1 I tried to explain the two simplest complexities: constant and quadratic time. I left out two other time complexities common to many algorithms: O(log(n)) and O(n log(n)).

In this part we will talk about O(log(n)) complexity and one of its most recurring applications: binary search. If you want to know what a binary search algorithm looks like in real life, chances are you have already seen it. One of the most common implementations of binary search is the fast autocomplete on your phone.

I came across a very easy-to-understand, yet elegant, explanation of log(n) in a Stack Overflow answer by John Feminella:

The most common attributes of a logarithmic running-time function are that:

the choice of the next element on which to perform some action is one of several possibilities, and

only one will need to be chosen.

or

the elements on which the action is performed are digits of n

This is why, for example, looking up people in a phone book is O(log n). You don’t need to check every person in the phone book to find the right one; instead, you can simply divide and conquer by looking based on where their name is alphabetically, and you only need to explore a subset of each section before you eventually find someone’s phone number.
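To get a feel for what O(log n) means in practice, here is a small sketch (the helper `halving_steps` is my own illustration, not from the answer above). It counts how many times n can be cut in half before only one element is left, which is roughly the number of lookups a phone-book-style search needs.

```ruby
# Count how many times n can be halved before reaching 1 --
# roughly the number of steps a logarithmic algorithm takes.
def halving_steps(n)
  steps = 0
  while n > 1
    n /= 2
    steps += 1
  end
  steps
end

puts halving_steps(1_000)      # => 9
puts halving_steps(1_000_000)  # => 19
```

Note how multiplying the input by a thousand only roughly doubles the number of steps.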

Visually, the search space gets cut in half at every step.

Let’s imagine we have an array n. Usually when we search, our program loads and scans the entire input. It’s quick and dirty, and it saves a lot of development time when our input size is small. But what if our array n holds a few million integers? What if it’s even bigger? How do we search through that without running out of memory?

Easy- we divide and conquer.

  • We compare the middle element of the (sorted) array to the element we are looking for.
  • If the middle element is larger than our target, we discard the upper half and repeat the comparison on the lower half; if it is smaller, we discard the lower half and repeat on the upper half.
  • We continue this process until the element is found or we are left with an empty array, which means the element is not there.
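To make those steps concrete, here is a small trace (my own illustration, not the article’s code) of searching for 11 in a sorted seven-element array. Notice the window shrinks by half on every comparison:

```ruby
# Watch the search window shrink while looking for 11.
arr = [1, 3, 5, 7, 9, 11, 13]
target = 11
min, max = 0, arr.length - 1

until min > max
  mid = (min + max) / 2
  puts "window #{min}..#{max}, checking arr[#{mid}] = #{arr[mid]}"
  break if arr[mid] == target
  arr[mid] < target ? min = mid + 1 : max = mid - 1
end
# Prints the window 0..6, then 4..6 -- only two comparisons needed.
```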

For simplicity, let’s first code this iteratively in Ruby:

def binary_search(original_array, element_im_looking_for)
  max_index = original_array.length - 1
  min_index = 0

  while max_index >= min_index
    # Check the middle of the current window.
    midpoint = (max_index + min_index) / 2

    if original_array[midpoint] > element_im_looking_for
      # Target is in the lower half; discard the upper half.
      max_index = midpoint - 1
    elsif original_array[midpoint] < element_im_looking_for
      # Target is in the upper half; discard the lower half.
      min_index = midpoint + 1
    else
      # Found it -- return its index.
      return midpoint
    end
  end

  nil
end

This algorithm is very fast because we break our initial, very large problem into very small chunks.

arr = Array.new(5000) { rand 10000 }
sorted_arr = arr.sort
puts binary_search(sorted_arr, 50)

When the element 50 is present, our algorithm prints the index in the array where it was found. It doesn’t find the element on every run, because my test fills the array with random numbers and 50 isn’t always among them.

Recursively it looks like this-

def binary_search_recursively(orig_arr, search_element,
                              min_index = 0, max_index = orig_arr.length - 1)
  # Base case: the window is empty, so the element isn't there.
  return nil if min_index > max_index

  midpoint = (max_index + min_index) / 2

  if orig_arr[midpoint] > search_element
    # Recurse on the lower half.
    binary_search_recursively(orig_arr, search_element,
                              min_index, midpoint - 1)
  elsif orig_arr[midpoint] < search_element
    # Recurse on the upper half.
    binary_search_recursively(orig_arr, search_element,
                              midpoint + 1, max_index)
  else
    midpoint
  end
end

When I ran the same test, it again returned the index where our target element was seeded (or nil on runs where 50 wasn’t generated).

As I wrote before, binary search has a worst-case time complexity of O(log(n)), which is very fast and efficient for large sets of data. Ruby also ships with a built-in binary search method, Array#bsearch.
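Here is a quick sketch of Array#bsearch in its find-minimum mode, where the block returns true for the target and everything after it (the sample array is just an illustration):

```ruby
sorted = [1, 4, 4, 7, 9, 12]

# Find-minimum mode: returns the first element for which the block is true.
p sorted.bsearch { |x| x >= 7 }    # => 7
p sorted.bsearch { |x| x >= 100 }  # => nil (no element qualifies)
```

Like our hand-rolled version, it assumes the array is sorted; on an unsorted array the result is undefined.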

We all know about Ruby’s find method, which runs in linear time, O(n). That’s really fast for smaller problem sizes, or when the search target sits near the beginning of the array. But even though O(log(n)) divide and conquer takes many steps, for a very large problem size it blows the linear-time find method out of the water, especially when the target element is closer to the end.

Want to see how fast binary search really is? Let’s look at a benchmark:

require 'benchmark'

range = (0..100_000_000)

Benchmark.bm do |x|
  x.report(:find)    { range.find    { |number| number > 90_000_000 } }
  x.report(:bsearch) { range.bsearch { |number| number > 90_000_000 } }
end

In the results, find has to walk through 90 million numbers one by one, while bsearch returns almost instantly.

Let’s run the benchmark on a range of 500 million numbers!

require 'benchmark'

range = (0..500_000_000)

Benchmark.bm do |x|
  x.report(:find)    { range.find    { |number| number > 450_000_000 } }
  x.report(:bsearch) { range.bsearch { |number| number > 450_000_000 } }
end

That’s the power of log(n) time complexity.
