Log(n) Algorithm for “Rotated” Arrays

A “rotated” array

Pictured above is an array whose elements have been presorted in lexical order and then shifted, also called rotated, by S spaces. This preserves the number of elements in the array (let's call that number N). If an element had index i before the shift, its index after the shift is (i + S) mod N. (Note that the actual ordering doesn't matter, so long as it is a total order.)
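To make the index mapping concrete, here is a small sketch; `rotate` is a hypothetical helper of my own naming, not something from the post:

```javascript
// Shift a sorted array by s spaces using the (i + s) mod n index mapping.
// rotate is a hypothetical helper for illustration.
function rotate(arr, s) {
  var n = arr.length;
  var out = new Array(n);
  for (var i = 0; i < n; i++) {
    out[(i + s) % n] = arr[i]; // the element at old index i lands at (i + s) mod n
  }
  return out;
}

rotate([1, 2, 3, 4, 5], 2); // → [4, 5, 1, 2, 3]
```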

Our goal is to find the new index, i*, of an element x matching a search term from the array. We want an algorithm of order O(log(n)). Now, since we know the array has been shifted, it is tempting to go looking for the one pair of adjacent elements that is out of order, start our search from wherever that may be, and proceed from there. That point tells us exactly how far the array has been shifted. That is, we could solve for S first. The problem, of course, is that this scan is O(n) at worst, and we might as well have looked through every element.
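For completeness, here is a sketch of that tempting-but-slow linear scan; `findShift` is a hypothetical name of my own, not from the post:

```javascript
// Linearly scan for the one adjacent pair that is out of order.
// The index just past that pair equals the shift S. O(n) at worst.
// findShift is a hypothetical helper for illustration.
function findShift(rotated) {
  for (var i = 0; i < rotated.length - 1; i++) {
    if (rotated[i] > rotated[i + 1]) {
      return i + 1; // the pivot index equals the shift S
    }
  }
  return 0; // never out of order: the array was not shifted
}

findShift([4, 5, 1, 2, 3]); // → 2
```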

The solution is, as it usually is for O(log(n)) algorithms, to opt for a divide and conquer approach. My Hack Reactor Remote cohort-mate Steven Tsao has a nice JS implementation:

function rotatedArraySearch(rotated, target) {
  var minInd = 0;
  var maxInd = rotated.length - 1;

  while (minInd <= maxInd) {
    var midInd = Math.floor((minInd + maxInd) / 2);
    if (target === rotated[midInd]) {
      return midInd;
    }

    if (rotated[midInd] <= rotated[maxInd]) {
      // the right half is sorted: if the target does not fall
      // between mid and max, search left
      if (target < rotated[midInd] || target > rotated[maxInd]) {
        maxInd = midInd - 1;
      } else {
        minInd = midInd + 1;
      }
    } else if (rotated[midInd] >= rotated[minInd]) {
      // the left half is sorted: if the target does not fall
      // between min and mid, search right
      if (target > rotated[midInd] || target < rotated[minInd]) {
        minInd = midInd + 1;
      } else {
        maxInd = midInd - 1;
      }
    }
  }
  return -1;
}

First, we check the middle element. If our target element is not in the middle, we have some choices to make. Choosing the middle has the advantage that it partitions the array nicely in two, so checking one side means we don't have to check the other. And we know that there is exactly one point in the whole array where the ordering does not hold. Let us call that point, our pivot, P. Now the index of P must be either less than or greater than our middle index, but it cannot be both. (Let us take a moment to thank Dirichlet and the wonderful pigeonhole principle.) Now, we simply alter our search depending on where the index of P lies.

So we check that condition first by comparing the elements at the middle and last indices (or, equivalently, the elements at the middle and first indices). Then we look at the range of indices for which the ordering holds. The next branch on the decision tree, finally, regards our target value. We want to search such that our target element is always within the range of indices we are looking at.
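That first comparison can be isolated as a tiny predicate; `rightHalfSorted` is a hypothetical helper of my own, mirroring the first branch in the implementation above:

```javascript
// Given the mid and max indices of the current search range, decide
// whether the right half is the sorted one. rightHalfSorted is a
// hypothetical helper for illustration.
function rightHalfSorted(rotated, midInd, maxInd) {
  return rotated[midInd] <= rotated[maxInd];
}

rightHalfSorted([4, 5, 1, 2, 3], 2, 4); // → true  (1 <= 3)
rightHalfSorted([3, 4, 5, 1, 2], 2, 4); // → false (5 >  2)
```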

How can we ensure this? Well, we know that the ordering holds in at least one half of the elements, so we check there. If the target is not in that half, then it must be in the other half (again with the pigeonhole principle!). So, say the sorted portion of our array lies in the indices above our middle point; then we ask whether the target value falls outside that range, that is, whether it is smaller than our middle element or larger than the last element. If so, we search the lower indices. The case where the lower half is sorted works symmetrically.

Eventually, we either run out of elements or find our target, and return i* or -1 accordingly. This is our base step. Whether you do this recursively or iteratively is up to you; the order of complexity stays the same. Because we halve the range of elements each time (taking the floor of that number), and our base case is running out of elements, the algorithm is guaranteed to halt. It is also guaranteed to find the target, if it exists, since we showed above that the target (again, if it exists) always lies within the range of indices we are checking.
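To illustrate the recursive option, here is a self-contained sketch of the same divide-and-conquer logic; the name `rotatedSearchRec` and its structure are my own, not from the original implementation:

```javascript
// Recursive version of the rotated-array binary search.
// rotatedSearchRec is a hypothetical reworking for illustration.
function rotatedSearchRec(rotated, target, minInd, maxInd) {
  if (minInd === undefined) { minInd = 0; maxInd = rotated.length - 1; }
  if (minInd > maxInd) return -1; // base case: ran out of elements

  var midInd = Math.floor((minInd + maxInd) / 2);
  if (target === rotated[midInd]) return midInd;

  if (rotated[midInd] <= rotated[maxInd]) {
    // right half sorted: recurse left if the target falls outside it
    if (target < rotated[midInd] || target > rotated[maxInd]) {
      return rotatedSearchRec(rotated, target, minInd, midInd - 1);
    }
    return rotatedSearchRec(rotated, target, midInd + 1, maxInd);
  }
  // left half sorted: recurse right if the target falls outside it
  if (target > rotated[midInd] || target < rotated[minInd]) {
    return rotatedSearchRec(rotated, target, midInd + 1, maxInd);
  }
  return rotatedSearchRec(rotated, target, minInd, midInd - 1);
}

rotatedSearchRec([4, 5, 1, 2, 3], 1); // → 2
rotatedSearchRec([4, 5, 1, 2, 3], 9); // → -1
```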

QED and Amen. Praise be to the pigeonhole principle!