Divide-and-Conquer

Garrett Halstein
Nov 3 · 3 min read

Divide-and-conquer is a style of solving problems that is not at all limited to computer science. It's the idea of solving a problem recursively: breaking a bigger problem down into smaller subproblems that can be solved more easily. It has three parts: dividing the problem, conquering the subproblems, and combining the sub-solutions together.


Divide

The divide part focuses on splitting the big problem up into smaller, more manageable subproblems. This process keeps breaking the problem down into smaller problems (the recursive case) until the subproblems are small enough. The base case is reached when no more recursion is necessary for a subproblem. As an example, let's look at some pseudocode for quick sort.

// pseudocode for quick sort
quicksort(int[] arr, int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);

        quicksort(arr, low, p - 1);  // calling the lower half
        quicksort(arr, p + 1, high); // calling the upper half
    }
}

// pseudocode for partition that moves all smaller numbers in front
// of the pivot and all bigger numbers behind the pivot
partition(int[] arr, int low, int high) {
    // pivot (element to be placed at its right position)
    pivot = arr[high];

    int i = low - 1; // index of smaller element

    for (j = low; j <= high - 1; j++) {
        // if the current element is smaller than the pivot
        if (arr[j] < pivot) {
            ++i;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);
    return i + 1;
}

As you can see, quicksort calls itself within its own body. This is the recursive case. The base case is implicit in the conditional of quicksort: when low < high no longer holds, there is nothing left to do.
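The pseudocode above can be turned into a runnable sketch, here in Python. It follows the same in-place, last-element-as-pivot scheme, and the function and variable names mirror the pseudocode:

```python
def partition(arr, low, high):
    """Place the pivot (arr[high]) at its correct sorted position;
    smaller elements end up before it, larger ones after it."""
    pivot = arr[high]
    i = low - 1  # index of the last element known to be smaller than the pivot
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quicksort(arr, low, high):
    # base case: ranges of 0 or 1 elements are already sorted
    if low < high:
        p = partition(arr, low, high)
        quicksort(arr, low, p - 1)   # recurse on the lower half
        quicksort(arr, p + 1, high)  # recurse on the upper half

nums = [5, 2, 9, 1, 5, 6]
quicksort(nums, 0, len(nums) - 1)
print(nums)  # [1, 2, 5, 5, 6, 9]
```

Because the list is mutated in place, the caller simply passes the full index range and the array ends up sorted.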

Conquer

The base case is the goal. As a problem is broken down into smaller problems, a subproblem becomes solvable once it is small enough that no more recursion is necessary. This is the conquer part. In the example, the base case is the conditional in quicksort: if that condition is not true, no more recursion happens in that branch of calls, and it is time to combine this branch's solution with the other branch's.

In the quicksort example, you can see that every call to quicksort invokes the partition function. This is where the subproblem gets solved, or conquered, before being combined with the other sub-solutions.

Combining

The combining part is important because, even though the smaller problems are solved, their solutions still have to be combined to produce the solution to the big problem. The combining portion is a little less noticeable in quicksort because we are working with an array passed by reference: the sub-solutions make their changes directly to the array, which ultimately sorts the entire thing in place. Combining is more apparent in something like computing the nth term of the Fibonacci sequence, whose recursive case returns fib(n - 1) + fib(n - 2). Here, every pair of sub-solutions is visibly combined within the function.
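As a minimal illustration of explicit combining, here is the naive recursive Fibonacci in Python. Each return value merges the two sub-solutions with an addition (this is a sketch for clarity, not efficiency; without memoization it runs in exponential time):

```python
def fib(n):
    # base cases: subproblems small enough that no recursion is needed
    if n < 2:
        return n
    # combine step: the two sub-solutions are added together
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```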

Closing Remarks

This has been a brief introduction to divide-and-conquer algorithms. They can be very helpful in optimizing solutions to problems, and are often the most natural solutions to implement. It's great practice to implement the basic algorithms like quick sort on your own to familiarize yourself with the technique.
