DIVIDE AND CONQUER ALGORITHM

Yesh Pareek
4 min read · Apr 12, 2020


Designing an algorithm always comes with trade-offs. No single technique can solve every problem and still give an optimal solution, so whenever we find a problem that the current approach cannot handle, we design a new algorithm that solves not just that problem but its whole family. Even in this age of rapid technological development, we have no algorithm generic enough to solve everything. The subject of this post is the divide and conquer strategy. Although it works wonders in terms of producing optimal solutions, reducing the asymptotic cost of a problem, and even improving accuracy in some cases, it is not perfect: there are a few issues a programmer faces while implementing it.

Because divide and conquer works by breaking a problem into smaller sub-problems, its natural implementation is recursion. A recursive function is a function that calls itself in its own definition; the partial sub-problems leading up to the full solution are stored automatically on the procedure call stack. Although recursion reduces the amount of code, it is not easy to master, and many beginners, and even experienced programmers, avoid it.
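As a minimal sketch of the recursive pattern described above (this example is mine, not from the original post), here is merge sort in Python: each call splits the array, recurses on both halves, and lets the call stack hold the pending sub-problems.

```python
def merge_sort(arr):
    """Sort a list by divide and conquer: split, recurse, merge."""
    if len(arr) <= 1:                 # base case: nothing left to divide
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # recursive call on each half; the
    right = merge_sort(arr[mid:])     # partial results live on the call stack
    # conquer step: merge the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```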

The divide and conquer strategy can also be implemented without recursion. The sub-problems are kept in an explicit data structure such as a stack, a queue, or a priority queue. Unlike recursion, this approach gives you the freedom to choose which sub-problem to solve next, and it is the standard solution in programming languages that do not allow recursion. Managing the data structure yourself is more tedious, however, which is why most people stick to recursion when applying divide and conquer.
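To illustrate the explicit-data-structure approach (again a sketch of mine, not the post's code), here is a non-recursive quicksort that keeps pending `(lo, hi)` sub-problems on a plain Python list used as a stack; swapping the list for a queue or priority queue would change the order in which sub-problems are solved.

```python
def quicksort_iterative(arr):
    """In-place quicksort driven by an explicit stack of (lo, hi) ranges."""
    stack = [(0, len(arr) - 1)]       # any explicit structure works here
    while stack:
        lo, hi = stack.pop()          # choose which sub-problem to solve next
        if lo >= hi:
            continue                  # trivial sub-problem, nothing to do
        pivot = arr[hi]               # Lomuto partition, last element as pivot
        i = lo
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]
        stack.append((lo, i - 1))     # push both halves instead of recursing
        stack.append((i + 1, hi))
    return arr
```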

While implementing the divide and conquer strategy using recursion, one must make sure there is sufficient memory allocated for the recursion stack; insufficient memory leads to failure of the algorithm through a stack overflow. Time-efficient divide and conquer algorithms tend to have a small recursion depth. For example, quicksort can be implemented so that sorting an array of n elements requires no more than log2 n nested recursive calls, by recursing into the smaller partition and iterating over the larger one. Many compilers assume the recursion stack is a contiguous memory area and allocate a fixed amount of space for it, which makes stack overflow harder to avoid. Compilers also save more per call than is strictly necessary, such as local variables, unchanging parameters, and the return address, so reducing the number of parameters and local variables helps keep each stack frame small.
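The log2 n depth bound mentioned above depends on always recursing into the smaller partition. A sketch of that trick (my illustration, under the same Lomuto partition as before):

```python
def quicksort_bounded(arr, lo=0, hi=None):
    """Quicksort whose recursion depth stays O(log n): recurse only into
    the smaller partition, then loop on the larger one."""
    if hi is None:
        hi = len(arr) - 1
    while lo < hi:
        pivot = arr[hi]               # Lomuto partition
        i = lo
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]
        # Each recursive call handles at most half the range, so the
        # nesting depth cannot exceed log2(n); the larger side is
        # handled by continuing the while loop instead of recursing.
        if i - lo < hi - i:
            quicksort_bounded(arr, lo, i - 1)
            lo = i + 1
        else:
            quicksort_bounded(arr, i + 1, hi)
            hi = i - 1
    return arr
```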

In recursive algorithms, we need to choose base cases: the small sub-problems that terminate the recursion. It is convenient and elegant to pick the smallest possible base case, since that usually leads to simpler programs with only a few cases to solve directly. But efficiency often improves when the recursion stops at relatively large base cases, because this avoids the overhead of recursive calls that do little or no work.

One way to do this is to short-circuit the recursion: check whether the next call would hit a base case before making it, rather than after. For example, when traversing a binary tree, instead of visiting a child node and returning if it is NULL, check whether the child is NULL and skip the visit entirely. In some binary-tree algorithms this saves almost half the calls.

Because divide and conquer splits a problem into a large number of small sub-problems, those small cases dominate the overall number of calls, yet solving any one of them by brute force does negligible harm to the overall complexity. For example, many library implementations of quicksort switch to insertion sort once the sub-array is small enough. Instead of stopping at an empty list, one can stop at a list of size two or more, avoiding most of the do-nothing calls. Alternatively, one can fix a size below which the sub-problem is solved directly with a loop or conditional statements rather than with further recursion.
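The large-base-case idea can be sketched as a hybrid quicksort (my example; the cutoff value of 16 is illustrative, since the best threshold is machine-dependent): once a sub-array shrinks below the cutoff, recursion stops and insertion sort finishes the job directly.

```python
CUTOFF = 16  # illustrative threshold; tune per machine and data

def insertion_sort(arr, lo, hi):
    """Sort arr[lo..hi] in place: cheap on small, nearly-sorted ranges."""
    for i in range(lo + 1, hi + 1):
        key = arr[i]
        j = i - 1
        while j >= lo and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

def hybrid_quicksort(arr, lo=0, hi=None):
    """Stop dividing once the sub-problem is small; solve it directly."""
    if hi is None:
        hi = len(arr) - 1
    if hi - lo + 1 <= CUTOFF:         # large base case, no further recursion
        insertion_sort(arr, lo, hi)
        return arr
    pivot = arr[hi]                   # Lomuto partition
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    hybrid_quicksort(arr, lo, i - 1)
    hybrid_quicksort(arr, i + 1, hi)
    return arr
```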

For some problems, the branched recursion ends up solving the same sub-problems many times. In such cases it is worth identifying these repeated sub-problems, evaluating each one only once, and storing the result. This technique is commonly known as memoization.
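The classic illustration of memoization (my sketch, using Python's standard `functools.lru_cache` to do the result-storing) is the Fibonacci recursion, whose branches re-solve the same sub-problems exponentially many times unless results are cached:

```python
from functools import lru_cache

def fib_naive(n):
    """Branched recursion that re-solves the same sub-problems repeatedly."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same recursion, but each sub-problem's result is stored and reused,
    so each value of n is computed only once."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

With memoization, `fib_memo(500)` returns instantly, while the naive version would take astronomically long.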

If one takes care of all the issues mentioned above, the divide and conquer strategy becomes much easier to apply to a problem, and in some cases much more efficient.
