Dynamic Programming: Saving Time on Recursion

Hannah Reitzel Rivera
Published in CodeX · 6 min read · Nov 8, 2021

Dynamic programming might be one of the most difficult concepts in learning data structures and algorithms, not because it is particularly difficult to understand solutions written using dynamic programming patterns, but instead because it is very difficult to recognize a problem that can be solved via dynamic programming and then execute the solution. Typically, one giveaway to the possibility of a DP solution is the ability to write a recursive solution. Not all recursive problems can be improved with dynamic programming, however, so the existence of a recursive solution is not a perfect giveaway.

There are two conditions that must be met in a recursive solution for dynamic programming to be used: optimal substructure and overlapping sub-problems. In order to explore dynamic programming, I want to first explain what these conditions mean and how to know if they are met.

Optimal Substructure

What does it mean for a problem to have optimal substructure? This means that the optimal solution to a problem can be found by combining the optimal solutions of its sub-problems. This is the piece of dynamic programming that causes recursion to be a clue. Because recursion, by its nature, requires that sub-problem solutions combine to produce the overall solution, a recursive solution is a clue that the problem has optimal substructure.

An example of a problem that fits this criterion is the Fibonacci sequence, which I have previously covered as an example of recursion. Because the nth number of the Fibonacci sequence is the sum of the previous two Fibonacci numbers, it fits the optimal substructure constraint of dynamic programming.

[Figure: A diagram of a recursive Fibonacci number generator, showing how it fits the optimal substructure criterion.]
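As a refresher, a minimal recursive Fibonacci function (assuming the usual base cases fib(0) = 0 and fib(1) = 1) might look like this; each call is built directly from the solutions to its two sub-problems:

```javascript
// Minimal recursive Fibonacci, illustrating optimal substructure:
// fib(n) combines the solutions to fib(n - 1) and fib(n - 2).
function fib(n) {
    if (n <= 1) return n; // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);
}

console.log(fib(6)); // 8
```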

Overlapping Sub-Problems

This constraint of problems that can be solved by dynamic programming is a bit harder to understand than optimal substructure. It essentially means that, given the brute force recursive solution to a problem, the same sub-problems will be solved repeatedly as the overall function generates the final solution. This can be seen in the diagram of a recursive solution for the nth number of the Fibonacci sequence, above: the generating function solves fibo(4) twice, fibo(3) three times, fibo(2) five times, and so on (the number of solves is, itself, a function of the Fibonacci sequence).

[Figure: The same diagram of the recursive Fibonacci number generator, with symbols representing each overlapping sub-problem to show how many times it is solved.]

For a simpler problem like the sixth number of the Fibonacci sequence, this recursion does not cost too much in terms of computation time. However, as we ask for larger and larger Fibonacci numbers, the recursive solution really begins to add computational complexity. This problem has a time complexity — which I will cover in a future installment of this series but which can be read about here and here — of O(2^n), which is considered extremely poor performance. Dynamic programming seeks to eliminate the repetition of overlapping sub-problems by solving each only once, thus eliminating the high time complexity of many recursive solutions.
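To make the idea concrete, here is a sketch of a memoized Fibonacci function. The `memo` cache object is my own addition for illustration; by storing each sub-problem's answer the first time it is solved, the time complexity drops from O(2^n) to O(n):

```javascript
// Sketch: cache each sub-problem's answer so it is solved only once.
// The `memo` object is an assumption of this sketch, not part of the
// naive recursive version.
function fibMemo(n, memo = {}) {
    if (n <= 1) return n;                      // base cases
    if (memo[n] !== undefined) return memo[n]; // sub-problem already solved
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}
```

A call like `fibMemo(50)` returns almost instantly, whereas the pure recursive version would make on the order of 2^50 calls.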

A Sample Problem

Throughout the rest of the article, I will demonstrate dynamic programming using the following problem (which can be found on Leetcode):

Given a triangle array, return the minimum path sum from top to bottom. For each step, you may move to an adjacent number of the row below. More formally, if you are on index i on the current row, you may move to either index i or index i + 1 on the next row.

Example 1:
Input: triangle = [[2],[3,4],[6,5,7],[4,1,8,3]]
Output: 11
Explanation: The triangle looks like:
   2
  3 4
 6 5 7
4 1 8 3
The minimum path sum from top to bottom is 2 + 3 + 5 + 1 = 11.

Example 2:
Input: triangle = [[-10]]
Output: -10

This can clearly be solved recursively. Here is a brute force, or naive, recursive solution. If this is submitted on Leetcode, it will not pass, because it exceeds the time limit given the number of test cases. However, it can be hand-tested with a variety of inputs in the sandbox, and it does produce correct answers.

var minimumTotal = function(triangle, idx = 0, height = 0) {
    // Base case: we have reached the bottom row, so the path ends here.
    if (height === triangle.length - 1) {
        return triangle[height][idx];
    }

    // Recurse into the two adjacent numbers in the row below.
    const path1 = minimumTotal(triangle, idx, height + 1);
    const path2 = minimumTotal(triangle, idx + 1, height + 1);

    // The minimum path through this number is the number itself
    // plus the smaller of the two paths below it.
    return triangle[height][idx] + Math.min(path1, path2);
};

Does this problem meet the qualifications for dynamic programming? First, does it have optimal substructure? Because it asks for a path, we can say yes: the path must connect (each step moves only to index i or i + 1 of the row below) and must span the complete triangle (include one number from every row). The minimum path sum from any position is therefore that number plus the smaller of the minimum path sums from the two positions below it, so the solutions to sub-problems combine to produce the solution to the overall problem. It has optimal substructure.

Are there overlapping sub-problems in the naive recursive solution? Here the answer is an easy ‘yes.’ If we examine the code in the snippet above, we can see that every call recalculates path1 and path2 from scratch, and that path1 for one number is the same sub-problem as path2 for its left-hand neighbor in the same row. There is quite a bit of overlap.

Since the problem meets the conditions for reworking via dynamic programming, we can now cover how to alter the recursive solution so that sub-problems are solved only once, saving a great deal of time to solution.

Dynamic Programming via Tabulation

There are two basic methods that can be used to reduce the complexity of recursive solutions: memoization and tabulation. Memoization can be thought of as the ‘top-down’ approach to solving a recursive problem. It preserves the recursion of the original solution, but stores each sub-problem's result the first time it is computed — typically in a hash or array — so that repeated calls can look the answer up instead of recomputing it. While this saves a great deal of time over pure recursion, many engineers consider it distinct from actual dynamic programming. A common technique of dynamic programming, called tabulation, works in almost the opposite way: rather than working top-down, tabulation builds the solution from the bottom up. In the case of the triangular array in this problem, the tabulated solution quite literally works from the bottom of the triangle toward the top.
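For comparison, here is a sketch of what a memoized (top-down) version of the naive recursive solution could look like. The cache keyed by `"height,idx"` strings is my own assumption for illustration, not part of the original solution:

```javascript
// Memoized ('top-down') sketch of the triangle minimum path sum.
// Each (height, idx) sub-problem is solved at most once; the `memo`
// cache and its string keys are assumptions of this sketch.
var minimumTotalMemo = function(triangle, idx = 0, height = 0, memo = {}) {
    // Base case: bottom row, the path ends here.
    if (height === triangle.length - 1) {
        return triangle[height][idx];
    }

    const key = height + ',' + idx;
    if (memo[key] !== undefined) return memo[key]; // already solved

    const path1 = minimumTotalMemo(triangle, idx, height + 1, memo);
    const path2 = minimumTotalMemo(triangle, idx + 1, height + 1, memo);

    memo[key] = triangle[height][idx] + Math.min(path1, path2);
    return memo[key];
};

console.log(minimumTotalMemo([[2],[3,4],[6,5,7],[4,1,8,3]])); // 11
```

Because there are only O(n²) distinct (height, idx) pairs in a triangle with n rows, this already brings the time complexity down from O(2^n) to O(n²).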

Here is an example of a tabulated solution to the triangle array minimum sum problem above:

var minimumTotal = function(triangle) {
    // Work upward from the second-to-last row, folding each row's
    // minimum path sums into the row above it.
    for (let row = triangle.length - 2; row >= 0; row--) {
        for (let col = 0; col < triangle[row].length; col++) {
            triangle[row][col] += Math.min(triangle[row + 1][col], triangle[row + 1][col + 1]);
        }
    }
    // The apex now holds the minimum path sum for the whole triangle.
    return triangle[0][0];
};

This rather clever solution (for which I do not take credit — that goes to ChippyChoppy, aka my friend Becca) uses the triangle array itself to hold the many path-based solutions until the top of the triangle is reached. Starting at the row just above the ‘bottom’ (the second-to-last sub-array of the triangular array), the function adds the smaller of the two numbers ‘below’ each position to the current number in the array. It then moves up to the next row, repeating the same action until it returns the top number of the triangle, which is now the minimum path sum. The following pseudocode shows a little more clearly what is happening:

// Test array: [[2],[3,4],[6,5,7],[4,1,8,3]]
// Starting on the [6,5,7] row, we add the smaller of the two
// numbers below each to that number
// [6,5,7] becomes [6+1,5+1,7+3] or [7,6,10]
// Moving up to the [3,4] row we do the same
// [3,4] becomes [3+6,4+6] or [9,10]
// And now we add to the 'top' or zeroth row
// [2] becomes [2+9] or 11
// return the top row's only number, which is now the minimum path sum, 11

As opposed to the O(2^n) recursive solution, this dynamic programming solution via tabulation has a time complexity of just O(n²), where n is the number of rows: the nested loops visit each number in the triangle exactly once, and a triangle with n rows contains n(n + 1)/2 numbers.
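As a quick sanity check, the tabulated solution can be run against both Leetcode examples (the function is repeated here so the snippet runs on its own):

```javascript
// Tabulated solution, repeated from above so this snippet is self-contained.
var minimumTotal = function(triangle) {
    for (let row = triangle.length - 2; row >= 0; row--) {
        for (let col = 0; col < triangle[row].length; col++) {
            triangle[row][col] += Math.min(triangle[row + 1][col], triangle[row + 1][col + 1]);
        }
    }
    return triangle[0][0];
};

console.log(minimumTotal([[2],[3,4],[6,5,7],[4,1,8,3]])); // 11
console.log(minimumTotal([[-10]])); // -10
```

Note that this version mutates its input array in place; if the original triangle needs to be preserved, a copy should be made first.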

While this is certainly not a complete overview of all problems that can be solved via dynamic programming, nor every technique involved, I certainly hope it is a worthwhile introduction to the topic. Dynamic programming is an excellent way to reduce the time complexity of problems that can be solved via brute force using recursion.

Former archaeologist and high school teacher turned software engineer. Just trying to learn and solve puzzles!