The art of dynamic programming (pt. 2): classic algorithms

Three classic dynamic programming algorithms you need to know and understand

Zakarie A
Geek Culture
9 min read · Jul 20, 2021


This is the second part of an article on fundamentals of dynamic programming. In the first part, we defined dynamic programming and studied three simple examples. This post focuses on three more advanced problems: the rod-cutting problem, calculating the longest increasing subsequence and the change-making problem.

Part 1 can be found here:

The rod-cutting problem

Problem statement

Suppose we have a rod of length ℓ (where ℓ is a positive integer) which is represented by a finite set of cardinality ℓ. We can cut it as many times as we want into sub-rods (represented by subsets) with a positive integral length, each cut incurring no cost. We will then sell every sub-rod at a price P(m) depending on its length m. All we know is that prices are non-negative numbers: it is not assumed that the price map is monotonic or homogeneous. The goal is to compute the maximum price at which we can sell a partition of a rod.

Suppose, for example, that P is defined for all integers between 1 and 4 by P(1) = 4, P(2) = 6, P(3) = 2 and P(4) = 1. A rod of length 4 can be partitioned as follows:

Price of every possible way of cutting a rod of length 4.
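In full, the five possible partitions and their prices are:

{1, 1, 1, 1}: 4 + 4 + 4 + 4 = 16
{2, 1, 1}: 6 + 4 + 4 = 14
{2, 2}: 6 + 6 = 12
{3, 1}: 2 + 4 = 6
{4}: 1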

The best price is 16.

Why should we approach this problem using dynamic programming?

Let’s think of a recursive way of tackling the problem.

If the rod has length 1 then we have nothing to do: the only possible cut is {1}, with price P(1). Therefore, we set Opt(1) = P(1): it is the optimal (i.e. the highest) price for a rod of size 1.

If the rod has length ℓ ≥ 2 then there are two possibilities: keeping the rod as is (with cut {ℓ} of price P(ℓ)) or splitting it into two sub-rods. Suppose we split it into two sub-rods, one of size k and one of size ℓ - k for some integer k such that 1 ≤ k ≤ ℓ - 1. We’ll keep one as is (the one of size k) and split the other one into as many sub-sub-rods as needed so that we get an optimal partition. In other words, we calculate Opt(ℓ) = max({P(k) + Opt(ℓ - k) | 1 ≤ k ≤ ℓ - 1} ∪ {P(ℓ)}).

Writing a solution

The following dynamic programming solution maintains a one-dimensional array containing the optimal price for a rod of length k, for every integer k such that 1 ≤ k ≤ length.

Dynamic programming algorithm to calculate the optimal price at which we can sell a rod of a given length.
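In Python, such a solution might look like the following sketch (optimal_price and its signature are illustrative; prices is assumed to be an array indexed by length, with prices[0] unused):

```python
def optimal_price(prices, length):
    # opt[k] will hold Opt(k), the best price for a rod of length k.
    opt = [0] * (length + 1)
    for rod_len in range(1, length + 1):
        # Either sell the rod of length rod_len as is...
        best = prices[rod_len]
        # ...or keep one sub-rod of length k and cut the rest optimally.
        for k in range(1, rod_len):
            best = max(best, prices[k] + opt[rod_len - k])
        opt[rod_len] = best
    return opt[length]
```

With the example above, optimal_price([0, 4, 6, 2, 1], 4) returns 16.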

The algorithm runs in Θ(ℓ²) time: Opt(k) is computed for every k between 1 and ℓ, and computing Opt(k) takes Θ(k) time, so the total running time is Θ(1 + 2 + … + ℓ) = Θ(ℓ²).

Longest increasing subsequence

Problem statement

We consider a finite sequence S (which can be represented by an array or a list) of length N ≠ 0. We define the indexing of S to be the set of indices of all elements in S; we’ll assume in this section that it is the set of all integers between 0 and N - 1. A sub-sequence of S is a subset of the indexing of S. A sub-sequence S’ of S is said to be increasing if, for all pairs (i, j) of elements of S’, i ≤ j implies S[i] ≤ S[j]. In the rest of this section, LIS(k) denotes the length of the longest increasing subsequence of S which ends at index k.

Our goal is to compute the longest increasing subsequence of a sequence S, i.e. the one with the greatest cardinality.

Why should we approach this problem using dynamic programming?

We first need to show that the problem can be solved recursively. If N = 1 then there is a single index, and two possible subsequences: the empty set and the indexing itself. Both are increasing (for the empty set, the left-hand side of the implication in the definition is always false, therefore the implication is always true; for a singleton, any pair i, j of its elements satisfies i = j, therefore S[i] = S[j], therefore S[i] ≤ S[j]). Therefore, the longest increasing subsequence of a sequence of length 1 has length 1: we write LIS(0) = 1.

The induction formula is trickier. Let’s consider a sequence S of length N ≥ 2. The longest increasing subsequence of S which ends at index N - 1 is either {N - 1} alone (when no earlier element is less than or equal to S[N - 1]), or a longest increasing subsequence ending at some index k ≤ N - 2 with S[k] ≤ S[N - 1], to which we append N - 1. Therefore, LIS(N - 1) = max({LIS(k) + 1 | k ∈ [0 .. N - 2], S[k] ≤ S[N - 1]} ∪ {1}).

Once we know the length of the longest increasing subsequence of S ending at index k for every index k, we just need to find the largest value, which corresponds to the longest increasing subsequence of S. Say S = (1, 4, 2, 3, 5). To compute LIS(4), we would need to calculate LIS(0), LIS(1), LIS(2) and LIS(3), since the values at indices 0, 1, 2 and 3 are all less than S[4]. Computing LIS(0) has no cost, since it is a base case. Computing LIS(1) requires computing LIS(0), since S[0] ≤ S[1]. Computing LIS(2) requires computing LIS(0). Computing LIS(3) requires computing LIS(0) and LIS(2) again, which in turn requires computing LIS(0) one more time: subproblems are overlapping.

Using the recurrence relation we derived, we can come up with the following bottom-up implementation.

Dynamic programming algorithm to calculate the length of the longest increasing subsequence of a given sequence.
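In Python, this might look like the following sketch (lis_length is illustrative; the identifiers lisLength and index match the description below):

```python
def lis_length(sequence):
    n = len(sequence)  # assumed non-zero, as in the problem statement
    # lis[k] will hold LIS(k), the length of the longest increasing
    # subsequence ending at index k.
    lis = [1] * n
    lisLength = 1  # length of the longest increasing subsequence so far
    for index in range(1, n):
        for k in range(index):
            if sequence[k] <= sequence[index]:
                lis[index] = max(lis[index], lis[k] + 1)
        if lis[index] > lisLength:
            lisLength = lis[index]
    return lisLength
```

For example, lis_length([1, 4, 2, 3, 5]) returns 4, corresponding to the subsequence (1, 2, 3, 5).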

In addition to maintaining the cache, we use an accumulator variable lisLength which contains the length of the longest increasing subsequence in the entire sequence. When we have found the length of the longest increasing subsequence ending at index index, we check if it is longer than lisLength and update it if needed.

The structure of this function is similar to that of the solution to the rod-cutting problem, and it runs in quadratic time as well. The space complexity is linear, since we maintain a one-dimensional array of N elements.

If we wanted to get an explicit longest increasing subsequence, we could maintain a variable last, which indicates the index of the last element of a longest increasing subsequence, as well as a second array, say previous, which contains -1 at index i if the longest increasing subsequence ending at index i has length 1, and otherwise the index of the element that precedes i in a longest increasing subsequence ending at index i.

For example, if we consider the sequence (1; 3; 2; 6; 5; 4; 9) then previous would be (-1; 0; 0; 1; 1; 1; 3) and last would be 6. Therefore, the longest increasing subsequence contains index 6. previous[6] = 3, so it also contains 3. previous[3] = 1, so it also contains 1. previous[1] = 0, so it also contains 0, which is the first element since previous[0] = -1. Reading the values off the sequence gives (1, 3, 6, 9).
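Assuming previous and last have been filled in during the main loop (doing so is part of the exercise below), the walk-back can be written as a short loop; a sketch:

```python
def reconstruct(sequence, previous, last):
    # Follow the previous chain backwards from last...
    indices = []
    i = last
    while i != -1:
        indices.append(i)
        i = previous[i]
    # ...then reverse it to list the subsequence in order.
    return [sequence[i] for i in reversed(indices)]
```

With the example above, reconstruct([1, 3, 2, 6, 5, 4, 9], [-1, 0, 0, 1, 1, 1, 3], 6) returns [1, 3, 6, 9].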

The full implementation is left as an exercise. The runtime and space complexity should not be worse than those of the version we gave in this section.

Calculating change

Problem statement

This is the last algorithm we cover in part 2. The goal of the change-making problem is to compute the least number of coins, taken from a set C consisting of n (distinct) coin values, which sum up to a certain amount x. We can choose the same coin as many times as we want. For example, if C = {1, 5, 10, 8, 2} and x = 10, possible selections include (1, 1, 1, 1, 1, 1, 1, 1, 1, 1), (8, 2), (8, 1, 1) and (10). The solution to the problem is 1, which is the cardinality of {10}.

Why should we approach this problem using dynamic programming?

As usual, we start by finding a recurrence relation which gives us the solution to the problem as a function of solutions to smaller subproblems. The base case is quite clear: if x = 0 then we don’t need any coins (this is known as the empty sum convention: the sum over the empty set is 0). Therefore, the optimal solution is Opt(0) = 0.

If x > 0 then we construct a solution by selecting an optimal solution to a smaller problem and adding new coins to make the sum equal to x.

It turns out that, in the optimal solution, we only need to add one coin. This means that Opt(x) = min {Opt(x - k) + 1 | k ∈ C, k ≤ x}.

This is not obvious: why could there not be a better solution where we add two coins, or more generally a coins for some a ≥ 2?

I will show you how you can prove this statement for the special case when a = 2:

Proof outline:

Suppose that the optimal solution is actually Opt(x - i - j) + 2, for some coins i and j. This implies that Opt(x - i - j) + 2 < min {Opt(x - i) + 1, Opt(x - j) + 1} (notice the strict inequality: we wouldn’t mind if both sides were equal).

Let u := Opt(x - i - j): this is the optimal number of coins for the instance of the problem where coins must add up to x - i - j. In particular, there exists a collection of u coins which adds up to x - i - j. We can add coin j to this collection, which gives a collection of u + 1 coins adding up to x - i. It follows that u + 1 ≥ Opt(x - i), as a feasible solution cannot use fewer coins than the optimal one. Substituting u, we get Opt(x - i - j) + 1 ≥ Opt(x - i). Therefore, Opt(x - i - j) + 2 ≥ Opt(x - i) + 1. The same argument, where we add coin i instead of coin j, shows that Opt(x - i - j) + 2 ≥ Opt(x - j) + 1. This gives a contradiction.
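In symbols, the chain of inequalities is:

Opt(x - i - j) + 1 ≥ Opt(x - i), hence Opt(x - i - j) + 2 ≥ Opt(x - i) + 1 ≥ min {Opt(x - i) + 1, Opt(x - j) + 1},

which contradicts the strict inequality we assumed.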

The generalisation is left as an exercise.

We now have a recurrence relation, which shows that the problem has optimal substructure. It is not difficult to find an input for which a naïve recursion would solve the same subproblems over and over. For example, with coins {1, 5, 8, 11} and x = 22, the top of the recurrence tree would look like this:

First three levels of the recurrence tree of a naïve recursive approach. Circles have the same colour when they correspond to identical sub-problems.

Minor optimisations would allow us to remove some duplicate computations, but a non-dynamic-programming approach which uses the same recurrence relation would still end up being highly inefficient due to overlapping subproblems when the input gets very large.

Using dynamic programming with a bottom-up approach, we can come up with the following implementation:

Dynamic programming algorithm to calculate change.
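In Python, a bottom-up version might look like the following sketch (min_coins is illustrative; unreachable amounts are reported as infinity):

```python
import math

def min_coins(coins, amount):
    # opt[x] will hold Opt(x), the least number of coins summing to x;
    # math.inf marks amounts that cannot be formed at all.
    opt = [0] + [math.inf] * amount
    for x in range(1, amount + 1):
        for k in coins:
            if k <= x:
                opt[x] = min(opt[x], opt[x - k] + 1)
    return opt[amount]
```

With the example above, min_coins({1, 5, 10, 8, 2}, 10) returns 1. The algorithm runs in O(nx) time and O(x) space, where n is the number of coins in C.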

Exercises

  1. (Rods) [From CLRS] What if cutting a rod incurs a constant cost κ, i.e. selling partition A gives you the sum of the prices of its elements, minus κ Card(A)? Write a dynamic programming algorithm which finds the optimal amount of money you can make by selling a partition of a rod of length ℓ. It must have O(ℓ²) time complexity and O(ℓ) space complexity.
  2. (Rods) Modify the bottom-up solution to the rod-cutting problem to not only calculate the optimal price, but also print one partition whose price is optimal. Constraint: your solution must run in O(ℓ²) time and have O(ℓ) space complexity.
    Solution outline: given a rod of length ℓ, our algorithm finds the optimal way of cutting the rod into two parts. One of these two parts can be split further, but the other is sure to be in the optimal partition of the rod. Therefore, you can use an array which maps every length ℓ to the length of a rod in the optimal partition. Find a way to process this array to obtain the final solution.
  3. (Longest increasing subsequence) A similar problem is the maximum subarray problem. Consider an array A of N numbers. A sub-array is a contiguous section of the original array. A maximum sub-array of A is a sub-array B such that the sum over all elements of B is maximised. Give an algorithm which finds a maximum sub-array of A in linear time with respect to the length of the array.
