# Introduction to Dynamic Programming

## Optimisation for the Nation!

# Audience

This article is aimed at engineers with a rudimentary experience of algorithms, looking to take their first steps into dynamic programming. A solid knowledge of recursion is required, as well as some awareness of bit operators.

We will be implementing the techniques in Java, so a base level of intimacy with the language is helpful, though not essential.

# Argument

Dynamic programming can be thought of as an optimisation of recursion. If a recursive problem requires us to solve the same subproblem more than once, it is cheaper to store and reuse the result than to recompute it.

Formally it is expressed using the below:

- **Optimal substructure:** A problem can be broken down into subproblems, and the optimal solutions to the subproblems can be used to find the optimal solution to the main problem.
- **Overlapping subproblems:** Some subproblems’ solutions will be used multiple times, so we can cache them to improve performance.

Although this sounds slightly abstract, let’s make it concrete with an example. The Fibonacci sequence is calculated using:

`f(n) = f(n - 1) + f(n - 2)`

where we have the starting conditions `f(1) = 1` and `f(0) = 0`. Let’s walk through this.

```
f(0) = 0
f(1) = 1
f(2) = f(1) + f(0) = 1
f(3) = f(2) + f(1) = 1 + 1 = 2
f(4) = f(3) + f(2) = 2 + 1 = 3
...
```

We can quickly see a pattern emerging. So how would we code that up? One solution would be to use:

```java
public int fibonacci(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}
```

However, if we think about how this would execute for `f(4)`, it would calculate `f(3) + f(2)`, where `f(3)` would need to calculate `f(2)` as well (due to `f(3) = f(2) + f(1)`). Therefore this calculation happens twice, wasting resources!

A better way of doing things might be to use an array structure to store the previous two fibonacci numbers. Once calculated this would allow us to reuse them!

```java
private static int fibonacci(int n) {
    if (n <= 1) {
        return n;  // guard: avoids writing past the end of the array when n is 0
    }
    int[] fibonacciNumbers = new int[n + 1];
    fibonacciNumbers[0] = 0;
    fibonacciNumbers[1] = 1;
    for (int i = 2; i <= n; i++) {
        fibonacciNumbers[i] = fibonacciNumbers[i - 1] + fibonacciNumbers[i - 2];
    }
    return fibonacciNumbers[n];
}
```

An important thing to note is that **often we trade space for time. **Here we have an array of solutions, which takes up space. In some dynamic programming problems we will find ourselves using up more memory in search of a quicker solution.

## Top Down vs Bottom Up

There are two main approaches to dynamic programming: top down (memoization) and bottom up (tabulation).

Bottom up (as it sounds) is a technique where, to reach a state `n`, we start at some initial conditions and calculate our way up. Our dynamic programming answer to Fibonacci is a very good example of this. Notice how we calculate `f(0)`, then `f(1)`, then `f(2)`. We’ve started at the bottom and made our way up.

Top down is the opposite. Our non-DP Fibonacci solution is a good example of that. We start with the solution we would like to find, `f(n)`, then work our way back from the solution to do the calculations we need.

Some people find it clarifying to have these ideas expressed in the following terms:

- **Bottom Up (tabulation):** I am going to learn Java, I am going to study an online course, I am going to read all of James’ articles, I will become a master programmer.
- **Top Down (memoization):** I am going to become a master programmer, so I will read all of James’ articles, but first study an online course, but first learn Java.

We can see one starts at the beginning, working forward to our aim, whereas the other does the opposite.
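To make the contrast concrete, here is a top-down (memoized) version of Fibonacci, as a sketch; the class name and the choice of a `HashMap` cache are illustrative, not from the original article:

```java
import java.util.HashMap;
import java.util.Map;

public class MemoFibonacci {
    // Cache of already-computed results: each n is solved at most once.
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Top down: start from the n we want and recurse towards the base cases,
    // storing each result so overlapping subproblems are never recomputed.
    public static long fibonacci(int n) {
        if (n <= 1) {
            return n;
        }
        Long cached = cache.get(n);
        if (cached != null) {
            return cached;
        }
        long result = fibonacci(n - 1) + fibonacci(n - 2);
        cache.put(n, result);
        return result;
    }
}
```

Note the same space-for-time trade as the array version: the cache grows with `n`, but without it the naive recursion does exponential work.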

## Optimisation vs Combinatorial Problems

There are generally two types of problem we might apply dynamic programming to.

- **Optimisation problems:** These have been the main focus of the article so far. In these types of application we are looking to find the best solution to a given problem.
- **Combinatorial problems:** In this type of problem we are looking to find the number of possible different combinations of a factor. This might then be used to calculate the probability of something occurring.

Let’s explore these ideas using some examples from LeetCode.

Looking at optimisation, we have the below problem.

> You are a professional robber planning to rob houses along a street. Each house has a certain amount of money stashed; the only constraint stopping you from robbing each of them is that adjacent houses have security systems connected and it will automatically contact the police if two adjacent houses were broken into on the same night.
>
> Given an integer array `nums` representing the amount of money of each house, return the maximum amount of money you can rob tonight without alerting the police.

So how is this a dynamic programming problem? We want to **maximise** the amount we can steal, which is our first clue. Additionally, the maximum we can steal up to house A is either the maximum up to the house one behind (skipping A entirely), or the maximum up to the house two behind plus the current house’s worth, whichever is larger (we can’t rob both A and the house one behind without alerting the police).
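A bottom-up sketch of that recurrence, `rob(i) = max(rob(i - 1), rob(i - 2) + nums[i])`, might look like the following (the class name is illustrative):

```java
public class HouseRobber {
    // At each house, either skip it (keep the best total from one house back)
    // or rob it (best total from two houses back plus its value). Only the
    // previous two totals are ever needed, so two variables replace an array.
    public static int rob(int[] nums) {
        int twoBack = 0;  // best total up to house i - 2
        int oneBack = 0;  // best total up to house i - 1
        for (int value : nums) {
            int current = Math.max(oneBack, twoBack + value);
            twoBack = oneBack;
            oneBack = current;
        }
        return oneBack;
    }
}
```

This is also an example of shrinking the space trade-off: since only two previous answers matter, we don’t need to keep the whole table.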

We can do something similar for a combinatorial problem. Let’s take another example from LeetCode.

> You are climbing a staircase. It takes `n` steps to reach the top. Each time you can either climb `1` or `2` steps. In how many distinct ways can you climb to the top?
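This is combinatorial: we count ways rather than maximise a value. The last move onto step `n` was either a 1-step from `n - 1` or a 2-step from `n - 2`, giving `ways(n) = ways(n - 1) + ways(n - 2)`, the Fibonacci recurrence again. A bottom-up sketch (class name illustrative):

```java
public class ClimbingStairs {
    // ways(n) = ways(n - 1) + ways(n - 2): every route to step n arrives
    // either from step n - 1 (a 1-step) or from step n - 2 (a 2-step).
    public static int climbStairs(int n) {
        int twoBack = 1;  // one way to stand at the bottom (step 0)
        int oneBack = 1;  // one way to reach step 1
        for (int i = 2; i <= n; i++) {
            int current = oneBack + twoBack;
            twoBack = oneBack;
            oneBack = current;
        }
        return oneBack;
    }
}
```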

## Bit Masking

Occasionally in dynamic programming we will need to represent a set. We will explore the idea of Bit Masking outside of DP, as it is easier to examine in isolation. Let’s take another LeetCode problem.

> Given a string array `words`, return the maximum value of `length(word[i]) * length(word[j])` where the two words do not share common letters. If no such two words exist, return `0`.

As we are looking for words sharing common letters, it is enough to represent the two words as sets of their letters, then check if there is an intersection.

```
hello = {h, e, l, l, o}
jealous = {j, e, a, l, o, u, s}
hello ∩ jealous = {e, l, o}
```

If the intersection is non-empty, we know there is a clash. We can use bits to represent these sets by setting a 1 wherever a letter has appeared in the word.

Notice how we don’t care how many times a letter has come up, or where, only that they have appeared.

From here we can see a reasonably clear solution to the problem. Create the bit mask of each word whilst keeping track of the maximum length of the word with that mask. We then iterate through all the masks, finding the ones with no bits in common and calculating their word length product.
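A minimal sketch of that idea in Java, using a pairwise check over the masks (and skipping the mask-to-maximum-length bookkeeping mentioned above; the class name is illustrative):

```java
public class MaxWordProduct {
    // Build a 26-bit mask per word: bit i is set if letter ('a' + i) appears.
    // Two words share no letters exactly when their masks AND to zero.
    public static int maxProduct(String[] words) {
        int n = words.length;
        int[] masks = new int[n];
        for (int i = 0; i < n; i++) {
            for (char c : words[i].toCharArray()) {
                masks[i] |= 1 << (c - 'a');
            }
        }
        int best = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                if ((masks[i] & masks[j]) == 0) {  // empty intersection
                    best = Math.max(best, words[i].length() * words[j].length());
                }
            }
        }
        return best;
    }
}
```

Comparing two masks is a single AND instruction, which is why the bit representation beats comparing character sets directly.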

## Digit Dynamic Programming

Right, hold onto your hats because it gets a bit hairy here. This took me ages to get my head round, but maybe you’re a bit smarter than me!

Sometimes it becomes useful to treat a number as an array of its digits, most significant first. For example, `4321` becomes the array `[4, 3, 2, 1]`. Starting at the 4 we can generate all of the numbers below our original number, whilst dynamically checking them for a property. We’ll bring in an example later, so don’t worry if this seems odd for the time being.

However, you should see that generating the numbers beginning with 4 is not quite as straightforward as it seems. A rudimentary generation algorithm may come up with:

```
40**
41**
42**
43**
44**
```

where `*` represents some digits yet to be calculated in a recursion. `40**` could become `4000`, `4001`, … However, if we look, the last entry is too high: anything matching `44**` is at least `4400`, above our limit of `4321`. We need a way of limiting this behaviour. Enter `tight`.

The `tight` variable is used to tell us if we need to look at the digit in our original number. For example, `tight` would be set to `0` (or off) if we were looking for the numbers after `40**`. This means the next digit would be able to take any of the values from `400*` to `409*` (see why I started to get confused?).

At the start `tight` will be set to `1`. This means that we need to check our original number before generating the next digit. We need to keep our original number in mind: `4321`.

```
40** - Is 0 less than 3? Yes, tight can be switched off
41** - Is 1 less than 3? Yes, tight can be switched off
42** - Is 2 less than 3? Yes, tight can be switched off
43** - Is 3 less than 3? No, stay tight and check the next digit!
```

Once we know that we are only generating numbers that cannot reach this upper limit, we can set `tight` to off.

```
400* - Generate any numbers after this, they'll all be fine
401* - Generate any numbers after this, they'll all be fine
...
```

It’s a tricky one, but hopefully after a couple of read-throughs you should get it. Essentially this is a clever way of generating all of the numbers up to and including a given number.

As we’re generating digits in this fashion we can also check them for whichever property we need. A useful fact is that if we want a property `f(x)` accumulated over all the numbers between two bounds `a` and `b` inclusive, we can calculate it as `f(b) - f(a - 1)`, where `f(x)` accumulates the property over everything up to and including `x`. Again, a little abstract, so let’s go to our example! We’re borrowing one from Geeks For Geeks.

> Given two integers `a` and `b`, find the sum of all the digits appearing in the integers between `a` and `b`. For example if a = 5 and b = 11, then the answer is 38 (5 + 6 + 7 + 8 + 9 + (1 + 0) + (1 + 1)).

Hopefully the walkthrough above, together with the commented solution below, sheds some clarity on the situation.
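A commented sketch of such a solution (class and method names are illustrative): `solve` walks the digit array most significant first, carrying `tight` exactly as described above, and sums the digit sums of every number from `0` up to the bound. The final answer then uses the `f(b) - f(a - 1)` trick.

```java
import java.util.Arrays;

public class DigitSum {
    static int[] digits;   // digits of the upper bound, most significant first
    static long[][] memo;  // memo[pos][sum], cached only for non-tight states

    // Sum of (sum-so-far + digit sum of the remaining suffix) over every
    // number we can still form from position pos. While tight is true we may
    // not exceed the bound's digit at this position.
    static long solve(int pos, int sum, boolean tight) {
        if (pos == digits.length) {
            return sum;  // one complete number generated; contribute its digit sum
        }
        if (!tight && memo[pos][sum] != -1) {
            return memo[pos][sum];  // tight states are unique, so only these repeat
        }
        int limit = tight ? digits[pos] : 9;
        long total = 0;
        for (int d = 0; d <= limit; d++) {
            // tight stays on only while we keep matching the bound's digits
            total += solve(pos + 1, sum + d, tight && d == limit);
        }
        if (!tight) {
            memo[pos][sum] = total;
        }
        return total;
    }

    // f(x): sum of the digits of every integer in 0..x inclusive
    static long upTo(long x) {
        if (x < 0) {
            return 0;
        }
        String s = Long.toString(x);
        digits = new int[s.length()];
        for (int i = 0; i < s.length(); i++) {
            digits[i] = s.charAt(i) - '0';
        }
        memo = new long[s.length()][9 * s.length() + 1];
        for (long[] row : memo) {
            Arrays.fill(row, -1);
        }
        return solve(0, 0, true);
    }

    // f(b) - f(a - 1) gives the inclusive range a..b
    static long digitSumBetween(long a, long b) {
        return upTo(b) - upTo(a - 1);
    }
}
```

For the quoted example, `digitSumBetween(5, 11)` works out to 38, matching the expected answer.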

Digit DP is a fairly confusing area, so supplementary reading may be required. However I hope this rough introduction provides an intuitive basis.

# Conclusion

In conclusion we have covered dynamic programming, its definition, motivation and a number of use cases.