# Dynamic Programming

Dynamic Programming (DP) is an algorithmic technique for solving an optimization problem by breaking it down into smaller sub-problems and taking advantage of the fact that the best solution to the overall problem is determined by the best solutions to its sub-problems.

Dynamic Programming is primarily a recursion optimization. We can use Dynamic Programming to optimize any recursive solution that has repeated calls for the same inputs. The idea is to save the results of sub-problems so that we don’t have to recalculate them later. This simple optimization typically reduces the time complexity from exponential to polynomial.

For example, if we write a simple recursive solution for the Fibonacci numbers, the time complexity is exponential, but if we optimize it by storing sub-problem solutions, the time complexity becomes linear. We can store the result of each sub-problem so we can reuse it.

The example below illustrates this.

We can notice that there are many calls of the function with the exact same input value: fib(3) was calculated three times from scratch, and fib(2) was calculated five times! These recalculations are quite expensive. Instead, we can store each Fibonacci value once it has been calculated and reuse it in later calls.
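As a minimal sketch of this behavior (fib(6) is an assumed example input, not from the original text), here is a naive recursive Fibonacci instrumented to count how often each sub-problem is solved:

```python
from collections import Counter

calls = Counter()  # how many times fib(n) is invoked for each n

def fib(n):
    """Naive recursive Fibonacci: exponential time, since sub-problems repeat."""
    calls[n] += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(6))              # 8
print(calls[3], calls[2])  # 3 5 — fib(3) is computed three times, fib(2) five times
```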

Two ways in which dynamic programming can be applied:

**Top Down:**

The problem is broken down recursively. If a sub-problem has already been solved, the saved value is returned; otherwise, the sub-problem is evaluated for the first time and its value is memoized (stored), so that later calls can simply look it up. For computationally intensive programs, memoization is a perfect fit. Don’t mix up memoization and memorization.

memoize != memorize
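A top-down sketch of Fibonacci using Python’s built-in memoization decorator (one possible implementation, not the only way to memoize):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down (memoized) Fibonacci: each sub-problem is solved only once."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 — linear time, unlike the naive exponential version
```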

**Bottom up:**

This approach avoids recursion entirely, which removes the overhead that recursion creates, and tabulating results avoids recalculating the same values. The solutions to the smallest sub-problems are computed first and then combined into the solution of the overall problem. (The examples provided later in the article will help you understand this better.)
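The same Fibonacci computation written bottom-up, as a sketch; keeping only the last two values is one way to space-optimize the table:

```python
def fib(n):
    """Bottom-up Fibonacci: build up from the smallest sub-problems, no recursion."""
    if n <= 1:
        return n
    prev, curr = 0, 1           # fib(0), fib(1)
    for _ in range(n - 1):      # compute fib(2) .. fib(n)
        prev, curr = curr, prev + curr
    return curr

print(fib(10))  # 55
```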

Let’s identify some pros and cons for both methods:

**Top-down approach:**

**Pros:**

1) Easier to implement.

2) Only necessary states are computed, and if a state is not needed, it is skipped.

**Cons:**

1) Easy to run out of space in the stack.

2) Hard to reason about time and space complexity.

**Bottom-up approach:**

**Pros:**

1) Can be space optimized.

2) No recursion overhead.

3) Easy to reason about time and space complexity.

**Cons:**

1) All states must be computed since we are going in a loop.

2) Usually more difficult to implement.

# Different algorithms of dynamic programming

There are various dynamic programming algorithms that can simplify a complicated problem by breaking it down into simpler sub-problems in a recursive manner.

These algorithms are listed below:

**1) Bellman-Ford’s Algorithm for Shortest Paths:**

This algorithm helps to find the shortest path from a vertex to all other vertices of a weighted graph. It is like Dijkstra’s algorithm, but it can work with graphs in which edges can have negative weights.

These negative-weight edges can create negative-weight cycles: cycles that reduce the total path distance by coming back to the same point.

The Bellman-Ford algorithm works by overestimating the length of the path from the starting vertex to all other vertices. It then iteratively relaxes those estimates by finding new paths that are shorter than the previously overestimated ones. By doing this repeatedly for all vertices, we can guarantee that the result is optimal.

To find the shortest path using this algorithm, we need to follow below steps:

1) Start with the weighted graph.

2) Choose a starting vertex and assign infinity path values to all other vertices.

3) Visit each edge and relax the path distances if they are inaccurate.

4) We need to do this V - 1 times, because in the worst case a shortest path can contain V - 1 edges, so a vertex’s path length might need to be readjusted that many times.

5) After all the vertices have their final path lengths, we check whether a negative cycle is present.
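The steps above can be sketched as follows; the graph, vertex names, and edge weights are made-up example data:

```python
def bellman_ford(vertices, edges, source):
    """edges is a list of (u, v, weight) tuples. Returns a distance dict,
    or None if a negative-weight cycle is reachable from the source."""
    dist = {v: float("inf") for v in vertices}  # infinity everywhere...
    dist[source] = 0                            # ...except the starting vertex
    for _ in range(len(vertices) - 1):          # repeat V-1 times
        for u, v, w in edges:                   # relax every edge
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                       # any further improvement
        if dist[u] + w < dist[v]:               # means a negative cycle
            return None
    return dist

edges = [("A", "B", 4), ("A", "C", 5), ("B", "C", -3), ("C", "D", 4)]
print(bellman_ford(["A", "B", "C", "D"], edges, "A"))
# {'A': 0, 'B': 4, 'C': 1, 'D': 5}
```

Note how the negative edge B→C makes the path A→B→C (cost 1) shorter than the direct edge A→C (cost 5), which Dijkstra’s algorithm could not handle safely.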

**2) Knapsack Problem Algorithm:**

This is a classic problem in combinatorics. A knapsack is simply a bag of a given capacity.

For example, suppose a supermarket contains n packages (n ≤ 100), where package i has weight W[i] ≤ 100 and value V[i] ≤ 100. A thief breaks into the supermarket but cannot carry a total weight exceeding M (M ≤ 100). The problem to be solved: which packages should the thief take away to get the highest total value?

Input: -

· Maximum weight M and the number of packages n.

· Array of weight W[i] and corresponding value V[i].

Output: -

· The maximum total value achievable within the weight capacity M.

· The set of packages the thief takes away.

Knapsack algorithm can be further divided into two types:

· **0/1 Knapsack problem:** In this type, each package is either taken whole or not taken at all; the thief cannot take a fractional amount of a package or take a package more than once. This type can be solved using a dynamic programming approach.

· **Fractional Knapsack problem:** Here fractions of a package may be taken, and the problem can be solved by a greedy strategy.
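A sketch of the 0/1 variant with dynamic programming, including a backtracking pass to answer “which packages?”; the weights, values, and capacity below are made-up example data:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack. Returns (best total value, indices of packages taken)."""
    n = len(weights)
    # dp[i][w] = best value using the first i packages with capacity w
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]              # case 1: skip package i
            if weights[i - 1] <= w:              # case 2: take package i
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    # Backtrack through the table to recover which packages were taken.
    taken, w = [], capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:             # value changed => package i taken
            taken.append(i - 1)
            w -= weights[i - 1]
    return dp[n][capacity], sorted(taken)

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # (7, [0, 1])
```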

**3) Floyd-Warshall Algorithm:**

This algorithm solves the all-pairs shortest path problem on a weighted graph. It generates a matrix representing the minimum distance from every node to every other node in the graph.

As a first step, we initialize the solution matrix to be the same as the input graph matrix. Then we update it by considering each vertex in turn as an intermediate vertex: we pick vertices one by one and update every shortest path that includes the picked vertex as an intermediate vertex. When we pick vertex k, we have already considered vertices {0, 1, 2, ..., k-1} as intermediate vertices. For every pair (i, j) of source and destination vertices, there are two possible cases:

1) k is not an intermediate vertex on the shortest path from i to j, so dist[i][j] is kept as it is.

2) k is an intermediate vertex on the shortest path from i to j. We update dist[i][j] to dist[i][k] + dist[k][j] if dist[i][j] > dist[i][k] + dist[k][j].

The following figure shows the above optimal substructure property in the all-pairs shortest path problem.
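The update rule above can be sketched directly in code; the 4-vertex adjacency matrix is an assumed example, with INF marking missing edges:

```python
INF = float("inf")

def floyd_warshall(graph):
    """graph: adjacency matrix with INF for missing edges.
    Returns the matrix of all-pairs shortest distances."""
    n = len(graph)
    dist = [row[:] for row in graph]  # start from the input matrix
    for k in range(n):                # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

g = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
print(floyd_warshall(g))
```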

**4) Tower of Hanoi**

The Tower of Hanoi is a mathematical puzzle that consists of three towers (pegs) and several discs (rings).

These discs come in various sizes and are stacked so that each smaller disc sits on top of a larger one. Variants of the puzzle change the number of discs while keeping the number of towers the same.

**Rules:**

The goal is to transfer all the discs to another tower without disturbing the arrangement series. The following are some Tower of Hanoi rules to follow:

· At any given time, only one disc can be moved between the towers.

· Only the topmost disc of a tower can be moved.

· A big disc cannot be stacked on top of a small disc.

**Algorithm:**

To write an algorithm for the Tower of Hanoi, we must first learn how to solve the problem with fewer discs, such as one or two. The three towers are labeled source, destination, and aux (the auxiliary peg, used only to help move the discs). If we have only one disc, we can simply move it from the source to the destination peg.

If there are two discs,

· The smaller (top) disc is first transferred to the aux peg.

· The wider (bottom) disc is then moved to the destination peg.

· Finally, the smaller disc is moved from aux to destination peg.

As a result, we can now build an algorithm for the Tower of Hanoi with more than two discs by splitting the stack into two parts: the largest (nth) disc in one part, and the remaining n-1 discs in the other.

Our goal is to move disc n from source to destination and then move the other n-1 discs onto it. We can imagine applying the same procedure recursively to every disc.

The steps to take are as follows:

Step 1: Move n-1 discs from the source to the auxiliary.

Step 2: Move the nth disc from the source to the destination.

Step 3: Move the n-1 discs from the auxiliary to the destination.
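The three steps above can be written directly as a recursive sketch (the peg names are arbitrary labels):

```python
def hanoi(n, source, destination, aux, moves=None):
    """Move n discs from source to destination, collecting each move as a tuple."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((1, source, destination))        # one disc: move it directly
        return moves
    hanoi(n - 1, source, aux, destination, moves)     # step 1: n-1 discs to aux
    moves.append((n, source, destination))            # step 2: nth disc across
    hanoi(n - 1, aux, destination, source, moves)     # step 3: n-1 discs onto it
    return moves

for disc, src, dst in hanoi(3, "source", "destination", "aux"):
    print(f"Move disc {disc} from {src} to {dst}")    # 2^3 - 1 = 7 moves in total
```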

**5) Longest Common Subsequence:**

The longest common subsequence (LCS) is defined as the longest subsequence that is common to all the given sequences, where the elements of the subsequence are not required to occupy consecutive positions within the original sequences. If S1 and S2 are the two given sequences, then Z is a common subsequence of S1 and S2 if Z is a subsequence of both. Furthermore, the elements of Z must appear at strictly increasing indices in both S1 and S2.

In a strictly increasing sequence, the indices of the elements chosen from the original sequences must be in ascending order in Z.

If S1 = {B, C, D, A, A, C, D}

Then, {A, D, B} cannot be a subsequence of S1 as the order of elements is not the same.
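A sketch of the standard LCS dynamic program, using the article’s S1; the second sequence S2 = "CBDAD" is a made-up example:

```python
def lcs(s1, s2):
    """Return (length, one longest common subsequence) of s1 and s2."""
    m, n = len(s1), len(s2)
    # dp[i][j] = LCS length of the prefixes s1[:i] and s2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack through the table to recover one LCS.
    out, i, j = [], m, n
    while i and j:
        if s1[i - 1] == s2[j - 1]:
            out.append(s1[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[m][n], "".join(reversed(out))

print(lcs("BCDAACD", "CBDAD"))  # (4, 'BDAD')
```

Note that the recovered subsequence "BDAD" picks its elements at strictly increasing indices in both strings, exactly as the definition requires.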

**6) Matrix Chain Multiplication:**

It is an optimization problem for finding the most efficient way to multiply a given sequence of matrices. The problem is not actually to perform the multiplication but merely to decide the sequence of the matrix multiplications involved.

Matrix multiplication is associative: no matter how the product is parenthesized, the result obtained remains the same.

For example, for four matrices A, B, C and D, we would have:

((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD))

However, the order in which the product is parenthesized affects the number of simple arithmetic operations needed to compute the product or the efficiency.
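A sketch of the standard interval DP for choosing the cheapest parenthesization; the matrix dimensions are a made-up example (A is 10×20, B is 20×30, C is 30×40):

```python
def matrix_chain_order(dims):
    """Matrix i has size dims[i-1] x dims[i].
    Returns the minimal number of scalar multiplications for the whole chain."""
    n = len(dims) - 1                      # number of matrices
    # m[i][j] = minimal cost of multiplying matrices i..j (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):         # chain length, smallest first
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k: (i..k) times (k+1..j)
            m[i][j] = min(m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain_order([10, 20, 30, 40]))  # 18000, from the split ((AB)C)
```

Here (AB)C costs 10·20·30 + 10·30·40 = 18000 multiplications, while A(BC) would cost 20·30·40 + 10·20·40 = 32000, so the parenthesization matters even though the product itself is the same.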