Dynamic Programming

What is Dynamic Programming?

thisara jayaweera
3 min read · Jul 10, 2021

Dynamic Programming (DP) is an algorithmic approach for solving an optimization problem by breaking the main problem into smaller subproblems (simpler versions of the overall problem) and storing their intermediate results, which makes the solution more efficient by avoiding repeated work.

When to use dynamic programming?

There are two ways to identify whether dynamic programming can be used to tackle a problem.

The first way is to check whether the problem fits the definition of dynamic programming. That means checking whether the problem has optimal substructure (the problem can be broken down into smaller problems whose solutions combine into the optimal solution; essentially, recursion) and overlapping subproblems.
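To make overlapping subproblems concrete, here is a minimal Python sketch (my own illustration, separate from the LIS example later in this post) using the classic Fibonacci numbers: fib(n) is built from the smaller subproblems fib(n-1) and fib(n-2), those subproblems repeat, and caching their results avoids recomputing them.

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # store each subproblem result the first time it is computed
def fib(n):
    if n < 2:              # base cases: fib(0) = 0, fib(1) = 1
        return n
    # Optimal substructure: fib(n) is combined from two smaller subproblems.
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed from only 31 distinct subproblem calls
```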

The other way is to check whether the problem involves maximization, minimization, optimization, or counting. If it does, there is a strong chance the problem can be tackled with dynamic programming.

How to Solve Problems using Dynamic Programming.

Let's take a simple example of problem solving with dynamic programming. Here I will use finding the “Longest Increasing Subsequence” (LIS) in a given integer array. (The LIS problem is to find a subsequence of a given sequence in which the subsequence’s elements are in sorted order, lowest to highest, and in which the subsequence is as long as possible.) Since we have to find the longest subsequence, this is a maximization problem.

To get a better understanding of the problem, I will take the array [4,1,8,2,4,9] as an example.

Step 01: Visualize Examples.

Visualized example

Here I have drawn every possible increasing subsequence, which makes it easy to see the LIS: it is [1,2,4,9], and the length of the LIS (LLIS) is 4. Looking at the diagram also reveals another relationship between these subsequences and a graph. (When solving this kind of problem you should also focus on special cases, such as arrays sorted in ascending and descending order; otherwise it is hard to arrive at the generalized idea.)

LLIS = Longest Path in DAG + 1

(A Directed Acyclic Graph is a directed graph with no directed cycles.)
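To make the DAG relationship concrete, below is a small Python sketch (my own illustration) that builds the graph for the example array: each index is a node, and there is an edge from index j to index i whenever j < i and array[j] < array[i].

```python
# Build the DAG for the example array: nodes are indices,
# and edge j -> i exists when j < i and array[j] < array[i].
array = [4, 1, 8, 2, 4, 9]
edges = [(j, i) for i in range(len(array)) for j in range(i) if array[j] < array[i]]
print(edges)
# [(0, 2), (1, 2), (1, 3), (1, 4), (3, 4), (0, 5), (1, 5), (2, 5), (3, 5), (4, 5)]
# The longest path is 1 -> 3 -> 4 -> 5 (values 1, 2, 4, 9): 3 edges, so LLIS = 3 + 1 = 4.
```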

Step 02: Find a subproblem.

We can define our subproblem like this,
LLIS[i] = length of the LIS ending at the i-th index.
For the example array [4,1,8,2,4,9]:
if i=0 then LLIS[i] is 1,
if i=3 then LLIS[i] is 2,
if i=4 then LLIS[i] is 3

Step 03: Find a connection among subproblems and generalize the problem

By looking at the examples given above, we can see the following:

LLIS[i] = 1 + max( LLIS[j] )
where 0 <= j < i and array[j] < array[i],
or LLIS[i] = 1 if no such j exists (for example, when i = 0).

In other words, only indices j whose elements are smaller than array[i] are considered, because the subsequence must stay increasing.
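As a quick sanity check of this recurrence, here is a top-down sketch (memoized recursion; the function name llis is my own choice) that computes LLIS[i] directly from the formula above for the example array:

```python
from functools import lru_cache

array = [4, 1, 8, 2, 4, 9]

@lru_cache(maxsize=None)
def llis(i):
    # LLIS[i] = 1 + max(LLIS[j]) over all j < i with array[j] < array[i];
    # if no such j exists, LLIS[i] = 1.
    best = 1
    for j in range(i):
        if array[j] < array[i]:
            best = max(best, 1 + llis(j))
    return best

print([llis(i) for i in range(len(array))])     # [1, 1, 2, 2, 3, 4]
print(max(llis(i) for i in range(len(array))))  # answer: 4
```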

Step 04: Implement the generalized solution by solving subproblems.

At the beginning, we assume that every index i of the array array[] has an LLIS of one. As we traverse the array from left to right, we examine each entry at index i. For every index j before i (where j < i), if the element at index i is greater than the element at index j and LLIS[i] <= LLIS[j], then we set LLIS[i] = LLIS[j] + 1. Finally, we pick the maximum of all LLIS values.

Algorithm to find the LLIS using dynamic programming
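Below is a minimal Python sketch of the bottom-up approach described in Step 04 (the function name length_of_lis is my own choice):

```python
def length_of_lis(array):
    """Bottom-up DP for the length of the longest increasing subsequence, O(n^2) time."""
    if not array:
        return 0
    # Assume every index initially has an LLIS of 1 (the element by itself).
    llis = [1] * len(array)
    # Traverse left to right; for each i, check every earlier index j.
    for i in range(1, len(array)):
        for j in range(i):
            # Extend the subsequence ending at j if it stays increasing
            # and yields a longer subsequence ending at i.
            if array[j] < array[i] and llis[i] <= llis[j]:
                llis[i] = llis[j] + 1
    # The answer is the maximum of all LLIS values.
    return max(llis)

print(length_of_lis([4, 1, 8, 2, 4, 9]))  # 4
```

On the example array this prints 4, which matches the [1,2,4,9] subsequence found in Step 01.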

