Dynamic Programming: Reforming your problem-solving skills

Shubhangi Gupta
Published in CodinGurukul · 3 min read · Jul 12, 2019

Dynamic Programming is an approach to solving complex problems. Like recursion, it works by breaking a problem into subproblems. Unlike plain recursion, however, dynamic programming stores the result of each subproblem, which saves a lot of computational time.

Advantages of dynamic programming:

  • It reduces time complexity from exponential to polynomial, since it saves the time spent recomputing values that have already been solved.
  • In both dynamic programming methods (memoization and tabulation), the time complexity is lower than that of plain recursion.

Characteristics of Dynamic Programming

Overlapping: Like the divide-and-rule strategy, dynamic programming takes a divide-and-conquer approach to the subparts of a problem. Subproblems overlap when the same subproblem needs to be solved repeatedly; its result is stored in a table and looked up whenever the problem needs it again.

Most people can relate to remembering the binomial coefficient formula nCr. What we remember is that when r = 1 the binomial coefficient is n, and when n = r it becomes 1. We memorize these cases once, just to save time, and reuse them throughout the problem for every value of n.
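
To make this concrete, here is a minimal Python sketch (the function name binom is mine, not from the article) that computes nCr with Pascal's rule while returning the remembered special cases directly instead of recomputing them:

```python
# Binomial coefficient via Pascal's rule, with the remembered base cases:
# nC0 = nCn = 1 and nC1 = n are returned directly, never recomputed.

def binom(n, r, memo=None):
    if memo is None:
        memo = {}
    if r == 0 or r == n:
        return 1
    if r == 1:
        return n
    if (n, r) not in memo:
        memo[(n, r)] = binom(n - 1, r - 1, memo) + binom(n - 1, r, memo)
    return memo[(n, r)]

print(binom(10, 3))  # 120
```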

Furthermore, when you compute the sum of an array, you store the value in a temporary variable. So every time the array grows by one element, you do not need to recount everything again.
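
In code, that is just a running total, as in this small illustrative snippet:

```python
# Keep a running total so that adding one more element does not require
# re-summing the whole array.

values = [4, 8, 15, 16]
total = sum(values)    # computed once and stored

values.append(23)      # the array grows by one element
total += values[-1]    # update the stored sum instead of recounting
print(total)           # 66
```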

A concrete programming example is the Fibonacci series, where (in this article's convention) the values at 0 and 1 both return 1 and every larger value is built from smaller ones. The subproblems that keep reappearing are what we call overlapping: you need their answers again and again, so you save the solutions.
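
A minimal sketch of the naive recursive version, using the convention above that the values at 0 and 1 both return 1, shows why the overlap matters: the same subproblems are recomputed again and again, giving exponential time.

```python
# Naive recursive Fibonacci with base cases fib(0) = fib(1) = 1.
# Each call to fib(n) spawns calls to fib(n - 1) and fib(n - 2), so the same
# subproblems (fib(3), fib(2), ...) are solved over and over: exponential time.

def fib(n):
    if n <= 1:          # the overlapping base cases
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 89 under this convention; fib(40) is already painfully slow
```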

Dynamic Programming Methods

1. Top-down (Memoization): This approach resembles recursion, except that it looks for a value in the table before computing it. Before solving a subproblem, we look it up in the table; if the result is already there, it is reused, otherwise the subproblem is solved and its result is stored in the table. Here the table is filled on demand, so not every entry is necessarily computed. In the Longest Common Subsequence (LCS) problem, for example, the memoized version may leave some entries unfilled because they are never needed.
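
A hedged sketch of the top-down idea applied to Fibonacci (the helper and table names below are illustrative, not from the article): the table is consulted before computing, and only entries that are actually demanded get filled.

```python
# Top-down (memoization): look the value up in the table first,
# compute and store it only if it is missing. The table is filled on demand.

def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:            # result already in the lookup table
        return memo[n]
    if n <= 1:               # base cases, as in the convention above
        result = 1
    else:
        result = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    memo[n] = result         # store the subproblem's result for reuse
    return result

print(fib_memo(50))  # fast: each subproblem is solved exactly once
```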

2. Bottom-up (Tabulation): This approach starts from the initial, smallest cases and builds upward. When we work over an array, we start from 0 and go up to n. Taking the binomial coefficient example again, we first resolve nC0, nC1, nC2 and so on; building the solution in this increasing order is what makes it bottom-up. Like memoization, tabulation fills a table with results, but here the entries are filled in order, from the first entry up to n, one by one like a queue. This ensures that each and every entry of the lookup table is filled.
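
The bottom-up counterpart, again as an illustrative sketch: the table is filled from the first entry up to n, one by one, so every entry is computed.

```python
# Bottom-up (tabulation): fill the table from entry 0 up to n, one by one.
# Every entry is computed, in order, using entries already in the table.

def fib_tab(n):
    table = [1] * (n + 1)    # table[0] and table[1] are the base cases
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(50))
```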

Optimal substructure: An optimal solution to the whole problem can be built from optimal solutions to its subproblems.

In graph theory, for example, we first establish that a path exists between two nodes and then compute the shortest one. If the shortest path between two designated nodes passes through an intermediate node, it is made up of the shortest paths to and from that intermediate node. The Floyd-Warshall and Bellman-Ford algorithms take this approach.
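
As a sketch of optimal substructure at work (the graph and variable names below are illustrative), here is a compact Floyd-Warshall implementation: the shortest distance between two nodes is improved by allowing one more intermediate node at a time, reusing optimal sub-solutions.

```python
import math

# Floyd-Warshall: dist[u][v] is improved by allowing one more intermediate
# node k at a time, combining optimal sub-paths (optimal substructure).
# The distance matrix is updated in place and returned.

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):                      # allow node k as an intermediate
        for u in range(n):
            for v in range(n):
                if dist[u][k] + dist[k][v] < dist[u][v]:
                    dist[u][v] = dist[u][k] + dist[k][v]
    return dist

INF = math.inf
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))  # all-pairs shortest distances
```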

How to solve a Dynamic Programming problem?

1. Identify it as a dynamic programming problem.

Step 1: Look for a problem that asks you to maximize or minimize some quantity as its size grows or shrinks, or to count possible arrangements or subsets. Such a problem can be solved in subparts: the approach is to break it apart and combine the results.

Step 2: Check that it satisfies the characteristics of dynamic programming. All dynamic programming problems exhibit overlapping subproblems; a few also fall into the optimal-substructure category.

2. Construct a state expression, step by step, using as few parameters as possible.

3. With reference to the previous state, construct the state relation (the recurrence).

4. Apply either tabulation or memoization, as sketched below.
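
As a small end-to-end illustration of these four steps (the problem and names are my own choice, not from the article), consider counting the ways to climb n stairs taking 1 or 2 steps at a time: the state is dp[i], the number of ways to reach step i, the relation is dp[i] = dp[i - 1] + dp[i - 2], and the table is filled by tabulation.

```python
# Step 1: counting arrangements -> a dynamic programming problem.
# Step 2: state expression with one parameter: dp[i] = ways to reach step i.
# Step 3: relation to previous states: dp[i] = dp[i - 1] + dp[i - 2].
# Step 4: tabulation (bottom-up filling of dp).

def count_ways(n):
    if n <= 1:
        return 1
    dp = [0] * (n + 1)
    dp[0], dp[1] = 1, 1      # one way to stay put, one way to take a single step
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(count_ways(5))  # 8
```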

