The simple formula for solving any dynamic programming problem
Introduction to dynamic programming — how to solve any dynamic programming problem using the FAO formula
Of all the possible interview topics out there, dynamic programming seems to strike the most fear into everyone’s hearts. But it doesn’t have to be that way. After holding classes for over 300 students, I started to see a pattern. Students aren’t really afraid of dynamic programming itself. They are scared because they don’t know how to approach the problems. In this blog, we are going to understand how to formulate solutions for dynamic programming problems.
So, let’s start by taking a look at Jonathan Paulson’s amazing Quora answer.

I suppose this gives you a hint about dynamic programming. Let me start by asking a very simple question: Do you want to solve the same problem which you have already solved? If you ask me, I would definitely say no, and so would dynamic programming.
What is dynamic programming?
According to Wikipedia, dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems. It’s very important to understand this concept. Dynamic programming is very similar to recursion. But when the same subproblems are solved multiple times, dynamic programming uses memoization (usually a table) to store the results of subproblems so that the same subproblem won’t be solved twice.

Dynamic programming is basically recursion plus some common sense. What it means is that recursion helps us divide a large problem into smaller problems. And common sense says that before solving any problem, you should first check whether the same problem has already been solved. Only if it hasn’t, solve it and store the solution somewhere for later use. Storing the results of subproblems so they can later be reused to solve other subproblems is called memoization. The intuition behind dynamic programming is that we trade space for time. Instead of re-solving subproblems, which would take a lot of time, we use space to store the results of the sub-problems and save time later.
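As a quick aside (this is not part of the FAO formula), Python’s standard library even ships a ready-made memoization helper, functools.lru_cache; the hand-rolled dictionaries later in this post do the same job explicitly. Here is a minimal sketch, where expensive is just a placeholder for any costly, repeatedly-called subproblem:

from functools import lru_cache

@lru_cache(maxsize=None)  # remember every result so the same input is never recomputed
def expensive(x):
    print("computing for", x)  # runs only on a cache miss
    return x * x

expensive(4)  # prints "computing for 4" and computes the result
expensive(4)  # prints nothing: the cached result is returned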
The FAO formula
Based on our experience with dynamic programming, the FAO formula is very helpful for solving any dynamic programming problem. The FAO formula consists of three steps: Find the first solution, Analyze the solution, and Optimize the solution. Let’s start with a very trivial example: generating the n-th Fibonacci number.
In mathematical terms, the sequence F(n) of Fibonacci numbers is defined by the recurrence relation
F(n) = F(n-1) + F(n-2), with base values F(0) = 0 and F(1) = 1.
So the problem is: given a number n, print the nth Fibonacci number.
1. Find the First Solution
The first step to solve any problem is to find the brute force solution. Here is a simple Python method that is a direct recursive implementation of the mathematical recurrence relation given above.
def fib(n):
    if n <= 0:
        print("Invalid Number")
    # First Number
    elif n == 1:
        return 0
    # Second Number
    elif n == 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)


if __name__ == "__main__":
    n = 9
    print("{}th fibonacci number is: {}".format(n, fib(n)))
2. Analyze the solution:
Time Complexity: Suppose that T(n) represents the time it takes to compute the n-th Fibonacci number with this approach. We know that the recursive equation for Fibonacci is T(n) = T(n-1) + T(n-2) + O(1). What this means is that the time taken to calculate fib(n) is equal to the sum of the time taken to calculate fib(n-1) and fib(n-2), plus some constant amount of time. Solving this recurrence gives an upper bound of O(2^n), although this is not a tight upper bound. The intuition is that each recursive call results in two more recursive calls, the depth of the recursion is n, and each level can have up to twice as many calls as the previous one:
1 + 2 + 4 + … + 2^(n-1) = 2^0 + 2^1 + 2^2 + … + 2^(n-1) = O(2^n).
You can read this Stack Overflow thread if you’re curious about how to find the tight upper bound.
Now, we can observe that this implementation does a lot of repeated work (see the following recursion tree).

So this is a bad implementation for computing the nth Fibonacci number. Extra space: O(n) if we consider the function call stack size, otherwise O(1).
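To see the repeated work concretely, here is a small instrumentation sketch (the fib_naive name and the calls counter are my own additions, not part of the solution above); it simply counts how many times the recursion is entered:

calls = 0

def fib_naive(n):
    # same recursive structure as fib above, plus a global counter of invocations
    global calls
    calls += 1
    if n == 1:
        return 0
    elif n == 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

if __name__ == "__main__":
    for n in (10, 20, 30):
        calls = 0
        fib_naive(n)
        print("fib({}) needed {} calls".format(n, calls))
        # prints 109, 13529 and 1664079 -- the call count grows exponentially with n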
Now, to optimize a problem using dynamic programming, it must have two properties — the optimal substructure and overlapping subproblems. Does our problem have those?
But wait, you might say, what is an optimal substructure?
The term optimal substructure has two components — optimal and substructure. Optimal means best or most favorable, and a substructure simply means a subproblem of the main problem. A problem is said to have an optimal substructure if an optimal solution to the main problem can be constructed efficiently from optimal solutions of its subproblems. Suppose we have a network of roads and we are tasked with finding the shortest path from City A to City B. If the optimal solution to our main problem (the shortest path from A to B) is composed of optimal solutions to smaller subproblems, such as the shortest paths between intermediate cities, then the problem is said to have an optimal substructure.
And what are overlapping subproblems?
A problem has overlapping subproblems if finding its solution involves solving the same subproblem multiple times. Dynamic programming is mainly used when the solutions to the same subproblems are needed again and again. In dynamic programming, the computed solutions to subproblems are stored in a table so that they don’t have to be recomputed. Dynamic programming is not useful when there are no common (overlapping) subproblems, because there is no point in storing solutions that are never needed again.
Now, in the given example, it definitely has an optimal substructure because we can get the right answer just by combining the results of the subproblems. It also has overlapping subproblems. If you call fib(6), that recursively calls fib(5) and fib(4). fib(5) then recursively calls fib(4) and fib(3). It’s clear that fib(4) is being called multiple times during the execution of fib(6), so we have at least one overlapping subproblem. With these two properties, we know we can use dynamic programming.
3. Optimize the solution
All this means is that we will save the result of each subproblem as we solve it, and before computing any value we will check whether it has already been computed. Doing this requires only minimal changes to our recursive solution.
fib_dict = {}

def fib(n):
    if n <= 0:
        print("Invalid Number")
    # First Number
    elif n == 1:
        return 0
    # Second Number
    elif n == 2:
        return 1
    elif n in fib_dict:
        return fib_dict[n]
    else:
        value = fib(n-1) + fib(n-2)
        fib_dict[n] = value
        return value


if __name__ == "__main__":
    n = 9
    print("{}th fibonacci number is: {}".format(n, fib(n)))
A majority of dynamic programming problems can be categorized into two types:
1. Optimization problems
2. Combinatorial problems
An optimization problem is a problem of finding the best solution from all feasible solutions. And combinatorial problems expect you to figure out the number of ways to do something or the probability of some event happening.
Bottom-Up Vs Top-Down:
There are two ways to approach any dynamic programming based problems.
Top-down approach: This is the direct result of the recursive formulation of a problem. The top-down approach breaks the large problem into multiple subproblems. Suppose that the solution to the given problem can be formulated recursively using the solutions to its sub-problems, and that its sub-problems are overlapping. If this is the case, one can easily memoize (store) the solutions to the sub-problems in a table. Whenever we attempt to solve a new sub-problem, we first check the table to see if it has already been solved. If a solution has been recorded, we can use it directly. Otherwise, we solve the sub-problem and add its solution to the table.
Let’s solve the same Fibonacci problem using the top-down approach. This approach starts by dividing the problem into subproblems, unlike bottom-up (which we will explain later). For example, if we want to compute Fibonacci(4), the top-down approach will do the following:
- Fibonacci(4) -> Go and compute Fibonacci(3) and Fibonacci(2) and return the results.
- Fibonacci(3) -> Go and compute Fibonacci(2) and Fibonacci(1) and return the results.
- Fibonacci(2) -> Go and compute Fibonacci(1) and Fibonacci(0) and return the results.
- Finally, Fibonacci(1) will return 1 and Fibonacci(0) will return 0.
                  Fib(4)
                /        \
           Fib(3)        Fib(2)
          /      \       /     \
      Fib(2)  Fib(1)  Fib(1)  Fib(0)
      /     \
  Fib(1)  Fib(0)

Based on the diagram above, it seems like Fib(2) is calculated twice. But actually, Fib(2) is calculated only once, and its value is stored in the table. When we need the solution of Fib(2) later, we can directly refer to the value stored in the table.
Bottom-up approach:
Start by computing the result for the smallest subproblem (base case). Using the subproblem result, solve another subproblem and finally solve the whole problem. Another way of understanding this would be: Try solving the sub-problems first and use their solutions to build on and arrive at solutions to bigger sub-problems. This is also usually done in a tabular form by iteratively generating solutions to bigger and bigger sub-problems by using the solutions to small sub-problems. For example, if we already know the values of Fibonacci(41) and Fibonacci(40), we can directly calculate the value of Fibonacci(42).
Example
Suppose that we want to find the nth member of a Fibonacci series.
- Then, first of all, we know that Fibonacci(0) = 0, Fibonacci(1) = 1
- Then, Fibonacci(2) = 1 (Fibonacci(0) + Fibonacci(1))
- After that, Fibonacci(3) = 2 (Fibonacci(1) + Fibonacci(2))
So, we can solve the problem step by step this way:
- Find the 0th number
- Find the 1st number
- Calculate the 2nd number using 0th and 1st numbers
- Calculate the 3rd number using 1st and 2nd numbers
- By doing this we can easily find the nth number.
Bottom-up is a way to avoid recursion, saving the memory cost that recursion incurs when it builds up the call stack.
Put simply, a bottom-up algorithm starts from the beginning, while a recursive algorithm often starts from the end and works backward.
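Here is a minimal bottom-up sketch for Fibonacci (using the same 1-indexed convention as the earlier code, so fib(1) = 0 and fib(2) = 1); it fills a table from the smallest subproblems upward and uses no recursion at all:

def fib_bottom_up(n):
    # table[i] will hold the i-th Fibonacci number (1-indexed, so table[1] = 0)
    if n <= 0:
        print("Invalid Number")
        return
    table = [0] * (n + 1)
    if n >= 2:
        table[2] = 1
    # every value is built from the two smaller values already in the table
    for i in range(3, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

if __name__ == "__main__":
    n = 9
    print("{}th fibonacci number is: {}".format(n, fib_bottom_up(n)))

Since each step only looks at the previous two values, we could also keep just two variables instead of the whole table and bring the extra space down to O(1).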
Let’s try the same approach for one more problem: counting the number of ways to reach a given score in a game.
Consider a game where a player can score 3 or 5 or 10 points at a time. Given a total score n, find the number of ways to reach the given score. The order of scoring does not matter.
Examples:
Input: n = 20 -> output: 4
There are the following 4 ways to reach 20:
- (10, 10)
- (5, 5, 10)
- (5, 5, 5, 5)
- (3, 3, 3, 3, 3, 5)
Input: n = 13 -> output: 2
There are the following 2 ways to reach 13:
- (3, 5, 5)
- (3, 10)
Now that we know the problem statement and how to find the solution for smaller values, how would we determine the total number of combinations of scores that add up to larger values? How do we write a program to compute all of the ways to obtain larger values of n?
Here, let’s assume that the array S contains the given scores and n is the total given score. For example, S = {3, 5, 10} and n = 20 means that we need to find the number of ways to reach the score 20 where a player can score either 3, 5, or 10 points at a time.
1) Optimal Substructure
To count the total number of solutions, we can divide the set of all solutions into two sets.
1) Solutions that do not contain the ith index score (or S[i]).
2) Solutions that contain at least one ith index score.
Let count(S[], m, n) be the function to count the number of solutions where: m is the index of the last score that we are examining in the given array S, and n is the total given score.
It can be written as the sum of count(S[], m-1, n) and count(S[], m, n-S[m]), which is nothing but the sum of the solutions that do not contain the mth score, count(S[], m-1, n), and the solutions that contain at least one mth score, count(S[], m, n-S[m]).
Therefore, the problem has optimal substructure property as the problem can be solved using solutions to subproblems.
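For example, with S = {3, 5, 10}, m = 2 (the score 10) and n = 20, count(S[], 2, 20) = count(S[], 1, 20) + count(S[], 2, 10): the ways that use no 10 at all (built only from 3s and 5s, which is 2) plus the ways that use at least one 10 and still have to cover the remaining 10 points (which is 2), giving the expected answer of 4.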
2) Overlapping Subproblems
Following is a simple recursive implementation of the given problem in Python. The implementation simply follows the recursive structure mentioned above.
def count(n, scores, end_index):
    if n == 0:
        return 1
    elif n < 0 or end_index < 0:
        return 0
    elif scores[end_index] > n:
        return count(n, scores, end_index-1)
    else:
        return count(n-scores[end_index], scores, end_index) + count(n, scores, end_index-1)


if __name__ == "__main__":
    scores = [3, 5, 10]
    n = 20
    print("Number of ways to score {} are {}".format(n, count(n, scores, len(scores)-1)))
It should be noted that the above function computes the same subproblems again and again. See the following recursion tree for S = {1, 2, 3} and n = 5.
The function C({1}, 3) is called two times. If we draw the complete tree, then we can see that there are many subproblems being called more than once.

Since the same subproblems are called again, this problem has the overlapping subproblems property. So the given problem has both properties of a dynamic programming problem.
Time Complexity: O(2^n)
Many people have asked me how the complexity is O(2^n), so I’m including a simple explanation here:
For every score, we have two options: either we include it or we exclude it. If we think of this in binary terms, it’s 0 (excluded) or 1 (included). For example, if we have 2 scores, the options are 00, 01, 10, and 11, so there are 2² possibilities. For n scores, there are 2^n. We can do better by applying dynamic programming.
Following is the dynamic programming based solution of the above problem in Python, where we solve every subproblem exactly once. Every time, before solving a subproblem, we check whether it has already been solved. If it hasn’t, we solve it and store the result in a data structure for later use.
dict1 = {}

def count_dp(n, scores, end_index):
    key = str(n) + ":" + str(end_index)
    if key in dict1:
        return dict1[key]
    if n == 0:
        return 1
    elif n < 0 or end_index < 0:
        return 0
    elif scores[end_index] > n:
        val = count_dp(n, scores, end_index-1)
    else:
        val = count_dp(n - scores[end_index], scores, end_index) + count_dp(n, scores, end_index - 1)
    dict1[key] = val
    return val


if __name__ == "__main__":
    scores = [3, 5, 10]
    n = 20
    print("Number of ways to score {} are {}".format(n, count_dp(n, scores, len(scores)-1)))
Best of luck! If you liked this guide, feel free to forward it along! Please drop a mail with your comments to info@gildacademy.in.
Gild Academy provides the best interactive online and offline classes for data structures and algorithms in Bangalore, India. For more info, you can visit us at Gild Academy — https://www.gildacademy.in/

