Recursion Pattern Plot, Image by Author

Recursion vs Dynamic Programming — Climbing Stairs (Leetcode 70)

Shuheng Ma
Published in Geek Culture
8 min read · Oct 14, 2021


In this blog, I will use Leetcode 70, Climbing Stairs, as our example to illustrate the coding logic and complexity of recursion vs dynamic programming with Python.

This project was built by Shuheng Ma. To see the full code used, find it on GitHub. (Feel free to star/watch/fork, a lot of new code is coming : )

Note: If you are new to recursion or dynamic programming, I would strongly recommend reading the blog below first:

Recursion vs Dynamic Programming — Fibonacci

Section 1: Introduction of Recursion and Dynamic Programming

1.1 Background

Let’s start with what is Recursion

Recursion is the process in which a function calls itself until a base case is reached. During the process, complex situations are traced recursively and become simpler and simpler, so the whole structure of the process is tree-like. Recursion does not store any value until it reaches the final stage (the base case).

And dynamic programming is mainly an optimization over plain recursion. The main idea is to decompose the original question into repeatable sub-problems and store their results as sub-answers, so we do not have to re-compute earlier answers when they are needed later. In terms of big O, this optimization generally reduces time complexity from exponential to polynomial.

1.2 How to write a recursion/dynamic programming script

Dynamic Programming and Recursion are very similar:

1. Both recursion and dynamic programming start with the base case, where we initialize the starting values.

2. After we write the base case, we try to find the pattern that the problem’s logic flow follows. Once we find it, we are basically done.

3. The main difference is that recursion does not store any intermediate values, whereas dynamic programming does (see the quick sketch below).
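To make these three points concrete before we dive into Climbing Stairs, here is a minimal, illustrative sketch based on the Fibonacci example from the blog linked above (the function names are placeholders of my own, not code from that post):

```python
def fib_recursive(n: int) -> int:
    # 1) Base cases.
    if n <= 1:
        return n
    # 2) The pattern; 3) nothing in between is stored, so work is repeated.
    return fib_recursive(n - 1) + fib_recursive(n - 2)


def fib_dp(n: int) -> int:
    # 1) Base cases initialize the table.
    if n <= 1:
        return n
    dp = [0, 1]
    # 2) The same pattern; 3) every intermediate value is stored and reused.
    for i in range(2, n + 1):
        dp.append(dp[i - 1] + dp[i - 2])
    return dp[n]
```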

Let’s dig a bit deeper into Climbing Stairs.

Section 2: Example: Leetcode 70. Climbing Stairs

2.1 Problem Prompt

You are climbing a staircase. It takes n steps to reach the top.

Each time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?

2.2 Examples

Here are some examples that are easy to follow:

When n = 1, there is 1 method for us to arrive there: we can take a single 1-step to reach n = 1.

When n = 2, there are 2 methods for us to arrive there: we can either take 1 + 1 steps or take a single 2-step to reach n = 2.
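Going one step further (this case is not listed in the prompt, but it is easy to verify by hand): when n = 3, there are 3 distinct ways, namely 1 + 1 + 1, 1 + 2, and 2 + 1.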

Section 3: Two Approaches

3.3 Recursion Approach

Let’s start with the recursion approach.

3.3.1 Recursion Code

Recursion Code, Image by Author
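The original code is shown as an image; below is a minimal Python sketch of the same recursive idea described in this section (the function name and exact formatting are my own and may differ from the image):

```python
def climb_stairs(n: int) -> int:
    # Base cases: n = 0 needs no steps, n = 1 has 1 way, n = 2 has 2 ways.
    if n <= 2:
        return n
    # General pattern: ways(n) = ways(n - 1) + ways(n - 2).
    # No intermediate value is stored; each call re-computes its sub-problems.
    return climb_stairs(n - 1) + climb_stairs(n - 2)
```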

From the code above, we can see that the very first thing we do is, as always, look for the base case.

In this case, the base cases would be: when n = 0, there is no need to take any steps; when n = 1, there is only 1 method, stepping 1 unit upward; and when n = 2, we can go up either 1 + 1 or 2 units, which adds up to 2 methods.

After the base case, the next step is to think about the general pattern for how many distinct ways there are to arrive at n. Unlike Fibonacci, the problem prompt does not give us the pattern.

Let’s examine a slightly more complex case than the base case to find out the pattern.

3.3.2 Finding Patterns

Let’s think about how we should approach n = 4 recursively.

In order to step on n = 4, we have to first step on either n = 3 or n = 2, since we can only climb 1 or 2 units at a time.

Recursion Pattern Plot, Image by Author

And in order to step on n = 3, we can either step on n = 2 or n = 1.

Recursion Pattern Plot, Image by Author

But notice, we already have base cases for n = 2 and n = 1.

We know there are 2 methods to step on n = 2 and 1 method to step on n = 1. Since reaching n = 3 requires stepping on either n = 2 or n = 1, there are 2 + 1 = 3 methods in total for arriving at n = 3.

And for n = 4, we simply add up the distinct methods we have for n = 3 and n = 2. Eventually, there are 3 + 2 = 5 methods for arriving at n = 4.

General Pattern: Distinct ways at the nth stair = ways @ (n-1) + ways @ (n-2)

Let’s take a closer look at the visualization below.

3.3.3 Recursion Procedure Plot

Recursion Procedure Plot, Image by Author

We start from the very top, where n[4] = n[3] + n[2], and then we try to find the value of n[3]. Eventually, when we reach the base cases where n[2] = 2 and n[1] = 1, we simply sum up from the bottom to the top and obtain n[4] = 5.
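Written out in the same notation, the bottom-up summation for n = 4 is:

n[4] = n[3] + n[2]
     = (n[2] + n[1]) + n[2]
     = (2 + 1) + 2
     = 5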

3.4 Dynamic Programming Approach

3.4.1 Dynamic Programming Code

Dynamic Programming Code, Image by Author
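Again, the original code is an image; here is a minimal Python sketch of the dynamic programming approach described below (the names climb_stairs_dp and dp_list are my own and may differ from the image):

```python
def climb_stairs_dp(n: int) -> int:
    # Base cases handled explicitly under if, as described in the text.
    if n == 1:
        return 1
    if n == 2:
        return 2
    # dp_list[0] and dp_list[1] hold the answers for n = 1 and n = 2.
    dp_list = [1, 2]
    # Grow the list one stair at a time using
    # ways(i) = ways(i - 1) + ways(i - 2), storing every intermediate result.
    for _ in range(3, n + 1):
        dp_list.append(dp_list[-1] + dp_list[-2])
    # The last element now holds the number of distinct ways for n stairs.
    return dp_list[-1]
```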

From the code above, we can see that the very first thing we do is, again, look for the base case.

In this case, the base cases would be: when n = 1, distinct ways = 1, and when n = 2, distinct ways = 2. To achieve this, we explicitly write these two conditions under an if statement.

After we finish the base cases, we create a dynamic programming array, pre-filled with the base-case answers, to store all the intermediate and temporary results for faster computation. Instead of creating a fixed-length array up front, we let the array grow along the way as we append each new result.

We already know there is 1 way for n = 1 and 2 ways for n = 2, so let’s put these two cases into the array at index = 0 and index = 1.

The next step is to think about the general pattern for how many distinct ways each subsequent stair will have. Luckily, we already figured the pattern out in the previous recursion section.

General Pattern: Distinct ways at the nth stair = ways @ (n-1) + ways @ (n-2)

Therefore, we can simply generate every single stair by using the formula above, storing each stair’s number of distinct ways into the dp array along the way.

And this is actually the major difference separating dynamic programming from recursion: in recursion, we do not store any intermediate results, whereas in dynamic programming we store every intermediate step.

In order to calculate n = 4, we will first calculate n = 3 and store that value into the dp list we created in advance.

3.4.2 Dynamic Programming Procedure Plot

Dynamic Programming Procedure Plot, Image by Author

We start from the very left, where array[0] = 1 and array[1] = 2. To find the value of array[3] for n = 4, we first find the value of array[2] and store it into the dp_list as well. Eventually, when we reach the right side, where array[3] = 5, we can return the final result.
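In terms of the sketch above (using my climb_stairs_dp and dp_list names, which may differ from the original image), the trace for n = 4 looks like this:

```python
# dp_list after the base cases:   [1, 2]
# after computing the 3rd stair:  [1, 2, 3]
# after computing the 4th stair:  [1, 2, 3, 5]
print(climb_stairs_dp(4))  # -> 5
```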

Section 4: Time and Space Complexity

4.1 Big O for Recursion

For recursion, the time complexity would be O(2^n), since every node splits into two sub-branches (to be more precise, we could call it O(2^(n-2)) since we have provided two base cases, but it is really unnecessary to make that distinction at this level).

And the space complexity would be O(n), since the depth of the recursion tree is proportional to the size of n.

Below is the Leetcode runtime result for the recursion approach:

Leetcode Recursion Result, Image by Author

4.2 Big O for Dynamic Programming

For dynamic programming, the time complexity would be O(n), since we only loop through the stairs once. As you can see in the dynamic programming procedure chart, it is linear.

And the space complexity would also be O(n), since we need to store all the intermediate values in our dp_list, so the space we need grows linearly with n.

Below is the Leetcode runtime result for the dynamic programming approach:

Leetcode Dynamic Programming Result, Image by Author

Why does the recursion method fail at n = 38?

Let’s take a look at the visualization below.

4.3 Visualization of Time Complexity

Time Complexity Speed Comparison, Image by Author

The red line represents the time complexity of recursion, and the blue line represents dynamic programming. The x-axis is the size of n, and the y-axis is the time the algorithm consumes to compute the result.

When we try to compute n = 38, dynamic programming needs roughly 38 units of work to calculate the value, since it is O(n).

The recursion approach, however, needs on the order of 2³⁸ = 274,877,906,944 units of work before it can generate a solution, since it is O(2^n). That’s why Leetcode gave us the Runtime Error.
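If you want to reproduce a comparison like the one in the plot, a minimal timing sketch could look like the following (it assumes the climb_stairs and climb_stairs_dp functions from the earlier sketches; the exact numbers will depend on your machine):

```python
import time


def time_once(func, n: int) -> float:
    # Wall-clock seconds for a single call of func(n).
    start = time.perf_counter()
    func(n)
    return time.perf_counter() - start


# Keep n modest for the recursive version; its cost grows exponentially.
for n in (20, 25, 30):
    print(n, time_once(climb_stairs, n), time_once(climb_stairs_dp, n))
```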

Section 5: Summarization and Conclusion

As a quick recap, some takeaways are summarized below:

From the above, we can observe that although both recursion and dynamic programming can handle the task of computing Climbing Stairs, they differ majorly in how they process intermediate results and how much time they consume. Dynamic programming uses the same amount of space, but it is way faster.

Recursion Real-Time Complexity Speed Plot, Image by Author
Recursion Method Theoretical Time Complexity Speed Plot, Image by Author

In the plot above, the x-axis represents n from 35 to 41, and the y-axis represents the time consumption (in seconds) of the recursion method at each n. It is clear that the time consumption curve is closer to exponential than linear.

Although both algorithms require almost the same level of effort to understand (I hope this blog helped you a bit with that), it is rewarding once you grasp the core of the algorithm, since plenty of array questions can be solved elegantly and efficiently with dynamic programming.

If you feel you fully understand the example above and want more challenging ones, I plan to write a series of blogs using dynamic programming and recursion to solve more difficult, real-life questions in the near future. Thanks for reading!
