Recursion — A dream within a dream
There is a cheeky joke about recursion I can’t help using here: “To understand recursion, you must first understand recursion.”
That may be correct, with a caveat: like Russian dolls, recursion should not go on forever; it has to end when the base case is met.
What’s the base case?
Well, let’s start over.
What is recursion?
Recursion is when a function calls itself, until it doesn’t.
In computer science, a recursive function is made of:
- a recursive case, in which the function calls a smaller version of itself, until it reaches…
- …a base case. A base case is where the condition to stop the recursive function is met.
Without a base case, the function will loop indefinitely and won’t meet the definition of recursion.
Iterative vs Recursive
In coding, all incremental problems can be addressed iteratively with for and while loops. However, in some cases the complexity of an iterative approach can be overwhelming, and a recursive version will be more readable.
Tree and graph traversals are examples of problems for which recursion is best suited.
According to Cracking the Coding Interview “All recursive algorithms can [also] be implemented iteratively…”.
Both approaches accomplish the same thing, but the recursive version often seems cleaner, and easier to understand and maintain.
However, there is no performance benefit to using recursion; if anything, loops are often better in this regard, since recursion increases memory usage with each recursive call.
Let’s walk through an example.
The following is a recursive implementation of the power function x^y.
Indeed, it has:
- a recursive case: it calls itself with a smaller y if y is positive (or a bigger y if y is negative) until…
- … y reaches 0, which is the base case, where the function terminates.
For example, to calculate
_pow_recursion(2, 3) the recursive function does these steps:
_pow_recursion(2, 3) = 2 * _pow_recursion(2, 2)
_pow_recursion(2, 2) = 2 * _pow_recursion(2, 1)
_pow_recursion(2, 1) = 2 * _pow_recursion(2, 0)
_pow_recursion(2, 0) = 1
Each recursive call adds to the call stack.
Stacks are LIFO (Last In, First Out) objects, meaning that any added item is “pushed” onto the top of the stack, and every “pop” removes the item at the top of the stack.
Using a stack is a method of ordering operations for execution. The stack keeps track of function calls.
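A Python list behaves exactly this way, which can make the push/pop vocabulary concrete (a small illustrative sketch):

```python
stack = []
stack.append("first call")   # push onto the top of the stack
stack.append("second call")  # push onto the top of the stack
top = stack.pop()            # pop removes the most recently pushed item
# top is "second call": last in, first out
```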
The picture below shows what the stack looks like while the recursive program is being computed.
Each successive function call gets pushed onto the top of the stack, and each function below waits for the return of the function above it.
Every time a recursive function is called, a new stack frame is generated, so that each set of local variables does not interfere with any other set.
Starting from the bottom, it pushes
2 * _pow_recursion(2, 3 - 1) and so on until it reaches the base case / exit condition at the very top, which is
_pow_recursion(2, 1 - 1). At this point, the pending multiplications can finally be computed by going down the stack:
From the top of the stack, _pow_recursion(2, 0) returns 1 and its frame is popped; the frame below computes 2 * 1, which is 2. The next frame is holding 2 * 2, which is 4, but cannot return until the pending multiplication below it is resolved, and so on until it hits the last frame of the stack, where it returns what it is holding: 2 * 4, which is 8.
Big O time complexity
The maximal number of nested calls (including the first one) is called the recursion depth. In our case, it is y + 1: the function is called recursively y times before reaching the base case. Its time complexity, in Big O notation, is therefore O(y), often called linear.
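One way to see this linear growth is to count invocations with a hypothetical instrumented variant (a sketch; the counter is not part of the original function, and this version skips negative exponents for brevity):

```python
calls = 0

def _pow_recursion(x, y):
    """Recursive power, counting every invocation (non-negative y only)."""
    global calls
    calls += 1  # count this call
    if y == 0:
        return 1
    return x * _pow_recursion(x, y - 1)

_pow_recursion(2, 10)
# calls is now 11: the initial call plus 10 recursive ones,
# so the number of calls grows linearly with y.
```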
If the base case exit condition is never met, a stack overflow will happen.
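Python, for example, guards against this by capping the recursion depth: a function that never reaches a base case raises a RecursionError instead of looping forever (a sketch with a deliberately broken function of my own naming):

```python
def runaway(x, y):
    # no base case: y shrinks forever, so the calls never stop
    return x * runaway(x, y - 1)

try:
    runaway(2, 3)
except RecursionError:
    # Python caps the call stack depth rather than letting it overflow
    print("recursion limit reached")
```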
In Western art history, mise en abyme is a formal technique in which an image contains a smaller copy of itself, in a sequence appearing to recur infinitely.
Happy, indefinitely, reading!