A few insights into recursive programming

The what, the why, the how

Ryan Klarhölter
4 min read · Sep 17, 2020
[Image: Mandelbrot set]

What recursion is

To understand recursion, you first have to understand recursion — this drives even Google to despair.

But seriously folks!

In programming, recursion refers to the calling of a function by itself (strictly speaking, the mutual calling of functions is also recursive). Recursion is an alternative to iteration. An example makes it clear:

The following iterative function calculates the factorial of a given number, i.e. the product of all natural numbers less than or equal to that number (example: 3! = 1 × 2 × 3 = 6).
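The original snippet is not reproduced in this copy of the article, so here is a minimal sketch of what such an iterative version could look like, written in Python (the article does not state its language; the name `factorial` is assumed):

```python
def factorial(n):
    """Iteratively compute n! — the product 1 * 2 * ... * n."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(3))  # → 6
```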

Recursively programmed, the function looks like this:
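Again the original snippet is missing; a sketch matching the evaluation trace shown next (base case at 0) might be:

```python
def factorial(n):
    """Recursively compute n!: each call defers one multiplication."""
    if n == 0:
        return 1              # base case: 0! = 1
    return n * factorial(n - 1)

print(factorial(6))  # → 720
```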

This is unusual at first, and it is difficult to predict the result: you have to keep a whole stack of pending terms in mind before you can calculate anything. Computers have quite similar difficulties with this form of recursion. It is therefore instructive to write down the sequence of operations:

factorial(6)
→ 6 × factorial(5)
→ 6 × (5 × factorial(4))
→ 6 × (5 × (4 × factorial(3)))
→ 6 × (5 × (4 × (3 × factorial(2))))
→ 6 × (5 × (4 × (3 × (2 × factorial(1)))))
→ 6 × (5 × (4 × (3 × (2 × (1 × factorial(0))))))
→ 6 × (5 × (4 × (3 × (2 × (1 × 1)))))
→ 6 × (5 × (4 × (3 × (2 × 1))))
→ 6 × (5 × (4 × (3 × 2)))
→ 6 × (5 × (4 × 6))
→ 6 × (5 × 24)
→ 6 × 120
→ 720

Wow: you had to keep all of that in your head, and the computer keeps it in memory! So this is a weak, but unfortunately widespread, implementation of recursion.

On performance and readability or: Why recursion instead of loops?

The recursive form is shorter and more precise. There is no loop logic that distracts the reader or could contain errors. In short, it puts the “what” in the foreground, the “how” in the background (by the way, the essence of declarative programming).

Unfortunately, it is also more memory-intensive than the iterative variant. After every call, the computer has to remember where to return to in order to finish the pending calculation (`n * factorial(n - 1)`). Recursive functions usually call themselves many times, so a relatively large amount of memory is required. See the long list of operations above!

If someone told you recursion was bad, this is probably why. In that case, they probably did not know about tail recursion: a way of writing recursion that avoids the memory weakness just described. The code example from above would look like this:
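The original tail-recursive snippet is also missing; a sketch, assuming an extra accumulator parameter `acc` that carries the intermediate result (note that Python itself does not optimize tail calls, so this is purely illustrative):

```python
def factorial(n, acc=1):
    """Tail-recursive factorial: the running product travels in `acc`."""
    if n == 0:
        return acc
    # Tail call: nothing remains to be done in this frame afterwards.
    return factorial(n - 1, n * acc)

print(factorial(6))  # → 720
```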

The crucial point is the last `return` statement: the caller no longer has to wait for a function result in order to perform a calculation and return its own result. Instead, it passes an intermediate result on to the next call. There is thus no need for the computer to remember a return address for every call; nothing happens in the caller after the tail call, so only the original return address, where the function was first called, has to be kept. Performance-wise this is similar to an iterative solution, where no additional addresses have to be saved either. The signature has become a bit more complex, a loss we have to accept.

The catch: the programming language you use must actually support tail-call optimization, otherwise nothing is gained. Not every language translates tail recursion into this low-memory form; some simply treat it like “normal” recursion.

Special importance in functional programming

Functional programming is intended to simplify the writing and understanding of program code. It is a programming paradigm in which “pure functions” are desired: functions that share two essential characteristics with mathematical functions. The first: the output depends exclusively on the input. For f(x) = x², the result is always 4 if x = 2, and always 9 if x = 3. Functions in programming are not this predictable by definition; the developer must explicitly take care of it. A function could return 4 one day when 2 is entered, 5 on another day, or even a text. The output can be made to depend on time, weather, moon phase, almost anything. Pure functions rule this unpredictability out.
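To make the contrast concrete, here is a small illustrative sketch (the names `square` and `moody_square` are invented for this example):

```python
from datetime import date

def square(x):
    """Pure: the output depends only on the input."""
    return x * x

def moody_square(x):
    """Impure: the result also depends on today's date."""
    return x * x + date.today().day
```

Calling `square(2)` always yields 4; what `moody_square(2)` yields changes from day to day.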

The second characteristic is about side effects. Mathematical functions do not change anything — they are just abstract definitions. Accordingly, “pure functions” in functional programming should not change states, i.e. they should not have any effect on their environment. Even changing a loop variable would already be an effect. In pure functional languages like Haskell, this is not even possible without further ado.

So pure functions only assign an output to every input and do nothing beyond that.

In mathematics, such an assignment rule can be defined recursively as follows:
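The definition itself is missing from this copy of the article; for the factorial it is presumably the standard case distinction:

```latex
n! =
\begin{cases}
1 & \text{if } n = 0 \\
n \cdot (n - 1)! & \text{if } n > 0
\end{cases}
```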

What is interesting is the similarity to recursion in programming:
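The original snippet is missing here as well; a Python sketch that mirrors the mathematical case distinction line by line could be:

```python
def factorial(n):
    # Reads almost like the mathematical definition:
    # 1 if n = 0, otherwise n * (n - 1)!
    return 1 if n == 0 else n * factorial(n - 1)
```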

This is essentially the same code as above (using “normal” recursion), just adapted and formatted a bit to show the similarity to the mathematical definition.

This shows why recursive functions are so practical where pure functions are desired: each input value is unambiguously mapped to an output value.

There may be other — even more important — reasons why recursion is so common in functional programming. I didn’t get to the bottom of it, but I didn’t want to leave it unmentioned in this context.

To continue learning …
