Kevin Steele
Sep 5, 2018 · 4 min read

Recursion: A loop-hate story

“architectural structure photography of spiral stairs” by Agnieszka Cymbalak on Unsplash

There’s a concept in computer science that strikes equal parts confusion and fear into the hearts of beginner programmers, and it’s even harder to explain to folks without a math, logic, or programming background. Its name is recursion, and I’d like to take a few minutes to discuss the concept at some length and hopefully elucidate it for the uninitiated. But first, let’s talk a little about the call stack.

“stack of brown cookies on black ceramic plate” by Nathan Dumlao on Unsplash

The call stack is a computer science concept that makes use of the abstract data type called a stack. A stack is a last-in, first-out (LIFO) data structure that only allows manipulation of its topmost element: removal from the top (pop) and addition to the top (push). A call here is a method call, or rather a pointer to the temporarily stored data of a pending method call. With nested method calls, the topmost element of the stack holds the pointer/reference to the data of the most recent call, which won’t finish evaluating until it’s done. Any data in the lower layers depends on the removal of its higher neighbors, which happens as their method calls complete in sequence. However, call stacks have limits: when a stack’s capacity is exceeded by too many unfinished method calls, a stack overflow occurs, which typically ends in a crash. This is important to keep in mind with recursion going forward.
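To make this concrete, here’s a minimal Python sketch of the idea: a list used as a LIFO stack, with made-up frame names for illustration, plus a function with no terminating condition that blows the stack.

```python
# A minimal sketch of the call stack using a Python list as a LIFO stack
# (the frame names here are invented for illustration).
call_stack = []

call_stack.append("main")   # push: main() is called first
call_stack.append("outer")  # main() calls outer()
call_stack.append("inner")  # outer() calls inner()

# Only the topmost frame can finish; the frames below it wait on it.
first_done = call_stack.pop()   # "inner" returns first (last in, first out)
second_done = call_stack.pop()  # then "outer"
third_done = call_stack.pop()   # finally "main"

# A method that calls itself with no terminating condition
# piles up unfinished calls until the stack's capacity is exceeded.
def no_escape():
    return no_escape()

overflowed = False
try:
    no_escape()
except RecursionError:  # Python raises an error instead of hard-crashing
    overflowed = True
```

Note that Python turns the overflow into a catchable `RecursionError`; in many other languages the same mistake crashes the program outright.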

Merriam-Webster’s dictionary defines recursion as,

“a computer programming technique involving the use of a procedure, subroutine, function, or algorithm that calls itself one or more times until a specified condition is met at which time the rest of each repetition is processed from the last one called to the first”.

But let’s unpack that for a second. What does this mean? It means that recursive code calls itself while checking for an escape, or terminating, condition; when that condition is met, it returns a value that stops the recursion, and that value is then used to evaluate every pending call leading back up to the original one. So imagine an algorithm that asks for the nth term in a sequence that follows a rule, i.e., one where each term can be defined by solving earlier instances of the same problem. If you know an earlier term and a consistent rule for getting the next item from the previous one, you can use that to reach the nth member. In practice, each recursive call decreases the distance between the term currently being evaluated and the terminating condition. At that point, it stops and returns something. That return value is then used to evaluate the previous run-throughs in turn, defining each term until the nth one is reached, giving our answer in terms of the return of our escape condition. But what are the practical implications and applications of this?
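The classic illustration of that unpacking is the factorial, sketched here in Python (my example, not from the definition): each call waits on the next, and the results are evaluated from the last call back to the first as the stack unwinds.

```python
def factorial(n):
    # Escape / terminating condition: stop the recursion here.
    if n <= 1:
        return 1
    # Each call decreases the distance to the terminating condition by one.
    # Once 1 is returned, the pending multiplications evaluate
    # from the last call back up to this first one.
    return n * factorial(n - 1)
```

Calling `factorial(5)` stacks up five pending calls, then resolves them in reverse: 1, 2, 6, 24, 120.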

Honestly, you would usually use a normal loop. In terms of time and space complexity, in many cases one would rather iterate. For example, to get the nth member of the Fibonacci sequence you could write something like this using recursion:
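A minimal recursive sketch in Python (the function name is my own):

```python
def fib(n):
    # Base cases terminate the recursion: fib(0) = 0, fib(1) = 1.
    if n < 2:
        return n
    # Every other call spawns two more calls, so the number of
    # pending calls roughly doubles with each increase of n.
    return fib(n - 1) + fib(n - 2)
```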

It looks cool but is it any good?

But this method has a time complexity of O(2^n), which means the amount of work grows exponentially: it roughly doubles with each increase of n.

However, when we use a single loop to find the nth Fibonacci number:
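An iterative sketch of the same calculation (again, names are my own):

```python
def fib_iter(n):
    a, b = 0, 1
    # Each pass does a little more bookkeeping than the recursive
    # version, but the loop body runs only n times: O(n).
    for _ in range(n):
        a, b = b, a + b
    return a
```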

It looks like there are more steps for each iteration, but they only have to be performed n times.

The complexity is O(n), so the work only grows linearly with the number of elements, a vast improvement over the exponential recursive approach. If this is often the case for these smaller functions, when would we use recursion at all? On quite a few occasions, actually.

The occasions that call for recursion are those where each step removes a large amount of data from consideration, as in iterative branching: performing a binary search on a sorted tree or a sorted array, for example. Many video games also use recursive calls to make your typical 60fps game update its graphics and other state 60 or more times a second (a while-loop works on a smaller scale, but for the recursive approach you would need a language optimized for tail recursion or face a stack overflow). Mathematics also makes handy use of recursion, but mainly when it removes a great number of repeated steps that you couldn’t skip over in a loop. There are also file searches and copies: the reason your computer slows to a crawl when you copy or search for files is that a recursive copy or search is often being performed. There are surely more use cases, but those lie outside my experience. I hope I’ve made some impact on your understanding of recursion and its uses.
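The binary-search case is worth sketching, since it shows recursion at its best: each call discards half of the remaining data, so even a huge sorted array needs only a handful of calls. A minimal Python version (my own, for illustration):

```python
def binary_search(sorted_items, target, lo=0, hi=None):
    """Recursively search a sorted list; return the index of target, or -1."""
    if hi is None:
        hi = len(sorted_items) - 1
    # Terminating condition: the range is empty, so target isn't here.
    if lo > hi:
        return -1
    mid = (lo + hi) // 2
    if sorted_items[mid] == target:
        return mid
    # Each recursive call halves the search range: O(log n) calls total.
    if sorted_items[mid] < target:
        return binary_search(sorted_items, target, mid + 1, hi)
    return binary_search(sorted_items, target, lo, mid - 1)
```

Because the depth grows only logarithmically, a million-element array takes about 20 calls, nowhere near the stack’s limit.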


Hi, I’m interested in programming, gaming, design both visual and of software. I recently started posting here: https://dev.to/kevindsteeleii
