A Bird's-Eye View of Functional Programming

TL;DR

Brooklyn Zelenka · Oct 30, 2015

Functional programming (FP) helps you write robust, powerful, and maintainable programs well suited to multicore and cloud computing by focusing on controlling state and effects.

In the beginning…

Two solutions to the same problem

Computers (as we know them) were proposed as part of Alan Turing’s paper On Computable Numbers, with an Application to the Entscheidungsproblem. The Entscheidungsproblem (decision problem) asks “what problems are computable/decidable?”, and Turing dreamed up a thought experiment for a machine that could be configured to work through any solvable problem.

Shortly before Turing published his paper, another mathematician, Alonzo Church, had published his own solution, using a purely mathematical method tailored to the problem, called the Untyped Lambda Calculus. It was soon discovered that Turing’s and Church’s solutions were entirely equivalent, and could be translated into each other!

As an aside, the history of the Entscheidungsproblem is fascinating, and I recommend looking into it further.

From Turing Machines, we developed procedural programming: Fortran, Forth, C, Pascal, and BASIC, and later Go, PHP, Julia, Perl, and Lua, plus most (but not all) object-oriented languages, like Java and C#. From Church’s Lambda Calculus, we developed functional programming: Lisp, Scheme, ML, and later Haskell, Clojure, OCaml, F#, and Erlang/Elixir (among others).

Thanks for the history lesson. What’s the upshot?

Now you can write purely mathematical descriptions of a computation, and have them translated into machine instructions (by a sufficiently powerful compiler). You get all the high-level expressivity, tidiness, and rigour of pure math, with the practicality of machines. Early attempts, such as Lisp, predate C by 14 years. Lisp also came with a bunch of innovative firsts, such as interpreters, dynamic languages, meta-programming, macros, garbage collection, and anonymous functions (and that was way back in 1958).

Both approaches solve the exact same problems, but focus on different ways of solving them. While procedural programmers focus on “the bare metal”, functional programmers like to say that we focus on “the bare mental”. Instead of worrying about how to allocate machine resources in sequence, we worry about what the code means.

Since you’re giving the compiler more information about what things are, it can make smarter optimizations. Since things are also so well encapsulated, it can even break up a large calculation over multiple processors (or over a network) without you having to negotiate threads, locks, and so on. For example, you can run your functions in parallel, rather than sequentially:
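
Here’s a minimal sketch of the idea in Haskell, assuming the parallel package and a made-up longFunction that stands in for any expensive, pure computation:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A made-up stand-in for any expensive, pure computation.
longFunction :: Int -> Integer
longFunction n = sum [1 .. fromIntegral n * 10000]

-- Apply longFunction to every number from 1 to 999, evaluating the
-- calls in parallel across the available cores.
-- (Compile with -threaded and run with +RTS -N to use every core.)
results :: [Integer]
results = parMap rdeepseq longFunction [1 .. 999]

main :: IO ()
main = print (sum results)
```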

This runs longFunction on every number from 1 to 999, spreads the calls across different threads, and stitches the results back together, which significantly reduces the overall running time. You don’t need to worry about the threading details; everything “owns its own context” (more on that later), so collisions are not possible. As you can imagine, in a world of multicore machines, cloud computing, and big data, this is pretty useful.

The Case for FP Today

Functional programming has come a long way since 1958, and has made many innovations along the way. Today, we have computers that are sufficiently powerful to translate higher-level FP concepts into highly optimized machine code. For example, well-written Haskell can compile down to machine code that rivals painstakingly hand-tooled C, but with much more expressive source code. There’s even crazier stuff, too: languages like Idris can guarantee that your program will terminate, let you restrict the time complexity of your algorithms, and much more!

Less is More

There is an old analogy with structured programming. Once upon a time, there was a widely used instruction called “GOTO”, which let you jump to a certain line in your code. For example, a loop starting on line 10 would be a sequence of instructions terminating in a GOTO 10 (i.e. jump back to the first instruction). This was very fast and powerful, and a very close description of what the machine is actually doing moment-to-moment.

The problem with explicit GOTOs is that they make your code very brittle and difficult to follow. If you move your code around, you need to change all of the GOTOs, and reading through the code becomes a jumble of incomprehensible jumps to arbitrary points. The solution is to use a higher level of abstraction to achieve the same goal: loops and functions. The computer is still using GOTOs, but they’re hidden from the programmer. By adding more structure (your jumps are restricted to certain points), you gain a lot in programmer productivity and code maintainability.

The poet always has too many words in his vocabulary, a painter too many colours in his palette, a musician too many notes on his keyboard
~Jean Cocteau

A similar argument can be made for switching to FP techniques. By limiting effects and implicit state (the GOTOs of the modern age), we gain clarity and maintainability, and eliminate entire classes of bugs. Yes, you’re giving up the ease of just printing anywhere (a “side effect”), or storing data implicitly, but the rewards outweigh the costs (in my humble opinion).

Of course, this isn’t a panacea. You have to adjust the way that you think about problems, and while FP is an excellent choice for 99% of day-to-day problems, you should still use the right tool for the job. In other words, just because you can, doesn’t mean that you should. The same applies to all paradigms, of course.

Functional Programming Principles

If I were to be reductionist, I would say that most FP principles come from controlling state. Of course, it’s not that simple, but you’ll notice that many of the items below support each other.

Own Your Context with Explicit State

Some people call it “stateless”, but I prefer “explicit state”. Instead of keeping data implicitly “somewhere over there” and reading that (implicit and shared) data from the body of a function, simply pass all the data that the function needs explicitly as arguments.

Your function is now much more consistent and portable, because you’re not coupled to some external data that others can update at any time. Testing your function is incredibly easy, because you have one possible output for every set of inputs, and nothing to mock. Oh, and did I mention that this also means you no longer have to worry about deadlocks and thread-timing bugs? Yeah, pretty spiffy.
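
As a minimal sketch (the invoice counter is made up), compare this with reading and bumping a hidden global: the explicit version takes the counter as an argument and hands back the updated value alongside its result.

```haskell
-- The caller owns the counter; the function just transforms it.
nextInvoice :: Int -> (String, Int)
nextInvoice counter = ("INV-" ++ show counter, counter + 1)

-- Same inputs, same outputs, nothing to mock:
-- nextInvoice 41  ==>  ("INV-41", 42)
```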

Limit Side Effects

This is the output side of explicit state. As much as possible, avoid talking to the world outside of the function, except for its final return value. There’s now far less concern that your program will do something unexpected, like open a socket, delete files, or fire the missiles. Testing is easier, too, because you only have to check the return value!

Sure, eventually you’ll want to write to a file, or launch another process. If you encapsulate this behaviour in its own function, it’s very clear where the behaviour is coming from, and you can very easily replace it with a tracker/simulator in your tests!
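
A minimal sketch of that split (the report example is made up): a pure core decides what to write, and a thin effectful shell is the only place that touches the file system.

```haskell
-- Pure core: builds the report text, touching nothing outside itself.
formatReport :: [Int] -> String
formatReport readings = unlines (map show readings)

-- Effectful shell: the single, obvious place that performs I/O,
-- and the natural thing to swap for a simulator in tests.
saveReport :: FilePath -> [Int] -> IO ()
saveReport path readings = writeFile path (formatReport readings)
```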

Referential Transparency

When you limit yourself to simple input/output for every function (with no changes to the world outside it), you can swap out a function for its return value. You are certain that add(1, 2) always equals 3, no matter the environment. It doesn’t read values from an outside source, update a global variable, or print text to the screen. So, anywhere that you have add(1, 2), you can substitute a simple 3, and vice versa.

This sounds very simple, and it is! By following this rule, debugging and exploratory programming become a lot simpler: you can always isolate code from the rest of the system by substituting in hard-coded values, change the function used (as long as it has the same return), and mock out behaviours.
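
In code, the substitution looks like this (a tiny Haskell sketch):

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- add reads nothing and changes nothing outside itself, so these two
-- definitions are interchangeable anywhere in the program:
result :: Int
result = add 1 2

result' :: Int
result' = 3
```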

Composition

With (top-down) inheritance, you specialize a general class into more specific subsets. For example, a general Car class can be used to make more specific SportsCar and Minivan classes. This works well for certain problems, but fails when you want to mix and match functionality. You can quickly end up with an endless regress of more and more general classes, and huge messaging interfaces between different parts of your application.

The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
~Joe Armstrong

With (bottom-up) function composition, we take small, single-responsibility functions and combine them with other functions to build up larger, more complex behaviours. We do this either by passing the result of one function as the argument of another, or by calling them in sequence (if we’re side-effecting). Composition is really as easy as:
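
Here’s a minimal Haskell sketch (the function names are made up):

```haskell
-- Two small, single-responsibility functions...
double :: Int -> Int
double x = x * 2

increment :: Int -> Int
increment x = x + 1

-- ...composed with (.) into a larger one: increment the input, then double it.
doubleAfterIncrement :: Int -> Int
doubleAfterIncrement = double . increment

-- doubleAfterIncrement 5  ==>  12
```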

Many languages have special syntax for this, which is the same principle as the familiar Unix pipe.

“Nobody puts data in the corner!”

Some people call functional programming a “data-oriented” paradigm. Why hide your data behind an interface? By handling data directly, you gain clarity about what’s happening at every point along the way. It’s as simple as “does the next function take arguments that look like this?”

Because they’re handling the data directly, functional programmers generally spend more time thinking about what the data should look like, and how it should be structured to make their code as expressive as possible.
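
For example, the data can be a plain, transparent record that every function inspects directly (a minimal Haskell sketch with made-up fields):

```haskell
-- No getters, setters, or interfaces: just data.
data Order = Order { items :: [Double], taxRate :: Double }
  deriving (Show)

subtotal :: Order -> Double
subtotal order = sum (items order)

total :: Order -> Double
total order = subtotal order * (1 + taxRate order)

-- Each step only asks: does the next function take data shaped like this?
-- total (Order [10, 20] 0.25)  ==>  37.5
```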

Want to Get Started?

It’s a Spectrum

Like most things, functional programming is a spectrum. You can do functional-flavoured programming in Java, Ruby, C++, JavaScript, and many others. With the recent resurgence of interest in functional programming, there’s probably a good introduction out there no matter your background. Here are some of my recommendations:

If you’re a Rubyist, check out Clojure (don’t fear the parentheses!) and Elixir. For those coming from JavaScript, there are some really great libraries like Underscore and Bacon, plus languages that compile to JS, like Elm, ClojureScript, and PureScript. Javaists should check out the JVM languages: Clojure and Scala. Are you math-y? Check out Haskell, Idris, and Agda. If you’d like to play with the nexus point of object-oriented and functional programming, check out Rust, Scala, and Swift.

Further Reading
