The advantages of functional programming

Giovanni Ornaghi
Prima Engineering
January 27, 2017

In the past few years functional programming has caught the attention of industry programmers once again. Among those developers is the team I’m working with at Prima.it. While some of us had short stints with F#, Haskell, or Lisp, for most of the team higher-order functions, pure functions, and immutability were concepts learned, and put to the test, only after many years of object-oriented programming.
The reasons that led us to move from a PHP monolith to Elixir microservices are the topic of another article. Here I’d like to focus on what we believe are the advantages of programming in a functional style. Hopefully, by the end you’ll see why we still believe that functional programming matters.

How do we define functional programming?

It seems that every developer has her own vague definition of what functional programming (FP) represents. I’ll claim that this is not due to ignorance, but rather to the fact that most developers see it as a language feature rather than a way to write and structure their code. Of course some languages won’t offer tail-call optimization, higher-kinded types, or currying, but these are hardly what makes code “functional”; in fact, Elixir lacks the last two.

The difference is in the way the programmer thinks and structures her code: from methods to functions, from classes to modules. We no longer care about the behavior of an object and its methods, focusing instead on the functions related to a particular data structure. Here methods and functions cannot be used interchangeably: methods belong to an instance of a class and change its internal state while representing a particular behavior; functions, on the other hand, are transformations from input to output. Ideally functions are “pure”, that is, they don’t alter anything outside their own scope.
From a design point of view, having modules instead of classes makes all the difference. Instead of composing classes, one composes functions. Instead of trying to model the business around objects and their state, the developer can focus on the input-output pipeline where each function is a step of the transformation. Instead of trying to map the complexity, we can scale it down by putting each part in a module, be it an actual Module construct or a function. At the same time, one can see how many SOLID principles, such as Single Responsibility, still apply: functions related to a particular step of the computation go inside the same module, which needs to be bounded and named sensibly. Through the use of closures we can then achieve Inversion of Control, while Immutability often comes with the territory.
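To make this concrete, here is a sketch in Elixir (more on that choice in a moment; the module and function names are hypothetical): each module groups the pure functions for one step of a pipeline, and composition is plain function application.

# Each module owns the functions for a single step of the transformation.
defmodule Quote.Normalizer do
  # Pure: returns a new map, never touches its input.
  def normalize(params), do: Map.update(params, :plate, "", &String.upcase/1)
end

defmodule Quote.Pricer do
  def price(params), do: Map.put(params, :price, 100)
end

# Composing the steps is just applying one function to the result of the other:
# Quote.Pricer.price(Quote.Normalizer.normalize(%{plate: "ab123cd"}))
# => %{plate: "AB123CD", price: 100}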

That said, for the FP examples we’ll use Elixir, the language we use and love, since it implements many of these concepts out of the box. That’s not to say that you cannot do FP in C#, JavaScript, or even PHP; it just takes a bit more work. I’ll present two features (higher-order functions and immutability) that I consider among the pillars of FP. The last part of the article is about pattern matching, an interesting language feature shared by many functionally oriented languages.

A new way of thinking: from how, to what

The most staggering difference between FP and the other mainstream programming paradigms is the different focus required from the developer. Simple patterns of computation are abstracted away in favor of more terse, and yet more expressive, code. This is often the case with higher-order functions (HOFs): functions that accept other functions as arguments (or return them as results), making them effectively functions upon functions, thus “higher-order”.

The most basic examples come from the realm of iteration and recursion. One hardly needs to explain this Elixir code:

iex> Enum.map([1, 2, 3, 4], fn n -> n * 2 end)
[2, 4, 6, 8]

Here [1,2,3,4] is a list we’re iterating, while the second argument is a function that takes a number and outputs its double. Other examples of HOFs are fold (aka reduce), filter, and their combination. Reduce abstracts the concept of accumulation, for example summing over a list of numbers:

iex> Enum.reduce([1, 2, 3, 4], 0, fn n, acc -> n + acc end)
10

The second argument, 0, is commonly called the “seed” and represents the neutral element of the computation. The third argument is once again a function, this time taking two arguments: the current element and the result of the computation up to that point. In a future article we’ll explore fold in more depth, explaining the difference between a left and right fold. We’ll also see that fold has a very special role among HOFs.
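Filter, for its part, keeps only the elements satisfying a predicate, and these HOFs combine naturally. A minimal sketch, summing only the even numbers of a list:

evens = Enum.filter([1, 2, 3, 4], fn n -> rem(n, 2) == 0 end) # [2, 4]
Enum.reduce(evens, 0, fn n, acc -> n + acc end)               # 6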
Just as functions can be composed, so can HOFs. In Elixir this chaining (mathematically speaking, not exactly the same as composition, but close enough for our purposes) is achieved with the |> (pipe) operator. Take this example from our codebase; while the syntax may be a bit obscure, you should be able to get the gist:

prices
# Pair each price with the result of guessing it
|> Enum.zip(Enum.map(prices, &guess_price(&1, quote_data)))
# Drop the pairs whose guess failed
|> Enum.reject(&match?({_price, {:error, _msg}}, &1))
# Unwrap the successful guesses
|> Enum.map(fn
  {price, {:ok, val}} -> {price, val}
  {price, _} -> {price, :error}
end)

The pipe has nothing magical about it. In fact, in the Elixir Core code, it’s implemented as a fold, applying one function on top of the other.
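Conceptually, each pipe feeds the result of the expression on its left in as the first argument of the call on its right, so the two forms below are equivalent:

# With the pipe operator:
[1, 2, 3, 4] |> Enum.map(fn n -> n * 2 end) |> Enum.sum()

# Without it, the calls are nested inside out:
Enum.sum(Enum.map([1, 2, 3, 4], fn n -> n * 2 end))

# Both evaluate to 20.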

When we write our usual foreach, what we really want to do is to iterate over a collection of elements. But even in a foreach, the amount of symbols we need to keep track of is often distracting. Take this imperative any as an example (in pseudo-code):

var counter = 0;
foreach (x in [1, 2, 3, 4]) {
  if (x % 2 == 0) {
    counter++;
  }
}
var has_any = counter > 0;
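In Elixir the same intent collapses into a single expression, with no counter to keep track of:

iex> Enum.any?([1, 2, 3, 4], fn x -> rem(x, 2) == 0 end)
true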

The point is not to write less code, be faster, and make management happy. Rather, FP claims that these abstractions help the developer focus on what’s important: the general algorithm, not the details of variable tracking and the exact order in which variables are instantiated and mutated. You don’t need to tell the computer how to iterate; just tell it what you expect as a result! Personally, I got enamored with Haskell the moment I realized that I could solve this Project Euler problem as a one-liner, one that expressed exactly the complicated triple-nested loop I had in mind but was too bored to write:

Problem: find a Pythagorean triplet for which a + b + c = 1000. Solution:

> head [(a, b, c) | c <- [1..500], b <- [1..c], a <- [1..c], a^2 + b^2 == c^2, a + b + c == 1000]
> (375, 200, 425)
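The same idea translates almost verbatim to an Elixir for comprehension. A sketch (note that, unlike Haskell’s lazy list, the comprehension is eager: it computes every matching triple before hd picks the first):

iex> hd(for c <- 1..500, b <- 1..c, a <- 1..c, a * a + b * b == c * c, a + b + c == 1000, do: {a, b, c})
{375, 200, 425}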

Immutability: a peek into what makes FP tick

Immutability helps immensely in reducing the complexity of a piece of software. Since you can never mutate an input, the amount of information you have to keep track of in your head shrinks considerably.
Here is an example of the kind of complexity that awaits in the good old imperative world. This code was written by my girlfriend for her Arduino project:

...
char letter = message[j];
int sequence[] = {0, 0, 0, 0};
recognize(letter, sequence);
...

void recognize(char letter, int sequence[]) {
  if (letter == 'a') {
    sequence[0] = 1;
    sequence[1] = 2;
    sequence[2] = 0;
    sequence[3] = 0;
  }
  ...
}

The values of the sequence array are changed inside another function, recognize, which as far as we know could live in another file or even in a third-party library. The array serves two contradictory purposes: it’s both input and output. This is far from an academic critique: contrived as this example is, you might well be mutating the value of a variable without thinking too much about how it’s used in the caller’s function. Nor is it a case against pass-by-reference schemes in particular; we could make an analogous point about reassigning the same variable twice in the same scope. Add some ifs and some for loops, and it’s already a mess.
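In an immutable world this contradiction cannot arise: recognize cannot scribble over its input, it can only return a new value, and the caller decides what to bind it to. A sketch of the same logic in Elixir (a hypothetical translation, not code from the Arduino project):

# The sequence is the return value, not a mutated argument.
# ?a is the character literal for 'a'.
def recognize(?a), do: [1, 2, 0, 0]
def recognize(_letter), do: [0, 0, 0, 0]

# At the call site: sequence = recognize(letter)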

Immutability is closely related to another important FP concept: referential transparency. A function is referentially transparent when any call to it can be replaced by the value it evaluates to without changing the behavior of the program. As an example take the function sum defined as:

def sum(a, b), do: a + b

Calling sum(3, 4) would be the same as writing:

def sum(3, 4), do: 3 + 4

We’re substituting 3 for a and 4 for b, and the whole call could in turn be replaced by its value, 7. Not every function has this property, though: referential transparency doesn’t hold for impure functions like this one:

var i = 0;
function funny_sum(a, b) {
  i += 1;
  return a + b + i;
}
// funny_sum(1, 2) returns 4 on the first call and 5 on the second:
// same inputs, different outputs.

Another important point for immutability is parallel processing. With immutable data one cannot accidentally mutate another process’ values. Thread-safety is then guaranteed, and the performance gains come at very little expense in terms of complexity and brain power. Languages like Elixir and F# make running parallel processes a breeze by providing strong abstractions. Here’s an example from our codebase:

[:a, :b, :c]
|> Enum.map(fn name ->
  Task.async(fn -> {name, apply(data, String.to_atom("get_#{name}"), [all_params])} end)
end)
|> Task.yield_many(5_000)

For each element in the initial list we spawn an async task; then we yield them all, with a timeout of 5 seconds.
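Task.yield_many/2 returns a list of {task, result} pairs, where the result is nil for any task that didn’t finish within the timeout. A minimal sketch of consuming those pairs (do_work/1 is a hypothetical stand-in for the apply call above):

[:a, :b, :c]
|> Enum.map(fn name -> Task.async(fn -> do_work(name) end) end)
|> Task.yield_many(5_000)
|> Enum.map(fn
  {_task, {:ok, value}} ->
    value
  {task, nil} ->
    # The task timed out: kill the straggler and record the failure.
    Task.shutdown(task, :brutal_kill)
    :timeout
end)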

Lastly, immutability makes testing much easier. There’s no more need to mock complex chains of objects and, in our experience, the arguments passed from one function to another tend to be quite small. Instead of passing a model of a complex real-world entity, often all we need is a couple of strings.
One need not worry about some method changing a field of an object somewhere, and since every function has to return something, every function is technically testable. Our codebase is peppered with short but helpful inline tests, such as:

@doc """ 
iex> Service.Posizionale.Creator.plateNumber("LO006302") "LO006302"
iex> Service.Posizionale.Creator.plateNumber("PC068741") "PC068741"
iex> Service.Posizionale.Creator.plateNumber("RC04D121") "RC04D121"
iex> Service.Posizionale.Creator.plateNumber("RM9H1764") "RM9H1764" """
def plateNumber(t = <<head::bytes-size(2)>> <> <<number::bytes-size(5)>>) do
case head in @provinces && is_integer(number) do
true -> t
_ -> "*" <> t
end
end
def plateNumber(t) when byte_size(t) < 7 do
"*" <> t
end
def plateNumber(t), do: t

This function is responsible for formatting a plate number according to a third-party service specification. In this case specs and functions live on the same page.
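Those iex> lines are genuine doctests: ExUnit extracts and runs them as part of the test suite with a one-line declaration. A sketch, assuming the usual test-module naming convention:

defmodule Service.Posizionale.CreatorTest do
  use ExUnit.Case, async: true
  # Runs every iex> example found in the @doc strings of the module.
  doctest Service.Posizionale.Creator
end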

Pattern matching: polymorphism on values

A common feature of languages geared towards FP is pattern matching. In Elixir this feature is particularly advanced, making the code more pleasant to both read and write. The main idea is to have a way to branch your code according to the shape and value of the input: a kind of polymorphism applied to the values themselves, instead of a simple type declaration.
Let’s consider this famous algorithm: starting from a positive integer n, if n is odd, multiply it by 3 and add 1; if it is even, halve it. Continue until you reach 1. Instead of a messy tangle of if-else statements, carefully vetted for their order and correctness, we can simply write:

def collatz(1), do: 1
def collatz(n) when rem(n, 2) == 0 do
  collatz(div(n, 2))
end
def collatz(n), do: collatz(3 * n + 1)

Pattern matching happens in the first clause of collatz, where we output 1 if the input is 1. In the second clause we added a when clause, also called a guard, to check the value of n. Lastly, for any remaining input we perform the appropriate operations: having matched all the other cases, n must be odd and different from 1. As a more interesting example, we can pattern-match by destructuring our input. Take this snippet from our codebase:

defp return_or_retry({_, %{status_code: 500}}, envelope, call_name, all_params) do
  Process.sleep(200)
  Service.Logger.retry_on_500(call_name)
  do_call(:ok, envelope, call_name, all_params)
end
defp return_or_retry(res, _, _, _), do: res

Here we’re validating the result of an HTTP call. First comes the “unhappy” path: a 500 status code. Pattern matching is done on a structure inside another structure, namely an Elixir map (the %{} part) inside a tuple (the {} part). As long as the map has a status_code key set to 500, the clause matches. Last comes the “happy” path: everything is fine and we simply return the result. You could see pattern matching as a glorified if-else, and you wouldn’t be too far off; the advantage, however, is that it lets us isolate the different branches in different functions altogether, each with its own scope, making each decision as clear as it can get.
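Note that map patterns match on a subset of keys, which is why the %{status_code: 500} clause works even though a real response map carries many more fields:

iex> %{status_code: code} = %{status_code: 500, body: "oops"}
%{body: "oops", status_code: 500}
iex> code
500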

Conclusion

In this article we simply dipped our toes in the vast sea of functional programming. FP offers great advantages and it’s a joy to use, but it also has a learning curve. Besides studying the more abstract concepts (lambda calculus, referential transparency, equational reasoning, currying…) you might want to get your hands dirty by using an FP-friendly language in your next pet project. You might start feeling like a beginner once again, having trouble with the simplest algorithms and wondering what everyone is talking about. Stick with it! Our industry shifts from one technology to the next extremely fast; after a while it all becomes a blur, and one wonders if we’re moving at all. FP is not a technology: it’s a whole programming paradigm, a set of concepts and skills that you can take with you whatever the latest trendy framework happens to be.

The exciting Haskell you write at night will eventually leak into the boring C# you write by day, showing you that programming goes beyond syntax and tools: it starts in your brain and ends at your fingertips.

Originally published at prima.engineering on January 27, 2017.
