Getting Something from Nothing
So, here’s the simplest function imaginable:
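In JavaScript, a minimal identity function might look like this (the single-letter name I is just a convention borrowed from combinator logic):

```javascript
// The identity function: returns whatever it's given, untouched.
const I = x => x;

I(5);       // 5
I("hello"); // "hello"
I(I);       // the I function itself
```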
In any language where we can assign functions to values, we can think of them AS values.
It’s easy to take for granted, but functions support what’s essentially just a special operator, the call parentheses, which allows us to apply values to them.
So in that sense, they’re really just “collapsible” or “squish-able” values.
Some languages actually support this sort of syntax natively, making functions like equations.
Now, our “I” function might’ve seemed sort of dull and pointless, but it does have one special property: applying a value to it literally CANNOT CAUSE AN ERROR, no matter what sort of value you squish into it. Right?
That’s because it doesn’t mutate or even examine its argument at all!
Here’s another, similar, function: K.
K essentially “remembers” the first value squished into it and then creates a function that will return that value regardless of what else you squish into it next.
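A JavaScript sketch of K might look like this:

```javascript
// K takes a value and returns a new function that ignores its own
// argument, always returning that first value instead.
const K = x => y => x;

const always5 = K(5);
always5(99);        // 5
always5("ignored"); // 5

// K handles functional values too (I redefined here so this stands alone):
const I = x => x;
K(I)(42); // the I function itself
```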
K ALSO works seamlessly with ANY two values, even functional values like I. Nothing you squish into K can cause it to error.
K and I, also known as Constant and Identity, are examples of some basic FP building blocks called “combinators.”
A combinator is just any function that has no reference to anything BUT its arguments: there are never any outside (“free”) variables or any external language methods in play.
As it happens, entire Turing complete programming languages can be built out of nothing but mixing and matching a small set of combinators.
Now that we have a sense of what combinators are, here’s yet another one you probably know: compose.
Compose simply sequences two functions, defining a specific order of execution whereby g will run first and then f will run, using the output of g as the input to f. It returns a new function.
If we go back to the squishing idea, we can get another vantage point on this. Squish two functions together to get a function, run a value through that, get a result:
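Here’s one common way to write compose in JavaScript, along with the squish-then-run idea:

```javascript
// compose: run g first, then feed its output into f.
const compose = (f, g) => x => f(g(x));

const addOne = x => x + 1;
const double = x => x * 2;

// Squish two functions together to get one new function...
const addOneThenDouble = compose(double, addOne);

// ...then run a value through it and get a result.
addOneThenDouble(5); // double(addOne(5)) === 12
```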
This squishing behavior is pretty cool: we can collapse any two simple functions into one new function. All we have to do is first make sure that the expected types of the inputs and outputs all match up.
If the inputs and outputs don’t match up… well, then we KNOW that we’re going to get an error.
But now let’s say that we have a function that sometimes returns a five, but sometimes does NOT (this one uses pure randomness). We’re now faced with figuring out what the “not” return value should end up being: a null, a string message?
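As a sketch, using null as one arbitrary choice for that “not” value:

```javascript
// Sometimes returns 5, sometimes signals "no value". Here that signal
// is null, but it could just as easily be undefined, NaN, or a message.
const maybeFive = () => (Math.random() > 0.5 ? 5 : null);
```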
No matter what we do, maybeFive HAS to return something to signify the lack of a 5. Even if it doesn’t return anything, it’d still effectively return “undefined”.
And that means that no matter what we do, its output type is uncertain, which can lead to errors if we try to use its output for something!
To get things to work without any errors, the next function is forced to be MUCH more complicated than just x + 1: now we have to check that x is actually a 5 before doing anything with it. And then, for its output, we’d have to figure out all over again some way of signifying that there’s no good result to return.
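That defensive version might look something like this (the name addOneToFive is just illustrative):

```javascript
// Forced to know about maybeFive's failure convention:
const addOneToFive = x => {
  if (x === 5) {
    return x + 1;
  }
  // ...and forced to invent its OWN "not a value" value:
  return null;
};

addOneToFive(5);    // 6
addOneToFive(null); // null
```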
That then means that ALL the subsequent functions in our sequence will ALSO have to know about and check for that “not a value” value, however THEY might decide to define it. And this is exactly where most simple programs start to spiral out of control, creating cascading complexity. Every time the structure of the program changes, we now have to refactor and retest everything.
All this makes sequencing computations very tricky.
In the imperative programming style, where we do a bunch of steps and variable assignments on different lines and use conditional blocks, we often descend into unreadable “pyramids of doom,” which eventually drown out the entire readable structure of our program.
So, the imperative approach and the naive functional approach are both unworkable at scale.
What we really want is to achieve some sort of control flow and error checking without losing the power of composition…
So let’s start by stealing a page from combinators and creating a functional value that explicitly cannot contain any values at all. One that has no way to contain a value.
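In JavaScript, that can be as small as this:

```javascript
// A function that ignores its argument and returns itself, forever.
const Nothing = () => Nothing;

Nothing();        // Nothing
Nothing(5)(6)(7); // still Nothing, no matter how far you go
```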
“Nothing” here is a functional value that just returns itself, which is a function that returns itself, and so on, forever.
What can we do with Nothing? Nothing. It just returns Nothing. There is no value to extract, and there are no methods to call on it.
BUT!…. we can use our composition combinator to sequence up two nothings in a row, and they’re guaranteed to be error-free, by design!
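Using the same compose combinator from earlier (everything redefined here so the sketch stands alone):

```javascript
const compose = (f, g) => x => f(g(x));
const Nothing = () => Nothing;

// Two Nothings in a row: guaranteed error-free, by design.
const nothingTwice = compose(Nothing, Nothing);
nothingTwice("anything at all"); // Nothing
```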
That’s pretty neat… but also pretty useless. So let’s take a step farther: we can turn Nothing into a real higher-order Type.
A Nothing “Type” is sort of like an Array that can’t contain any items.
You can create a Nothing by calling it with any value, and you’ll just get back the type again, with no value inside.
Since ANY Nothing is as good as any other Nothing, we can just create one Nothing once and then just re-use it as needed.
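A minimal JavaScript sketch of that idea, one shared container object, handed back no matter what you pass in:

```javascript
// One shared, reusable empty container:
const nothing = {
  // map ignores the function entirely and returns the same container.
  map: f => nothing,
};

// Creating a "Nothing" from any value just gives back that container.
const Nothing = x => nothing;

Nothing(5) === Nothing("whatever"); // true: any Nothing is as good as any other
```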
So… now we have a Nothing container, containing nothing.
But this version of nothing does have a useful method: .map()
It does, naturally, nothing.
We can, however, map as many times as we want, and nothing we do can cause any errors whatsoever, even with functions explicitly designed to fail.
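For example (Nothing redefined here so the sketch stands alone):

```javascript
const nothing = { map: f => nothing };
const Nothing = x => nothing;

// A function guaranteed to throw if it ever actually runs:
const explode = x => x.no.such.property;

Nothing(5)
  .map(x => x + 1)
  .map(explode)        // never runs, so it can't throw
  .map(x => x.length); // still Nothing
```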
So, now we’re able to sequence computations via composition again!
The one tiny remaining problem is that the computations never ever run, which makes accomplishing anything very difficult indeed.
Why create something like this? …Exactly! Let’s create a Something Type too! A Something type that can do pretty much anything (to its inner value, at least)!
Something is like an Array that can only hold a single value.
And, like Arrays, it has a .map() method for running functions on that value and returning back the new result inside the same container interface.
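A matching sketch for Something (the valueOf peek method here is just an assumption for demonstration; the pattern itself only relies on .map):

```javascript
// A container holding exactly one value.
const Something = x => ({
  // map runs f on the inner value and re-wraps the result
  // in a fresh Something, preserving the container interface.
  map: f => Something(f(x)),
  // purely for peeking in these examples, not part of the core interface:
  valueOf: () => x,
});

Something(5).map(x => x + 1).valueOf(); // 6
```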
Doing that now means that we can endlessly sequence operations on a Something type, but this time, they’ll actually do stuff.
Let’s stop here and remember that what’s going on here under the hood is still really just a form of composition.
Which, in this case, is all just a structured, functional way of queuing up simple addition!
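To see the composition hiding under the hood (helpers redefined so this stands alone; valueOf is just for peeking):

```javascript
const Something = x => ({ map: f => Something(f(x)), valueOf: () => x });
const compose = (f, g) => x => f(g(x));
const addOne = x => x + 1;

// Two maps in a row...
Something(5).map(addOne).map(addOne).valueOf(); // 7

// ...are the same thing as one composed function:
compose(addOne, addOne)(5); // 7
```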
So why did we build these things? Because NOW we can do stuff like this!
Our maybeFive function now randomly returns either a 5 or not, just like before. But the 5 or not is expressed as either a Something(5) or a Nothing. And here’s the kicker: since both “contexts” share a .map() method, that means that, as long as we map over the result, we’re back to being safe from errors again. Note that the type helps ensure that we handle both possibilities. We have to use map to do anything with a possible value.
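Putting it all together, a sketch might look like this (helpers redefined so it stands alone):

```javascript
const nothing = { map: f => nothing };
const Nothing = x => nothing;
const Something = x => ({ map: f => Something(f(x)) });

// Randomly a Something(5) or a Nothing, but ALWAYS a mappable container:
const maybeFive = () => (Math.random() > 0.5 ? Something(5) : Nothing());

maybeFive()
  .map(x => x + 1)
  .map(x => console.log(x)); // sometimes logs 6, sometimes silently does nothing
```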
Sometimes that’ll log 6, and sometimes it just won’t do anything. But it will never return an error, because that’s no longer possible. We’ve spelled out a computation that, despite randomly failing to return a value, cannot fail, even if any subsequent functions depend on that value to work. It’s safe by design, but without losing the simple power of composition.
Remember how I and K could never fail, since they never actually modify, examine, or operate on their inner value? Note that there’s never any “checking” of whether or not the value passing through this computation is a Something or a Nothing. The computation just delegates things to whichever map operation is defined on the provided Type container.
So the entire operation, with control flow built in, now squishes down into a single function.
Awesome. So what have we achieved with this pattern?
Well, simple functions like x=>x+1 or whatever never have to get any more complicated than exactly whatever they are meant to do to the value they receive. That’s a huge win.
This frees them to be much more generic. We had an “addOneToFive” function before because it was forced to do extra work in order to fit into the needs of the larger program.
But now it can just be called “addOne” or “increment” and work anywhere that we need that functionality.
We’ve also basically forced ourselves to deal with error conditions as we code what the inputs and outputs will be, meaning that we’re structurally preventing errors from the start rather than scrambling to deal with them later.
This means that we can actually sketch out the entire structure and control flow of the program by just thinking through how all the type-signatures will squish together, even before we write the specific code.
Finally, with just the Maybe type and just the map interface on it, we’ve only scratched the surface of the sorts of powerful interfaces that functional types can provide. There are all kinds of laws and predictable features behind these interfaces that allow us to capture things like asynchronous operations, IO, application configuration, application state, and so on. All of which share a common set of interfaces and laws.
To finish up, let’s look at a simple, semi-realistic example of our Maybe type in use:
With that outline made, we write our type signatures and then finally write out their actual implementations. They’re all extremely simple and single purpose. All the types match up. “userApi” returns a Maybe Functor (either a Something or Nothing) but its inner type matches up with the remaining functions, so we can just lift them up into the Functor pipeline by using our pointfree “map”:
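Here’s one way that sketch might come together. The users object is a fake in-memory stand-in for a real database, and getName and shout are hypothetical example functions; helpers are redefined so this stands alone:

```javascript
const nothing = { map: f => nothing };
const Nothing = x => nothing;
const Something = x => ({ map: f => Something(f(x)) });
const compose = (f, g) => x => f(g(x));

// Pointfree map: lifts a plain function into any mappable container.
const map = f => functor => functor.map(f);

// A fake in-memory "database" standing in for a real lookup:
const users = { 1: { id: 1, name: "Alice" } };

// userApi :: Id -> Maybe User
const userApi = id => (id in users ? Something(users[id]) : Nothing());

// Plain, single-purpose functions that know nothing about Maybe:
const getName = user => user.name;      // User -> String
const shout = str => str.toUpperCase(); // String -> String

// Squish everything down into a single function:
const shoutUserName = compose(map(shout), compose(map(getName), userApi));

shoutUserName(1); // Something("ALICE")
shoutUserName(2); // Nothing
```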
And boom, now all we need to do is hook them up so that they all squish down into a single function. Done.
Again, this is only scratching the surface: if we wanted to use our Types to sub in a controlled, user-facing error message when there’s no database record, that’s easy: we’d just need to explore some of the other algebraic interfaces (.cata, .fold, .getOrElse, etc.) on our Nothing/Something types, which collapse them into either the inner value (Something) or a specified default value (Nothing). Or we could introduce a new Type that’s built specifically for this purpose: Either.
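As a taste of where that leads, here’s a hypothetical getOrElse sketch; method names and shapes vary between libraries, so treat this as an illustrative shape rather than a standard API:

```javascript
const nothing = {
  map: f => nothing,
  // Nothing collapses to the supplied default:
  getOrElse: defaultValue => defaultValue,
};
const Nothing = x => nothing;

const Something = x => ({
  map: f => Something(f(x)),
  // Something collapses to its inner value, ignoring the default:
  getOrElse: defaultValue => x,
});

Something("Alice").getOrElse("No record found"); // "Alice"
Nothing().getOrElse("No record found");          // "No record found"
```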
But that’s all for me today!