Functional Programming is for Dummies

Drew Tipson
17 min read · Sep 4, 2015

A tutorial on Promise composition, functional Lenses, & how to avoid knowing too much about them

One of the best features of the functional programming style for perpetually distracted people like me is that you only have to understand something long enough to write a single, perfect implementation of it. Then you can simply move on to thinking about other things.

Sure, someone might come along and rework your implementation so that it’s shorter or faster… and more power to them. But if you’ve kept the functionality pure, they’re never going to be able to rework it so that it’s righter. Let’s start with a nice little example anyone can understand: taking some value and adding 1 to it.
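In code, a minimal sketch of it:

var addOne = a => a + 1;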

Immaculate!

Now, sure, you could get a big head and decide you want to check the type of argument a to make dead sure it’s getting an actual number to work with, but why? And what would you do, internal to the function, if it wasn’t? addOne does exactly what it says it will, & if someone misuses it… that’s their problem.

Even if some arbitrary input like addOne([2,”bar”,{}]) outputs something crazy like “2,bar,[object Object]1” so what? It’ll do that same crazy thing, always: that’s simply what you get when you add 1 to something in javascript after all. If you really did want to check the type, the polite thing to do here would just be to write a type-checking function and then wrap it around addOne. Don’t mess with the perfection of addOne itself!
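If you really must, that polite wrapper could look something like this (ensureNumber is just a name made up here for illustration):

var ensureNumber = fn => a => {
  if (typeof a !== 'number') throw new TypeError('expected a number');
  return fn(a);
};

var safeAddOne = ensureNumber(addOne); // addOne itself stays pristine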

Now, you might already know what I’m going to say next: if you ever found yourself needing to addTwo, you could certainly write a whole new function, similar in form to addOne, sure. But perfecting addTwo would take precious thought and time. And given that we’ve already written the perfect addOne function… you could instead just glue together two addOnes and call it a day. This is a pattern which you might already know is called “composition.”
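Sketched out, assuming the usual two-function version of compose:

var compose = (fn1, fn2) => x => fn1( fn2(x) );
var addTwo = compose(addOne, addOne);

addTwo(3); // 5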

Did you follow all that? Well, I certainly can’t hear your answer, so here’s a better, more rhetorical question: why did you even try?

We wanted a way to addTwo and there it is at the end, have at it! Whatever number you give it, it’ll always unfailingly add 2 to it, and you don’t have to know how. It just does, and this leaves your brain free to think about whether or not you want to addTwo, which is probably a much more important question.

Now, if you ever did need to know more about how it worked, you still always have the option of looking up a few lines and seeing that addTwo is the composition of two addOnes, which roughly means that some operation (one-adding) is executed on some value, with some output, and then some operation (in this case, one-adding, again) is executed on that output. So, like we said, just function composition. And if you needed to know what, really, “composition” was… well, could you sit down and write the compose function itself from memory? Maybe. Probably? Obviously?

Well, who cares: it already works, and it will always work. Because you’ve relied on a logical guarantee in your code, you’re able to reason into a higher-level of abstraction without having to revisit every quirk of the implementation. Even if you have no idea how exactly compose itself is written out you already instinctively know that you could create an addFour function just by composing together two addTwos. And you probably also know there are some mathematical laws out there that prove all of this. Those laws are simple enough that maybe you could even call them to mind if you needed to, but please do not try to do that now, that sounds exhausting.

We’re not celebrating ignorance here: if that was where this was going I’d be advising you to stop reading now, lest you learn something. I’m just saying that once you create a certain level of trust in your functions and functional laws, you can take them for granted and spend your massive but always finite brain beans on some other problem.

Brain beans

Likewise, if I, the author of this piece, establish a certain level of trust with you, the reader, you won’t need to remember everything I’ve said. Your wondrous bean-sorting brain will simply compose together every single point I make into a slick little mental program that can stand in for all the specifics in a pinch. “I understand how to compose Promises and use Lenses” you can say to yourself, even if, at that moment, you don’t, because you’re busy understanding other things.

The point is, you’ll be certain that if you put your mind to it at some point, not only could you figure it out, but that there’s something inherently true to figure out. But most of the time you won’t have to do any of that. Having absolute certainty that they work, you can largely make do with just general instincts about what they can do and what they can’t. That’s not only good enough to get you through the day: it’s liberating. You have a lot of other things to worry about!

So, Promises. Let’s start using them without actually thinking too much about them. Heck, let’s just start using them without even explaining what they are, which should be a real time-saver!

Anyhow, in a really fantastic piece by James Coglan that you absolutely should not go read right now (it would be totally distracting and I’m going to give you the gist of it anyhow), he claims that Promises are the Monad of asynchronous programming.

I honestly can’t tell you if that’s right or wrong (ed., from the future: I can now say that yes, it’s sort of true, but it’s also wrong, because Promises are a mess), because while I swear I understood Monads a week ago, I have much more important things to understand this week. What I can tell you is that his implementation of Promises as just the simple composition of functions is really insightful and cool, and it really works. Here’s a minor variation on it I’d like to talk about:
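Roughly, and with the little helpers (program, unit, delay, log) defined further down, it looks like this:

var addThreeSlowly = program(
  addOne,
  delay(1000),
  log,
  addOne,
  delay(1000),
  log,
  addOne,
  delay(1000),
  log
);

addThreeSlowly( unit(0) ); // eventually logs 1, then 2, then 3, a beat apart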

Like I’m going to keep reiterating: you don’t need to know how that all works. Certainly, I’m about to explain how it works, and you will probably at some point, therein, understand how it works. But what’s so nice about this particular pattern is that you won’t need to. It’s just a series of instructions, conveniently named. Naming things accurately is 90% of programming, and this little construct helped us nail it.

So let’s instead first talk about that code in a very vague sense. The “program” we’re creating here is defined entirely by creating a list of things we want to happen, the output of each step seamlessly piping into the input of the next. What are those “things?” Given the title of this article you might be inclined to think that they are Promises, but in fact, none of them are. They’re all simply functions.

Now, some of those functions (namely, as you might expect, delay) do return “promises” once they run, but so what? If you don’t want to think about it, then that pesky detail is happily abstracted away for you, leaving you free to simply decide what you want to happen, and then what you want to have happen next, and so on. Some things in there are synchronous, some not, who cares? Given some starting value (zero, here), this program is going to add 1 to it, wait a bit for no real reason, log it to the console so we can be absolutely certain about what’s happened so far, and then do that 2 more times for good measure.

How? Well, we already know what addOne does, so let’s look at what log does. It’s so simple I won’t even waste a gist on it:

var log = x => !console.log(x) && x;

So it’s just a function that takes an argument, has a gross side effect (eww: so it’s not really a pure function, but then, I only included it for your sake, so that’s on you), and does nothing else but return the argument it was passed. Since console.log itself returns nothing (i.e. undefined), negating that gives us true. And since true && x evaluates to x, the function just returns the same argument it started with.

We could have called it addNothingUsefulToThisProgram but that would have been a little insulting. Take out the !console.log(x) && part, which is really just an artifice that lets us see what the program is up to, and we basically have the classic Combinator function “identity,” which simply echoes back the very same argument it was passed, unaltered. That may seem pretty useless, but this is functional programming, and dammit, that means it still counts.

Why does it still count? And what is program? Well, it looks sort of like compose… and in fact it is sort of like compose (aside from the fact that it takes its arguments left-to-right, top-to-bottom, instead of the other way around, like with compose). But since what’s happening here is clearly asynchronous, it can’t be exactly like compose (not until ES2018 rolls around perhaps?).

Now, if you’ve already ignored my advice and already know what “compose” is, you probably saw that it simply takes two functions and then returns a new function that, given an argument “x,” will:

  • run the first function, fn1, on…
  • …the output of the second function fn2
  • …after it has first been run with argument x

fn1( fn2(x) )

Now, obviously, if the second/inner function was asynchronous (i.e. it took time to complete), the first/outer function would run before the second/inner function had finished (because javascript is a blockheaded, single-threaded language that immediately executes instructions line by line, block by block), meaning it runs on basically nothing, and… well look, you might already know that the solution to this sort of complication is Promises.

But if Promises are such a swank solution to functional composition in the world of asynchronous functions (:gasp:), then where in that “program” construct are the “Promises” hiding? Where is that paradigimagical Promise method .then()? And what is that “unit(0)” thing that we used to actually run the program, in lieu of just running it with the bare value zero?

Ok, fine: “unit” is just a cheeky name for a function I usually call resolve, which is just a shortcut for writing Promise.resolve, which is just a way of creating a new Promise that’s already resolved, and wraps a particular value in a nice little container that is blissfully immune to the ravages of time. Now, I didn’t explain what a Promise is, let alone a resolved one, but just know this: if you ever needed to swap out unit(0) for a real Promise/$.Deferred-empowered-ajax call, no problem. If any of the steps in the program themselves were further ajax calls, again: no problem. Boom, less future problems!

I will admit that I used the word “unit” in this case mostly because that fantastic Coglan article [that I told you not to read just yet!!] did. But I also accepted that terminology because it makes a lot of intuitive sense for our educational purposes: the core “unit” of this type of composed computation is, in fact, a resolved Promise. And “unit” is just a way of transforming any simple value into that core computational construct that we’re going to be working with throughout: a Promise. It’s that very symmetry (using Promises all the way through) which allows our little system to handle both Promise-returners and value-returners.

Once we’ve done that, instead of writing Promise-y chain things like this:

Promise.resolve(0).then( addOne ).then( delay(1000) ).then(…

…and so forth, program can instead abstract away that underlying pattern: a series of functions (some dealing with async matters, some not) that are just being composed together in a series, using the core Promise method .then(). So here’s unit/resolve and the rest of the underlying magic (we’ll be using lodash as our _.functional _.utility _.belt):
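A sketch of what that plausibly looks like (the exact lodash incantations may vary, but the shape is the thing, and the numbers match the walkthrough below):

var unit = x => Promise.resolve(x);

// 1. bind: hook a function up to a Promise as its .then callback
var bind = (promise, fn) => promise.then(fn);

// 2. pipe: curried, so it can take the list of functions now and the
//    starting Promise later; bind chains each function onto the last
var pipe = _.curry( (fns, promise) => _.reduce(fns, bind, promise) );

// 3. program: scoop up however many functions it was handed into a single
//    array (via call & arguments) and pass that along to pipe
var program = function () {
  return pipe.call(null, _.toArray(arguments));
};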

It’s all pretty simple really!

  1. bind just takes a promise and a function and then hooks the function up to the Promise as a callback using our old friend .then.
  2. A little fancier, pipe is using some lodash methods to loop through the list of functions and, using bind, chain them all together in a linear series. The fact that all of that is wrapped in _.curry simply means that we can call the function pipe with JUST the list of functions if we want, long before we tell it what Promise it should start with. Whenever a curried pipe gets only one of its two mission-critical arguments, the curry construct ensures that we’ll get back a new, pre-configured function that’s waiting to find out which Promise to run through that chain.
  3. program, finally, is simply a cutesy name for that first curried function we’re getting back when we pass a list of functions into pipe (we’re using call here so that we can just pass along the entire list of functions passed to program as arbitrary arguments instead of as a single value passed as an array). Once created, a program will be ready to run through that series, starting with any Promise it’s handed, returning a final Promise containing the result of the composition. Pretty cool!

Now, that “add 3 slowly” program was definitely a pretty silly example to start with (oh, if you were still wondering how delay worked, well here’s how), but it was dumb and simple because we’re about to make things way more complicated. Which is exactly why it’s nice that you can, if you please, just forget entirely how program works for the time being, just trust that it does, and think more about why you’d use something like it.
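For the record, one plausible delay is just a curried wrapper around setTimeout that passes its value through untouched:

var delay = ms => x => new Promise(resolve => setTimeout(() => resolve(x), ms));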

Here’s one simple answer: almost nothing in this world ever comes to you in the form you want it in, nor stays the same for long. It might come in parts. Those parts might arrive in the wrong format, and they might arrive at the place you need them to be at different times. You might not even have the right tools to assemble those parts, or if you do, those might come in at all different times. One part might all of a sudden change its api to parts/v2.0, requiring you to refactor everything. Life is a mess.

Functional, declarative programming tries to deal with that mess by keeping certain things so dead simple that you don’t have to worry about the individual parts, how they work, or what crazy things they might do (mostly because, by design, they don’t do crazy things). Instead, you just get to explain what you want done. And if you ever do need to re-tool a particular part, you can do so without worrying about causing all sorts of unpredictable chaos to everything else.

So, let’s get some data from an api. Let’s stop with all the log and delay nonsense and just act like adults here.
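Something in this spirit (the exact data and endpoint are made up here, but this is the shape the next few paragraphs lean on):

var bigBird = {
  name: 'big bird',
  age: 6,
  comments: [ { body: 'hello!' }, { body: 'sunny days!' } ]
};

// the mocked-up "api": just that data, wrapped in a Promise, arriving a beat later
var api = () => delay(500)( bigBird );

// the mystery program (we'll actually build it once Lenses show up):
// ageSomebodyByTwoYears( api() ).then( log );
// // -> a new object with age bumped from 6 to 8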

So, that there is a mocked up api that returns a complex data object that we’ll run through a mysterious program called ageSomebodyByTwoYears, resulting, magically, in an object that’s been mutated exactly as we’d hoped. We have some new things to think about there, so hopefully you’ve made room by forgetting how bind and pipe and all that nonsense works. Let’s just deal with figuring out what the mystery program there actually was.

First off, we don’t always work with just single simple values, right? Nope, and in the above case we’ve got a JSONish object with some fairly complex data about somebody. All we really wanted to do is just addOne twice to the property “age” (or compose an addTwo and use that, or whatever). But if you just threw a function like addOne at an entire object like bigBird then the result would be “[object Object]1” right?

That’s because we want to work on just the property “age” and while we do have a simple, non-confusing function that can addOne, what might be confusing is how we could target that function at a specific property alone. I mean, we could always write a very specific function that did just that…
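Say, something along these lines (a stand-in for the kind of first attempt that doesn't pan out):

var ageByAYear = compose( x => x.age, addOne );

ageByAYear( bigBird ); // undefined. Hmm.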

Er, no. Ok, but I see why it didn’t work: the type signatures went wrong. Here we go:
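Something in this spirit, blunt but effective:

var ageByAYear = person => {
  person.age = addOne(person.age); // poke the age in place...
  return person;                   // ...and hand the whole object back
};

ageByAYear( bigBird ); // { name: 'big bird', age: 7, ... }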

Ok, so that works, but that ageByAYear function seems awfully over-specific and imperative… maybe this is a better way to write it?
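Maybe something tidier, like this:

var ageByAYear = compose( addOne, person => person.age );

ageByAYear( bigBird ); // 7. Just 7.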

Ok that’s great! Except… for the part where it doesn’t actually do what it was supposed to do, which was to increase the age and then return back the whole modified object. Oh. What sort of generalized pattern would let us deal with that?

Well, that’s what Lenses are for. And, specifically, a functional construct sometimes called over. Here’s a simplified sorta-but-not-really-at-all implementation of a Lens and its over-ish method, all mashed together:
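Something like this (lodash again; a real Lens would be far more general), plus the mystery program from earlier, finally built out of it:

var lensOver = _.curry( (key, fn, obj) =>
  _.assign({}, obj, { [key]: fn(obj[key]) })
);

var ageSomebodyByTwoYears = program(
  lensOver('age', addOne),
  lensOver('age', addOne)
);

ageSomebodyByTwoYears( api() ).then( log );
// -> a brand new object with age bumped from 6 to 8; the original is untouched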

Now, did you really want to read through and understand how lensOver works?

Of course not, don’t be ridiculous: I don’t even remember, and I just wrote it. Just know that it’s not really the fully powered thing that a real “Lens” is in functional programming, not by a long shot (real lenses are much more flexible). But it’s tiny and it’s good enough for our purposes here, which are to:

  1. target a simple function at a specific location in a nested object,
  2. return a clone of the entire modified object (treating the old one as if it were immutable), and
  3. illustrate the power of that basic concept.

I suppose we haven’t shown that much power yet and so you might still be inclined to keep writing one-off implementations of ageByAYear and other similar transformations. Well, here’s my first attempt to dissuade you: that sort of lensing pattern could end up happening a lot, and at all sorts of arbitrary depths in a complex data structure.
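For instance (capitalizeFirst and the program’s name here are just illustrative stand-ins; the pattern is the point):

var capitalizeFirst = s => s.charAt(0).toUpperCase() + s.slice(1);

var tidyUpBird = program(
  lensOver('name', capitalizeFirst),
  lensOver('comments', comments =>
    comments.map( lensOver('body', capitalizeFirst) )
  )
);

tidyUpBird( api() ).then( log );
// -> a new copy: 'Big bird', every comment body capitalized, original untouched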

So here, we have a program that operates on two different parts of the object in turn. The first operation is similar to what we’ve already covered, applying a function to the “name” property while still passing along a modified value. But in the second, we’re first peering into a property “comments,” then, while mapping over the array we found there, we’re peering even deeper inside each comment, then applying a capitalizeFirst function to the property “body” in each of them. We’re also politely returning a new copy of the modified structure and not disturbing the original values at all. Whoa!

So, instead of writing out exactly what we want to happen in each of those blocks, we have a common pattern: “open up a complex object, peek into just one part, run it through a function, and return a clone of the entire object + the modification.” We already have the tool for the job (the well-known, well understood Functor operation, map), we just needed an equally intelligible functional construct to help us apply it in the way we wanted.

Again, we could certainly write code that does all that for each use case and lots of people do. But every time we write out those anonymous, one-off functions, two bad things happen: 1) we have to think about the specific implementation way too much and 2) we run the risk of accidentally screwing any one of them up, even with just a typo!

On the other hand, once we’ve written lensOver correctly the one time and confirmed (or perhaps even proved empirically!) that it works exactly the way we think it does, both of those problems go away forever, and the only thing we have to worry about is how and why we’re using lensOver itself. If you happen to mistype it as lendsOver then, hey, at least the typo that messes everything up is right there and obviously wrong in the declarative instructions, rather than buried away in some one-off implementation!

So, anyhow, that’s all pretty cool: we now have two interesting little programs for handling and massaging asynchronous data… and they’re built out of a bunch of simple, flexible, immutablish functions!

Better yet, as you may have secretly suspected all along, programs are themselves composable:
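For instance, sticking with the two programs we already have and plain old compose, since each one takes a Promise and returns a Promise:

var ageAndTidy = compose( tidyUpBird, ageSomebodyByTwoYears );

ageAndTidy( api() ).then( log ); // aged by two years AND tidied up, in one pass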

They both kick off with a Promise, and they both return a Promise, so why wouldn’t they compose? Of course they would.

Now we’ve been sort of underselling the Promise-y part of all this. If you’ve worked with Promises enough, you know that sometimes things go wrong and that no chain of .thens, even an implicit one, is truly happy without a .catch hanging around at the end.

But how on earth could, say, a series of just addOnes ever go wrong? Well, for one thing, if the original Promise rejected/failed. Or some other async operation explicitly rejected somewhere along the chain (ajax-ish function hitting a server failure, for instance). But failures are not always so explicit: sometimes everything works, but the value we get back just isn’t what we were hoping it would be, and it becomes pointless to continue. In this case, we won’t need, and oftentimes really won’t want, the rest of the program to run.

So let’s introduce another little construct that will allow us to reject results we don’t like: check_if
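Here’s a sketch of it (multicompare is stubbed down here to something far less clever than the real thing: it just accepts either a predicate function or a plain value to compare against):

var multicompare = test => x =>
  typeof test === 'function' ? !!test(x) : x === test;

var check_if = function (test, message) {
  return x =>
    multicompare(test)(x)
      ? Promise.resolve(x)
      : Promise.reject({ message: message || 'check_if failed', value: x });
};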

My advice: don’t inquire too deeply into what multicompare might be: it just provides a crazily overloaded syntax for specifying various sorts of comparison and truthiness tests to run on a value. Instead, let’s just talk about check_if itself.

Maybe you’re not a fan of currying, so this time I’ve written check_if without using it. But it still works very much like the other curried functions we’ve seen: when it’s called with some test parameter, it will then return a new function that just waits around for something TO test (along with an optional message to explain what the test was demanding).

Once it’s called again with something to test, it simply resolves or rejects with that something, unaltered. So what’s the use of that? Well, let’s now pretend that we have an api that can query for random birds if given a path to each bird we want to examine and potentially work with.

Just for giggles, I’ve moved the “api” call inside the program this time: now it’s the api function inside the program that is receiving a specific api route to use (specified by a string wrapped in a resolved Promise).
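Sketched out, with made-up routes, data, and test condition:

var birds = {
  '/birds/big-bird': { name: 'big bird', age: 6 },
  '/birds/oscar':    { name: 'oscar', age: 46 }
};

// a pretend route-driven api this time around
var api = route => delay(500)( birds[route] );

var fetchYoungBird = program(
  api,
  check_if( bird => bird.age < 10, 'we only wanted young birds' ),
  lensOver('age', addOne),
  log
);

fetchYoungBird( unit('/birds/big-bird') ).catch( log ); // passes the check, gets aged up & logged
fetchYoungBird( unit('/birds/oscar') ).catch( log );    // fails the check: skips straight to .catch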

Whereas all the other functions we’ve been talking about have taken input values, done something, and returned output, check_if basically always returns the same value it was given (albeit wrapped in a new error object if it fails). What it really does instead is decide the state of the Promise chain: short-circuiting it and skipping any remaining operations whenever something doesn’t pass an arbitrary check that you can clearly define. And since multicompare is pretty darn flexible, you can specify nearly any test condition, including any arbitrary comparison function.

In the example above, the api result is “checked” to see if it’s a value we’re happy with before doing something else with that value (a more common, less silly case might be to check a JSON response to see if it represents a success or error condition). And in the second use case above, it’s not, so it instead rejects, skips the rest of the chain, and the .catch we added allows us to try and do something to recover from the disaster if we want to.

Note that we’re attaching .catch to the actual execution of the program, not pre-baking it into the program itself. The main reason we aren’t doing that is because we can’t: we didn’t really build program in a way that allows that as an option (the result of a program itself isn’t yet a Promise with Promise methods, it’s just a function that will work on and return a Promise). If we wanted to build a more complex version of program that allowed you to bake in a specific .catch function, we probably could.

But that’s really only useful if the program itself has some internal way of reliably recovering the computation or at least signifying the error in some way that’s not going to harm the composability of the program (if your program returns numbers, for instance, you can’t just return an Error or a string message like “whoops!”). So, generally, it’s better to define a .catch per usage, because it’s the specific use case that will determine what to do when the specific use case fails. tl;dr: Just make sure that there’s a catch in there somewhere at the end of every runtime chain.

[Article Ends Abruptly Without Any Closure At All To Illustrate the Importance of that Final Point]

…but if you really want more, check out the follow-up where we’ll reduce our reduce and make it even more functional-friendly.

There’s also much more I have to say about true Lenses (functional references).

Also: now that I’ve had time to think about it: yeah, Promises are Monads… but they’re not particularly nice, and these days I highly recommend trying out functional Tasks instead.
