JavaScript: Functional & Imperative — map, filter & reduce

Jack Harvey · Published in ClearScore
Feb 27, 2020 · 8 min read

Two Paradigms

When we talk about a certain programming language or coding style as being ‘functional’ or ‘imperative’, what we’re referring to are programming language paradigms. These paradigms are classifications for programming languages, and both of them exist within JavaScript.

The functional programming paradigm avoids the mutation of data, instead utilising “pure” functions, which always produce the same output given the same input, and do not produce any side effects. Functional programming also falls under the larger classification of declarative programming languages — languages which are composed of expressions as opposed to commands.

Imperative programming consists of a series of commands which modify the system’s state. It’s often said that imperative programming focuses on describing how a program operates.

The functional methods in JavaScript we’ll be focusing on are the list operations: map, filter & reduce. The logic for each of these is defined by passing a function as a parameter, known as a lambda function.

Arguments for & against

It’s said that the functional style is much more expressive. The functional style is also typically less prone to bugs due to the principle of immutability and the absence of side effects, which make it easier to reason about the program. However, with the imperative style you have much more fine-grained control over what your program does, and the performance is typically better. Hopefully after reading this you will be better equipped to make up your own mind.

Comparing like for like

map

In JavaScript, the map function creates a new array with the results of performing the lambda function upon every element in the array. For example, we can double every item in an array like so:

const array = [1, 2, 3];
const newArray = array.map(number => number * 2);
console.log(newArray);
// [2, 4, 6]

One property of the map function is that the outputted array always has the same length as the inputted array. Therefore if you have an operation in which the outputted array could possibly have a different length to the inputted array, you should consider an alternative method.
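To see this property in action, here’s a quick sketch (with made-up values) of what happens if you try to drop elements with map:

```javascript
const array = [1, 2, 3];
// map cannot shrink the array: "dropped" elements simply become undefined
const attempted = array.map(number => (number < 3 ? number : undefined));
console.log(attempted);
// [1, 2, undefined]
```

The result still has three elements, which is a hint that filter (or reduce) is the better tool for this kind of operation.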

If we wanted to perform this operation in an imperative style, we’d instantiate the array in the same way, but then we’d reassign each item to a value that is double itself. In code it would look like this:

const array = [1, 2, 3];
for (let i = 0; i < array.length; i++) {
  array[i] = array[i] * 2;
}
console.log(array);
// [2, 4, 6]

filter

The filter function tests each item against the evaluation function: if the evaluation function returns true for an element, it remains in the array; if not, it is discarded. For example:

const array = [1, 2, 3];
const newArray = array.filter(number => number < 3);
console.log(newArray);
// [1, 2]

The equivalent operation in an imperative style would be the following. Note the inverted condition, and that we iterate backwards: splice shifts everything after the removed element down by one, so a forward loop would skip the element that follows each removal.

const array = [1, 2, 3];
for (let i = array.length - 1; i >= 0; i--) {
  if (array[i] >= 3) {
    array.splice(i, 1);
  }
}
console.log(array);
// [1, 2]

We use splice here because the delete operator doesn’t close up the gap it creates; it leaves behind an empty slot and keeps the array length the same. The first argument to splice is the starting index, and the second is the number of items to delete. Like other imperative methods, splice modifies the array in place.
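A small sketch of the difference, using arbitrary values:

```javascript
const withDelete = [1, 2, 3];
delete withDelete[1]; // leaves an empty slot; the length is unchanged
console.log(withDelete.length);
// 3

const withSplice = [1, 2, 3];
withSplice.splice(1, 1); // removes the element and closes the gap
console.log(withSplice.length);
// 2
```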

reduce

The reduce method is the most powerful in the suite of list operations currently available. Like before we pass a lambda function, but now this function has access to the accumulated result as it stands at each iteration, so each call builds upon the previous one. Additionally, we’re not obliged to return an array at all; we can return effectively any type, even objects.

In this particular case the lambda function is known as a reducer, and as we progress through the array we progressively build up the “accumulator”, a single output value. The signature of the lambda function is a little different this time: the first parameter is the accumulator, and the second is the current element under consideration. Additionally, reduce itself accepts not only the reducer function but also an optional initial value. This is important in certain scenarios where your operations may otherwise produce an undesired output due to type coercion.
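For completeness, the reducer actually receives up to four arguments: the accumulator, the current element, the current index, and the array itself. A small trace with arbitrary numbers shows this, and also shows that when no initial value is supplied, the first call starts at index 1:

```javascript
const trace = [];
const total = [10, 20, 30].reduce((accumulator, next, index) => {
  trace.push(`index ${index}: ${accumulator} + ${next}`);
  return accumulator + next;
});
console.log(total);
// 60
console.log(trace);
// [ 'index 1: 10 + 20', 'index 2: 30 + 30' ]
```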

We have the following example, where we wish to sum an entire array:

const array = [1, 2, 3, 4, 5];
const sum = array.reduce((accumulator, next) => accumulator + next);
console.log(sum);
// 15

This may be a little bit confusing, so let’s dive in a little.

On the first execution, as we didn’t specify an initial value, the initial value of accumulator defaults to the first value in the array, which is 1. next is the next element in the array, so 2. Now we state that we wish the next value of accumulator—the function's return value—to be the result of accumulator + next, which of course is 3. This process continues until the array is exhausted.

╔══════════════════╦═════════════╦══════╦══════════════╗
║ Iteration ║ Accumulator ║ Next ║ Return value ║
╠══════════════════╬═════════════╬══════╬══════════════╣
║ First iteration ║ 1 ║ 2 ║ 3 ║
║ Second iteration ║ 3 ║ 3 ║ 6 ║
║ Third iteration ║ 6 ║ 4 ║ 10 ║
║ Fourth iteration ║ 10 ║ 5 ║ 15 ║
╚══════════════════╩═════════════╩══════╩══════════════╝

In contrast to map, where the outputted array always has the same length as the inputted array, if your output may be a different length, or indeed a different type, to the input, you most likely want a reduce.
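As a sketch of that flexibility, filter itself can be expressed as a reduce, which makes the length-changing capability explicit:

```javascript
// The earlier filter example, rewritten as a reduce with an
// empty array as the initial value
const array = [1, 2, 3];
const newArray = array.reduce(
  (accumulator, next) => (next < 3 ? [...accumulator, next] : accumulator),
  []
);
console.log(newArray);
// [1, 2]
```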

The equivalent program in an imperative style would look like the following:

let sum = 0;
const array = [1, 2, 3, 4, 5];
for (let i = 0; i < array.length; i++) {
  sum += array[i];
}
console.log(sum);
// 15

Here we’re accumulating a running total, incrementing it by each value contained in the array.

Due to the way types are evaluated in JavaScript, we can make a subtle change in order to produce a fundamentally different output. The infix + operator is overloaded in JavaScript, meaning it can perform more than one operation depending upon the arguments. In the previous example the accumulator was a Number (it defaulted to the first element of the array), therefore + was evaluated as addition. However, if we set the initial value to '' (empty string) so that the left-hand side is now a String, the operation is evaluated as string concatenation, with the next value being coerced into a String at each step. Thus we have the code:

const array = [1, 2, 3, 4, 5];
const sum = array.reduce((accumulator, next) => accumulator + next, '');
console.log(sum);
// '12345'

And so the series of iterations looks like the following:

╔══════════════════╦═════════════╦══════╦═══════════════╗
║ Iteration ║ Accumulator ║ Next ║ Return value ║
╠══════════════════╬═════════════╬══════╬═══════════════╣
║ First iteration ║ '' ║ 1 ║ '1' ║
║ Second iteration ║ '1' ║ 2 ║ '12' ║
║ Third iteration ║ '12' ║ 3 ║ '123' ║
║ Fourth iteration ║ '123' ║ 4 ║ '1234' ║
║ Fifth iteration ║ '1234' ║ 5 ║ '12345' ║
╚══════════════════╩═════════════╩══════╩═══════════════╝
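The initial value matters for another reason too: calling reduce on an empty array with no initial value throws a TypeError, whereas supplying one makes the call safe:

```javascript
// [].reduce((accumulator, next) => accumulator + next);
// TypeError: Reduce of empty array with no initial value

const sum = [].reduce((accumulator, next) => accumulator + next, 0);
console.log(sum);
// 0
```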

My favourite way to use reduce is to build objects. The current offering in JavaScript for treating objects as iterables is lacklustre, with reduce being possibly the only decently expressive way to move from one type to the other, with no explicit coercion or intermediate steps. Unfortunately, it only works in one direction, from Array to Object. Despite this it usually empowers you to construct very elegant solutions. For example, take the problem of tallying up the number of times each string occurs in an array.

const array = ['a', 'b', 'a', 'c', 'b', 'a'];
const tally = array.reduce((accumulator, next) => ({
  ...accumulator,
  [next]: accumulator[next] + 1 || 1
}), {});
console.log(tally);
// { a: 3, b: 2, c: 1 }

It only took one expression to get from a list of strings to a fundamentally different representation of that data in the form of an object.

To explain somewhat: in the lambda function’s return value we spread the current accumulator so that the previous data is not lost, and then we specify that the property whose key is the value of next (i.e. 'a', 'b' or 'c') should either be incremented if it already exists, or instantiated with a value of 1. The fallback works because when the property doesn’t yet exist, undefined + 1 evaluates to NaN, which is falsy, so || 1 supplies the starting value.
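For what it’s worth, the same tally can be written by mutating the accumulator in place. This is a common variant that avoids copying the accumulator on every iteration, at the cost of strict immutability:

```javascript
const array = ['a', 'b', 'a', 'c', 'b', 'a'];
const tally = array.reduce((accumulator, next) => {
  // increment the count for this key, starting from 0 if absent
  accumulator[next] = (accumulator[next] || 0) + 1;
  return accumulator;
}, {});
console.log(tally);
// { a: 3, b: 2, c: 1 }
```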

Good times to use for

breaking

The for loop offers us the break functionality, where the array methods do not. This means that if we have operations where we would benefit from exiting early we should consider using a for loop.

Say, for argument’s sake, you’re doubling each element but don’t want any result to be greater than 100, and you know your array is sorted in ascending order. You could break—exit the loop—at the first iteration that would violate this condition.

const array = [1, 2, 3, 49, 51, 53];
for (let i = 0; i < array.length; i++) {
  if (array[i] > 100 / 2) {
    break;
  }
  array[i] = array[i] * 2;
}
console.log(array);
// [2, 4, 6, 98, 51, 53]

Chunking

Sometimes the most elegant solution to a problem may be for us to step through an array with a specific step size. For example, if we had an array of data for many months, we may benefit from a step size of 12 so that we can look at data for the same month across multiple years. This is far more natural with a for loop, as the functional methods progress through the array one element at a time.

To do this we modify the third part of the for loop definition, after the second semi-colon, so that the loop variable i is incremented by 12 each time.

const monthData = [/* ... */];
for (let i = 0; i < monthData.length; i += 12) {
  /* do something */
}
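As a concrete sketch with made-up data, here are two years of monthly readings, where a step of 12 pulls out January for each year:

```javascript
// 24 invented monthly readings: year one Jan–Dec, then year two Jan–Dec
const monthData = [
  5, 6, 9, 12, 15, 18, 21, 20, 16, 12, 8, 5,
  4, 7, 10, 13, 16, 19, 22, 21, 17, 13, 9, 6
];
const januaries = [];
for (let i = 0; i < monthData.length; i += 12) {
  januaries.push(monthData[i]);
}
console.log(januaries);
// [5, 4]
```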

Considering other elements

It’s more fitting to use an imperative style when we are interested in looking at adjacent elements, or any other elements, in the array.

For example, say we’re analysing a list of readings and we want to identify sudden bursts or drops, defined as increases or decreases of some arbitrary value over the previous readings. In the imperative style, accessing adjacent elements simply extends our use of square bracket notation, so there is a uniformity and consistency to the solution. If we attempt the same in the functional style, we find we still have to use square bracket notation, so now we’re mixing two ways of accessing our array elements. We can apply that notation to the third (or fourth, when using reduce) parameter of the lambda function, which is the array itself, but with this we tempt a violation of immutability. Or we can use the array as it is defined in the outer scope, but in doing so we reach out of, and violate, the neat, self-contained context of the lambda function.
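A minimal imperative sketch of the readings example, with made-up data and an arbitrary threshold:

```javascript
const readings = [10, 11, 12, 30, 31, 12, 13];
const threshold = 10; // arbitrary definition of a "sudden" change
const bursts = [];
const drops = [];
for (let i = 1; i < readings.length; i++) {
  // compare each reading with its previous neighbour
  const change = readings[i] - readings[i - 1];
  if (change > threshold) {
    bursts.push(i);
  } else if (change < -threshold) {
    drops.push(i);
  }
}
console.log(bursts, drops);
// [ 3 ] [ 5 ]
```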

Wrapping up

Being dogmatic in the application of one style or the other is not going to produce the best results. Keep in mind that any attempt to forcibly coerce one style or the other to behave how you want when it’s not suited to doing so will produce code that is unintelligible to your teammates, and yourself too.

Think ClearScore sounds like an interesting place to work? We’re hiring: clear-score.workable.com
