Data Immutability with vanilla JavaScript

Unlike most trends in the world of JavaScript, data immutability is bound to stick with us for a while, and for good reason: it isn’t really a trend at all. It’s a way of coding (and thinking in code) that promotes clarity, makes data flow easier to follow and understand, and makes code less prone to errors.

Ricardo Magalhães
Aug 26, 2017

But while newer flavours of the JS language give us a more robust toolset to work with than ever before, things can still look a little bit scary when you put it all together without the help of libraries like Immutable.js. Getting comfortable with reading and writing the most common use cases is very helpful.

In this short post, we’ll look at pure JS ways (with ES2015++, and yes, I may have just invented this notation) to add, remove, and update deeply nested properties in objects and arrays, and at common patterns for reproducing these operations.

Playground: Direct link to the JS Bin

Objects

const person = {
  name: 'Ricardo',
  location: 'Berlin',
  interests: { coffee: 9, climbing: 9, wasps: 0 }
};

Changing a simple object property

const updatedPerson = Object.assign({}, person, {
  name: 'Douglas'
});

Simples. We’re telling Object.assign to take this empty {}, copy person on top of it, and then override the name property. The rest of our object stays the same.
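As a quick sanity check, the original person is left untouched (assuming you log both in a console):

console.log(updatedPerson.name); // 'Douglas'
console.log(person.name); // 'Ricardo' (the original is untouched)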

Changing deeply nested properties

const updated = Object.assign({}, person, {
  location: 'Moon',
  interests: {
    coffee: 10 // Crap! Only this one is copied
  }
});

On the surface, it might seem like this works, but this doesn’t copy the rest of the interests object. It will leave us with an updated {coffee: 10} and location: 'Moon', but it won't copy climbing or wasps. No one needs wasps, anyway. But how do we solve this?
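You can see the problem by inspecting the result:

console.log(updated.interests); // { coffee: 10 } (climbing and wasps are gone)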

Instead, we need to also deeply copy the interests object, like so:

const updated = Object.assign({}, person, {
  location: 'Moon',
  interests: Object.assign({}, person.interests, {
    coffee: 10 // All other interests are copied
  })
});

Notice the double Object.assign. A bit verbose, in truth, as every nested object needs its own assign in order not to lose properties.

Spread operators

const updated = {
  ...person,
  interests: {
    ...person.interests,
    coffee: 10,
  }
};

Much nicer to look at! Spread operators are so incredible that you should definitely read more about them at MDN.
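And just to confirm that the nested spread keeps the rest of the interests around:

console.log(updated.interests); // { coffee: 10, climbing: 9, wasps: 0 }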

Deleting properties

There are a few different ways to go about it, some more efficient than others. One (slow-ish) approach is to recreate our entire object while ignoring the property we want removed. Let’s create a function that accepts our object and the name of the property we would like to see removed:

const removeProperty = (obj, property) => {
  return Object.keys(obj).reduce((acc, key) => {
    if (key !== property) {
      return { ...acc, [key]: obj[key] };
    }
    return acc;
  }, {});
};

Note: this was written in long form for readability’s sake. You can omit some of those return statements.
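For reference, one possible terser version (same logic, with the conditional inlined) could look like this:

const removeProperty = (obj, property) =>
  Object.keys(obj).reduce(
    (acc, key) => (key !== property ? { ...acc, [key]: obj[key] } : acc),
    {}
  );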

It looks a bit convoluted, but what’s happening is pretty simple: for each key that is not the one we passed in, we keep adding it to the accumulator returned by the reduce function. So now, if we want the interests property removed from our person object, we can use it like so:

const updated = removeProperty(person, 'interests');

Which would give us a brand new copy of the object, except for that one ignored property:

{ name: 'Ricardo', location: 'Berlin', }

Aside: using lodash

Once again, let’s try and remove the interests property like we did before, but using lodash. This time, we'll write it in a reducer-style function, just as an example:

import { omit } from 'lodash';

const reducer = (state, action) => {
  switch (action.type) {
    case 'DELETE_KEY':
      return omit(state, action.key);
    default:
      return state;
  }
};

This will work, even without the /fp subset of lodash. So if you’re already using lodash, you’ll get this for free. We could use it like this:

const newState = reducer(person, {
  type: 'DELETE_KEY',
  key: 'interests'
});

…which would give us the same result. Once again, be wary when reassigning data with lodash, as some of its methods mutate the original object. Consider using the /fp subset variation.
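If you do reach for lodash/fp, the same reducer could look something like this (the fp build rearranges arguments so the data comes last; shown here as a sketch):

import { omit } from 'lodash/fp';

const reducer = (state, action) => {
  switch (action.type) {
    case 'DELETE_KEY':
      return omit(action.key, state); // fp style: paths first, data last
    default:
      return state;
  }
};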

More complex updating

Consider our original data, an array of users with a name and an ID:

const users = [
  { name: 'john', id: 176 },
  { name: 'gary', id: 288 },
  { name: 'louise', id: 213 }
];

In Redux, it’s common practice to normalise your application state by having data grouped by ID for easier lookups. So let’s say this is what we want to do: we want a new object which has the users grouped by ID. For kicks, let’s also have the first letter of their names uppercased.

In short, we want to go from the flat array above to an object keyed by each user’s ID.
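Something along these lines (a sketch of the shape we’re after):

// Target shape:
// {
//   176: { name: 'John', id: 176 },
//   288: { name: 'Gary', id: 288 },
//   213: { name: 'Louise', id: 213 }
// }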

How do we turn each object’s id into a key, though? This is where you’ll see the [item.id]: something notation. It allows you to dynamically pull in a value and use it as a key. So with that in mind, let’s write our byId function, which also uppercases the first letter:

const byId = (state) => state.reduce((acc, item) => ({
  ...acc,
  [item.id]: Object.assign({}, item, {
    name: item.name.charAt(0).toUpperCase() + item.name.slice(1)
  })
}), {});

If this method could talk, here’s what it would say:

Hey, you there: for my state, apply the reduce method, which will give you an accumulator starting with an empty {}, and all my items. For each one, spread the accumulated properties, but add a new key with the value of each [item.id]. Inside each one of those, make a copy of item, but also modify its name property while you’re at it.

This will return a new object with the ID of each user as the key, spreading all their values into each object and modifying their name properties to have the first character uppercased.

What if we wanted to update more properties, other than just the name of the user? This is where you’ll think about combining pure functions in order to manipulate the data as you need, always returning a new copy. Let’s refactor this a little bit by creating an updateUser function:

const updateUser = (user) => Object.assign({}, user, {
  name: user.name.charAt(0).toUpperCase() + user.name.slice(1)
});

const byId = (state) => state.reduce((acc, item) => ({
  ...acc,
  [item.id]: updateUser(item),
}), {});

All we need now to get a new piece of state with our users grouped by ID is simply:

const usersById = byId(users);

Arrays

const original = ['a', 'c', 'd', 'e'];

Given an array, you’ll often want to do one of the following:

  • Insert an item by index
  • Remove an item by index
  • Remove by item
  • Add an item to the end

Inserting by index

  1. Copy the array until the specified index
  2. Insert our item
  3. Copy the rest of the array from the specified index

So we could write a helper function with the following signature:

insertByIndex = (state, newItem, insertAt)

Where state is the original array, newItem is the value of the item we’d like to add, and insertAt is the index at which we want to insert our newItem.

A simple way to write such a helper function could be the following:

const insertByIndex = (state, newItem, insertAt) => [
  ...state.slice(0, insertAt),
  newItem,
  ...state.slice(insertAt)
];

Wait, what?

Okay, let’s break this down. We’ve already seen that the spread operator (...) copies values, and that’s exactly what we’re doing here: we return a new array by copying the original from the beginning up to our index, inserting our new value ('b' below), then copying the rest of the array from there.

So an example of its usage would be:

insertByIndex(original, 'b', 1) 
// ["a", "b", "c", "d", "e"]

Removing by index

const removeByIndex = (arr, at) => arr.filter((item, idx) => idx !== at);
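Run against our original array, removing the item at index 1 looks like this:

removeByIndex(original, 1)
// ["a", "d", "e"]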

Removing by item

const removeByItem = (arr, value) => arr.filter((item) => item !== value);
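And removing by value:

removeByItem(original, 'c')
// ["a", "d", "e"]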

Adding an item

const addItem = (arr, value) => arr.concat(value);

So if we wanted to add banana to our alphabet array (why wouldn't you?), we could do:

addItem(original, 'banana') // ["a", "c", "d", "e", "banana"]
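If you prefer to stay consistent with the spread syntax used earlier, an equivalent version for single values could be:

const addItem = (arr, value) => [...arr, value]; // note: concat would also flatten array arguments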

Food for thought

When should you use a library for immutability, and when should you go raw with JavaScript? That really depends on the complexity of your data changes and on the amount of overhead you can bring both to your codebase and to your team (it’s yet another thing to learn). I’d argue that knowing the barebones implementation of most of these patterns is useful, especially when using Redux or any other pattern that thrives on immutability.


Anything I might have overlooked or gotten wrong? Don’t be afraid to ping me on Twitter.

Originally published on my personal blog at blog.ricardofilipe.com
