With a little help from their engineers: how Google and Facebook decide what your internet looks like

There’s a story that technology companies like to tell themselves, and it goes something like this: “We’re building software and services to make the world a better place. We’re totally objective about the steps we take to get there, and we hire the best people through a meritocratic process to make all this possible.”

And then there are the stories these companies tell their users: “We’re showing you this search result because it’s the ‘best’ one.” “We’re showing you this photo because we know you care about your friend’s new baby.” “We’re playing you this song because you told us you like dancehall music.” “We’re recommending this product because you bought that one.”

But there’s another story we hear less often, though it’s the most important one: the decisions made by platforms like Facebook and Google and Spotify are based on algorithms; those algorithms are built by humans, and humans are biased creatures.

[Comic: xkcd]

You can think of an algorithm as a series of steps, or a set of instructions that, when followed, will produce a certain outcome. In that sense, your favourite recipe is an algorithm. And just as every chef brings her own sensibility to a dish, every engineer involved in designing an algorithm brings her own judgement and preconceptions to the work.
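To make that concrete, here is a minimal sketch in Python (the recipe, quantities, and defaults are all invented for illustration) of an algorithm written out as code: a fixed series of steps that, followed in order, produce a predictable outcome.

```python
def make_tea(water_ml=250, steep_minutes=3):
    # A recipe is an algorithm: explicit steps, executed in order.
    steps = [
        f"Boil {water_ml} ml of water",
        "Put one tea bag in a cup",
        f"Pour in the water and steep for {steep_minutes} minutes",
        "Remove the tea bag",
    ]
    for step in steps:
        print(step)

make_tea()
```

Notice that even here, someone’s sensibility is baked in: the defaults of 250 ml and three minutes are one engineer’s idea of a good cup of tea.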

Put another way, the algorithms that determine what search results you see and whose babies appear most on your Facebook feed are not value neutral. They are infused with both the unconscious and explicit biases and preferences of their creators — and they tend to magnify them.
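A toy example makes the point. Sketched below is a hypothetical feed-ranking function (every field name and weight is invented for illustration, not anyone’s real code): somebody had to decide that closeness to a friend is worth three points and a baby photo a point and a half, and those numbers are judgement calls, not facts of nature.

```python
def score_post(post):
    # Each of these weights is a human judgement call.
    # Why is friendship worth 3.0 and not 1.0? Someone decided.
    W_RECENCY = 2.0
    W_CLOSENESS = 3.0
    W_BABY_PHOTO = 1.5  # someone decided babies deserve a boost

    return (W_RECENCY * post["recency"]
            + W_CLOSENESS * post["closeness"]
            + W_BABY_PHOTO * post["has_baby_photo"])

posts = [
    {"id": 1, "recency": 0.9, "closeness": 0.2, "has_baby_photo": 0},
    {"id": 2, "recency": 0.4, "closeness": 0.9, "has_baby_photo": 1},
]

# The feed you see is just this sort, top to bottom.
feed = sorted(posts, key=score_post, reverse=True)
print([p["id"] for p in feed])  # [2, 1]
```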

And it gets more complicated: algorithms interact with other algorithms in unexpected ways. As Zeynep Tufekci has written:

Programmers do not, and often cannot, predict what their complex programs will do. Google’s Internet services are billions of lines of code. Once these algorithms with an enormous number of moving parts are set loose, they then interact with the world, and learn and react. The consequences aren’t easily predictable.
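A small simulation shows why. The sketch below (entirely invented, not any real platform’s logic) implements a single trivial rule: recommend whichever item already has the most clicks. Run it, and an early, essentially random head start snowballs into near-total dominance, an outcome nobody wrote into the code.

```python
import random

random.seed(1)
clicks = {"A": 1, "B": 1, "C": 1}  # three items start out equal

for _ in range(1000):
    # The whole "algorithm": recommend the most-clicked item so far.
    recommended = max(clicks, key=clicks.get)
    # Users mostly click what they are shown; occasionally they wander.
    if random.random() < 0.9:
        clicks[recommended] += 1
    else:
        clicks[random.choice(list(clicks))] += 1

print(clicks)  # one item wins big, decided largely by early chance
```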

What does this all mean? First, that we must not default to thinking that our view of the world, as filtered through our search results and what streams past on Facebook and Twitter, is in any sense an “unbiased” one.

Second, advances in areas like photo and facial recognition have real consequences for privacy: software that can pick a face out of a crowd, at scale, erodes the anonymity we used to take for granted in public.

And finally, we need to be far more aware that algorithms have consequences for our daily lives, and that they can produce discriminatory outcomes. Here are some quick examples: Google Photos’ auto-tagging once labelled photos of Black people as “gorillas”; Latanya Sweeney found that searches for Black-identifying names were far more likely to be shown ads implying an arrest record; and a Bloomberg analysis found that Amazon’s same-day delivery initially skipped over predominantly Black neighbourhoods.

If you’d like some further reading, this is a good list from Casual Spreadsheets. And for a more technical approach, consider this tutorial from Khan Academy and Dartmouth College.