Is autocomplete evil?

How a machine’s whispers in my ear are changing the way I think

Tom Chatfield

--

“Women shouldn’t have rights.” “Women shouldn’t vote.” “Women shouldn’t work.” How prevalent are these beliefs? According to a recent United Nations campaign, such sexism is dispiritingly common — which is why they published these sentiments on a series of posters. The source? These statements were the top suggestions offered by Google’s “instant” search tool when the words “Women shouldn’t…” were typed into its search box.

Google Instant is an “autocomplete” service — which, as the name suggests, automatically suggests letters and words to complete a query, based on the company’s knowledge of billions of searches performed each day. If I enter the words “women should,” the number one suggestion on my own screen is “women shoulder bags,” followed by the distinctly more depressing “women should be seen and not heard.” If I type “men should”, the enigmatic phrase “men should weep” pops up.

The argument behind the UN campaign is that this algorithm offers a glimpse into our collective psyche — and a disturbing one at that — based on an impartial insight into what people type most frequently. End of story. Is this true? Not quite in the sense that the posters imply. Autocomplete is biased and deficient in many ways, and there are dangers ahead if we forget that. In fact, there is a good case that you should switch it off entirely.

“… like any other search algorithm, autocomplete blends a secret sauce of data points beneath its effortless interface.”

As with many of the world’s most successful technologies, the mark of autocomplete’s success is how little we notice it. The better it’s working, the more seamlessly its anticipations fit in with our expectations — to the point where it’s most noticeable when something doesn’t have this feature, or when Google suddenly stops anticipating our needs.

The irony is that the more effort is expended on elaborate calculation and configuration behind the scenes, the more unvarnished and truthful the results feel to users. Knowing what “everyone” thinks about any particular issue or question simply means starting to type, and watching the answer write itself ahead of our tapping fingers.

Yet, like any other search algorithm, autocomplete blends a secret sauce of data points beneath its effortless interface. Your language, location and timing are all major factors in results, as are measures of impact and engagement — not to mention your own browsing history and the “freshness” of any topic. In other words, what autocomplete feeds you is not the full picture, but what Google anticipates you want. It’s not about mere truth; it’s about “relevance”.
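To make that blending concrete, here is a minimal, entirely hypothetical sketch in Python of how a suggestion engine might fold popularity, freshness, personal history and a block-list into a single “relevance” score. Every name, signal and weight below is invented for illustration; Google’s real formula is not public.

```python
# Hypothetical illustration only: a toy suggestion ranker that blends
# several signals into one "relevance" score. None of these signals or
# weights are Google's; they are stand-ins to show the principle.

from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    global_frequency: float   # how often everyone searches this (0..1)
    freshness: float          # how recently the topic has spiked (0..1)
    personal_affinity: float  # overlap with this user's own history (0..1)
    blocked: bool = False     # on a "potentially inappropriate" list?


def relevance(c: Candidate) -> float:
    # Invented weights: popularity matters, but so do freshness and
    # personalisation, which is why two users can see different lists.
    return 0.5 * c.global_frequency + 0.2 * c.freshness + 0.3 * c.personal_affinity


def suggest(prefix: str, candidates: list[Candidate], limit: int = 4) -> list[str]:
    matches = [
        c for c in candidates
        if c.text.startswith(prefix) and not c.blocked  # suppressed terms never appear
    ]
    matches.sort(key=relevance, reverse=True)
    return [c.text for c in matches[:limit]]


if __name__ == "__main__":
    pool = [
        Candidate("women shoulder bags", 0.4, 0.3, 0.9),
        Candidate("women should be seen and not heard", 0.7, 0.1, 0.0),
        Candidate("women should vote", 0.5, 0.2, 0.1),
    ]
    print(suggest("women should", pool))
```

Change the weights or the personal numbers and the ranking shifts, which is the whole point: the list reflects a tuned model of you, not a neutral tally of what humanity types.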

This is before you get on to censorship. Understandably, Google suppresses terms likely to encourage illegality or materials unsuitable for all users, together with numerous formulations relating to areas like racial and religious hatred. The company’s list of “potentially inappropriate search queries” is constantly updated.

“Google has built a system that claims to know not only my desires, but humanity itself, better than I could ever manage…”

None of this should be news to savvy web users. Yet many reactions to the UN Women campaign suggest, to me, a reliance on algorithmic expertise that borders on blind faith. “The ads are shocking,” explained one of the copywriters behind them, “because they show just how far we still have to go to achieve gender equality.” The implication is that these results are alarming precisely because they are impartial: unambiguous evidence of prejudice on a global scale. Yet, while the aim of the campaign is sound, the evidence is far less straightforward.

Perhaps the greatest danger is the degree to which an instantaneous answer-generator has the power not only to reflect but also to remould what the world believes — and to do so beneath the level of conscious debate. Autocomplete is coming to be seen as a form of prophecy, complete with a self-fulfilling invitation to click and agree. Yet by letting an algorithm finish our thoughts, we contribute to a feedback loop that potentially reinforces received ideas, untruths and misconceptions for future searchers.

Consider the case of a Japanese man who, earlier this year, typed his name into Google and discovered autocomplete associating him with criminal acts. He won a court case compelling the company to modify the results. The Japanese case echoed a previous instance in Australia where, effectively, the autocomplete algorithm was judged to be guilty of libel after it suggested the word “bankrupt” be appended to a doctor’s name. And there are plenty of other examples to pick from.

So far as Google engineers are concerned, these are mere blips in the data. What they are offering is ever-improving efficiency: a collaboration between humans and machines that saves time, eliminates errors and frustrations, and enriches our lives with its constant trickle of data. All of which is true — and none the less disturbing for all that.

As the company’s help page puts it, “even when you don’t know exactly what you’re looking for, predictions help guide your search.” Google has built a system that claims to know not only my desires, but humanity itself, better than I could ever manage — and it gently rams the fact down my throat every time I start to type.

Even if I ignore everything it advises, it’s impossible to “unsee” the shifting litany of suggestions generated by every character I enter, a fact that some have turned into a poetry of inferred connections well beyond mere logic or social comment.

Image: a poem assembled from autocomplete suggestions, via Google Poetics (http://www.googlepoetics.com/post/66197839799/www-googlepoetics-com#notes)

Did you know you can turn autocomplete off just by changing one setting? I’d recommend you give it a try, if only to perform a simple test: does having a computer whispering in your ear change the way you think about the world? By all means switch it on again afterwards — but don’t pretend you’re doing your thinking alone.

A version of this piece first appeared as my fortnightly Life:Connected column for the BBC.
