Why computers will become invisible
Our extraordinary intimacy with unseen technology
The author Douglas Adams once made a witty point about technology: the inventions we label “technologies” are simply those which haven’t yet become an invisible, effortless part of our lives.
“We no longer think of chairs as technology,” he argued. “But there was a time when we hadn’t worked out how many legs chairs should have, how tall they should be, and they would often ‘crash’ when we tried to use them. Before long, computers will be as trivial and plentiful as chairs… and we will cease to be aware of the things.”
Adams’s prediction was prescient. Computers have been such a prominent, dazzling force in our lives for the past few decades that it’s easy to forget that subsequent generations might not even consider them to be technology. Today, screens draw constant attention to themselves; our high-visibility machines are a demanding, delightful pit into which we pour our waking hours. Yet we are on the cusp of the moment when computing finally slips beneath our awareness.
Computer scientists have been predicting such a moment for decades. The phrase “ubiquitous computing” was coined at the Xerox Palo Alto Research Center in the late 1980s by the scientist Mark Weiser, and described a world in which computers would become what Weiser later termed “calm technologies”: unseen, silent servants, available everywhere and anywhere.
Although we may not think about it as such, computing capability of this kind has been a fact of life for several years. What we are only beginning to see, however, is a movement away from screens towards self-effacing rather than attention-hungry machines: towards technologies that will help shape our identities and actions as discreetly as the clothes on our backs.
Take Google Glass. Recent news stories have focused more on intrusion than invisibility. (There's even a new word, "Glassholes", for the kind of users who get kicked out of cafes.) Beyond the hand-wringing, though, Glass represents the tip of a rapidly emerging iceberg of devices that are "invisible" in the most literal sense: because a user's primary interface with them is not through looking at or typing onto a screen, but via speech, location and movement.
This category also includes everything from discreet smartwatches and fitness devices to voice-activated in-car services. Equally surreptitious is the rising number of "smart" environments — from shops and museums to cars and offices — that interface with smartphones and apps almost without us noticing, and offer enhancements ranging from streamlined payments to "knowing" our light, temperature and room preferences.
The consequences of all this will be profound. Consider what it means to have a primarily spoken rather than screen-based relationship with a computer. When you're speaking and listening rather than reading off a screen, you're not researching and comparing results, or selecting from a list — you're being given answers. Or, more precisely, you're being given one answer, customised to match not only your profile and preferences, but where you are, what you're doing, and who you're with.
Google researchers, for example, have long spoken about the idea of an "intelligent cloud" that answers your questions directly, adapted to match its increasingly intimate knowledge about you and everybody else. Where is the best restaurant nearby? How do I get there? Why should I buy that?
Our relationships with computers, in this context, may come to feel more like companionship than sitting down to "use" a device: a lifelong conversation with systems that know many things about us more intimately than mere people ever could.
Such invisibility raises several questions. If our computers provide such firm answers, but keep their workings and presence below our awareness, will we be too quick to trust the information that they provide — or too willing to take their models of the world for the real thing? As motorists already know to their cost, even a sat-nav's suggestions can be hopelessly wrong.
That's not to mention the potential for surveillance. More than a decade ago, critics of ubiquitous computing warned that it was "the feverish dream of spooks and spies — a bug in every object". Given this year's revelations about the NSA monitoring our communications, it was a prescient fear, and one that has had recent commentators reaching for that familiar adjective "Orwellian".
There are, of course, causes for celebration about this technology too: hopes for a world in which computers, like chairs, simply support us without draining a particle more of our attention or effort than required.
And anxiety can only take us so far. As Douglas Adams also put it, everything that already exists when you're born is just normal — while "anything that gets invented after you're 30 is against the natural order of things and the beginning of the end of civilisation as we know it." One generation's eyesore is another's barely glimpsed backdrop.
Yet, as computers slip ever further beneath our awareness, it is important that we continue to ask certain questions. What should unseen machines be permitted to hear and see of our own, and others', lives? Can we trust what they tell us? What will it mean for such tools to serve our best interests — and how will we switch them off?
A version of this piece first appeared as Tom’s fortnightly Life:Connected column for BBC Future