Evan Selinger
Published in Arc Digital · Sep 13, 2016


Do you find yourself overwhelmed by information overload, buried under e-mails, social media posts, and texts awaiting replies? The folks at Google feel your pain, and they’re offering a curious solution.

Let artificial intelligence be your ghostwriter.

Google’s Allo

Google’s new messaging app, Allo, examines the content of incoming correspondence — including photos — and suggests quick replies to send back. Say your friend sends over a photo of her kid on a swing. Allo might suggest “so cute!” or “swing, baby, swing!”. Click on one of the suggestions and, voilà, you’ve preserved social relations without the extra emotional workload of figuring out what the heck to say.

Sounds like magical mind-reading, right? How else could technology adequately suggest how individuals with distinctive personalities might reply?

Google believes the coached responses Allo provides can look like credible personal gestures. By developing algorithms that data mine our past conversations in order to identify patterns within them, Google expects Allo to successfully predict what reactions we would have plausibly made on our own, had we taken the time to personally respond. As the Google blog states, “The more you use Allo the more ‘you’ the suggestions will become.” This doubling seems like the realized fantasy of cloning yourself to get more done.

This is where Silicon Valley is headed these days.

They’re dressing up their AI as personalized digital assistants poised to ease our social lives. After all, if AI can guide a car automatically through the streets, leaving us time to read a novel while we commute to work, why can’t it do the same with our social lives? By automating our responses to friends and family — in a way that preserves the personal style characteristic of the responses we would organically leave anyway — we would gain more time to read books, watch movies, and get on with our lives.

Think of this innovation, we’re told, as an egalitarian gesture. The democratization of the personal assistant: now all of us, not just the wealthy, can have access to servants whose function is to free us up to enjoy more meaningful pursuits.

But this is where the danger lies. It turns out that putting our social interactions on autopilot can leach away the very things that make us human to begin with. At the extreme, using AI to make our social lives more “efficient” just might corrode them beyond recognition.

To appreciate why, let’s consider delegated communication as a form of outsourcing.

Outsourcing means more than transferring U.S. jobs to employees in other countries who are cheaper to hire. It’s an activity that’s fundamental to our humanity — a way to cope with having limited knowledge, skills, and resources. To both survive and thrive, labor needs to be divided and we must constantly give all kinds of personal and professional tasks to third parties.

Although outsourcing is inevitable, not all outsourcing is good for us. In What Money Can’t Buy, Harvard ethicist Michael Sandel gives a great example of what many of us would consider an improper instance.

Sandel asks us to imagine someone delivering a moving best man speech who hides the fact that he outsourced its writing — purchasing the text online from a service that excels in generating poignant prose. Even if the toast was born of good intentions, a genuine desire to deliver a memorable and moving presentation that makes everyone happy, a problem remains: the groom’s best friend, a highly trusted confidant, passed off a commodity as something else. Deceptively, he presented another’s work as heartfelt sentiment that came to mind after deep soul-searching. The lack of authenticity strikes most of us as appalling, which is why the best man wouldn’t open his speech honestly by revealing its origins.

Sandel’s example retains its force even if the enchanting speech was created by a machine rather than a human-run online service, and the problem doesn’t go away when we consider outsourcing communication in more mundane, everyday uses. To get a better sense of the main problems with using Allo and related smart communication products, it helps to consider their features in light of six basic existential characteristics that apply to all forms of outsourcing.

When we outsource, we accept assistance and this lessens how much personal effort we need to put into accomplishing our goals. As effort diminishes, passivity takes hold. At a certain level, we become lazy — more like passive spectators than active participants.

Since outsourcing decreases effort, it also reduces our experience of agency. By participating in less of a process than we would if we fully performed it ourselves, our behavior becomes less intentional and deliberate. Some measure of control is lost.

By abdicating control over a process, we become less responsible for how things turn out. We become less blameworthy (the ordinarily reliable GPS messed up the directions!), but also less entitled to feel proud of positive results. After all, they came about through collaboration, and the outsourcee deserves some of the credit.

Because outsourcing involves delegation, it can limit our understanding of how a process works. Even if proxies seem reliable, we don’t always know the full extent of how they work on our behalf. In the case of technology, outsourcing happens by translating our requests into algorithmic or mechanical processes that computers — which don’t always behave the way we do — can perform.

By diminishing our participation, outsourcing leads to some degree of detachment. The more intense the detachment, the less intimate an experience becomes. At the extreme, less intimacy means more alienation.

When outsourcing becomes habitual, we become dependent on a third party for getting stuff done. At the extreme, dependency can result in de-skilling. We can forget how to perform a task or become less capable of doing it. Or, we can lose the motivation to increase our knowledge and skills.

Outsourcing, then, doesn’t just affect how a task is completed. When deciding whether or not to outsource, we need to consider whether it’s worth losing agency, responsibility, control, intimacy, and possibly knowledge and skill. If it isn’t, we probably should do the activity ourselves.

These considerations shed light on why we should be wary of cyber-servants like Allo offering to speak on our behalf while making their conversational contributions invisible to the people we care most about — namely, our friends and family.

When we outsource communication to algorithms, we put less effort into thinking about what we want to say and what we ought to say — into choosing the right words while attending directly to our interlocutor’s intellectual and emotional needs. In taking a more passive approach to social engagement, one that checks intentionality and deliberation at the door, we abdicate control and potentially embrace lazy-speak we can no longer take responsibility for, even when it makes others happy.

This one-two punch can leave us less conscientious, both toward ourselves and toward the folks we care about.

We demean ourselves by accepting the premise that, at bottom, we are and always have been utterly predictable and repetitive creatures of habit. In so doing, we miss seeing how our deterministic behavior is the result of accepting help from limited processing systems. To work, they impose the burden on us of having to pretend that we’re naturally redundant, as opposed to being engineered, through translation, to behave like clichés. At the extreme, such a totalizing dependence on smart communication systems would lead to becoming socially de-skilled.

Others lose out because what algorithmic ventriloquism does is reduce conversation to hyper-efficient gestures that allow To Do list items (not human beings!) to be crossed out as quickly as possible. This detached and instrumental outlook can be alienating as it prevents real connection and true heartfelt exchanges from happening.

You might believe we’re making a mountain out of a molehill because no one is forced to listen to computer-generated conversational prompts given by software we choose to download. A libertarian, for example, might say that under these circumstances technology companies can’t hijack our communication. Good decision-making is all that resistance requires. And we’re each individually responsible for ignoring recommendations that are at odds with our core values.

This objection ignores two important things. Cognitive science has established that inertia has a powerful effect on human behavior. Given good enough prompts, many of us will be tempted to accept them — not least because once they are presented, rejecting them requires more effort than accepting them: deliberation and willpower come into play.

With so many people regularly lamenting correspondence fatigue as their inboxes threaten to swallow them whole, automated communication begins to look less like an interpersonal shortcut, and more like a highly appealing techno-fix that can solve a time-management problem caused by technological and social forces. Allo’s smart replies, passable as your own, become the offer you can’t refuse.

We’re not arguing that it’s always wrong to use smart communication software. Nor are we insisting that we should reject it, full stop. In some limited circumstances, the help can certainly be useful and, despite the downside, justifiable. But the more we get used to outsourcing our thoughts and feelings, the more we become like Sandel’s example of the sneaky best man whose fine prose actually diminishes the value of speaking.

Evan Selinger, philosophy professor at Rochester Institute of Technology, and Brett Frischmann, professor at Cardozo Law School, are co-writing Being Human in the 21st Century (Cambridge University Press, 2017).
