On Google Duplex, Fake News, AI & Ethics

Jason Herndon
Silicon Slopes
May 30, 2018

Last week, Google unveiled Duplex and showed that its Assistant will soon schedule hair appointments, make reservations, and check business hours for you. Additional, enticing skills are sure to follow, like booking doctor's appointments, ordering food, and so on.

In building Duplex, Google replicated human conversation so closely that the people interacting with it had no idea they were talking to a robot. It's far from passing the Turing test, but it is a big leap forward for voice. For some, though, it's also a big leap backward in ethics.

There have been a number of thoughtful articles exposing the specific concerns that a concept like Duplex raises. Mainly: people interacting with something that sounds human but isn't poses ethical problems.

There are implicit and real ethical issues around having a robot sound like a human. But I don't believe that, philosophically, Duplex presents anything different from what the internet, or even more analog forms of publishing, have already presented as a moral quandary.

For instance, you’re reading this article online. Pixels have been arranged to express thoughts that you, reader, assume accurately convey my original thoughts. Dare I mention the Facebook Cambridge Analytica scandal with regard to the potential influence on the election? While that fiasco has led to widespread belief that social channels like Facebook have serious ethical questions to answer about the content they surface, literally no one is blaming the tech itself.

Does Duplex present ethical questions? Absolutely. Should we let that deter us from running into the future with optimism? Absolutely not.

We use technology to communicate. Saying that "Google shouldn't have invented the ability to reproduce human speech for use in communication with other people so that I don't have to" seems to me to be tantamount to saying that "Gutenberg shouldn't have invented the printing press so that I don't have to talk to every potential reader directly."

We don't debate the idea that books are intrinsically unethical as mediums of expression — so why should we debate it about voice? Intrinsically, like any other technology, it is a medium that serves as an extension of ourselves.

In his book The Secular City, Harvard divinity professor Harvey Cox referenced a study that asked apartment tenants in a city, "Do you know your neighbor?" Most tenants did not in fact know their neighbors — something those conducting the study saw negatively. Cox noted that the expectation that one should know one's neighbors was outdated. In a modern age, with its ease of transportation, people form friendships based on factors other than geographic proximity.

Cox argued that treating neighbors, shopkeepers, and salesmen in such transactional ways isn't inhumane, but rather a social norm brought to us by modern technology. Sometimes we don't want human interaction. And that's okay. Remember when Starbucks launched its "Race Together" campaign encouraging baristas to start conversations about race with customers? Much of the public response boiled down to "I just want coffee, not a conversation."

Duplex sounds like a human, so it's understandable that people have concerns about the ethical implications. But not every human interaction needs to be anything more than transactional. That position isn't unethical; it's a new norm.

Conversational interfaces push new norms onto how we communicate. People are right to be concerned that Duplex may cause problems, but any medium can be exploited. And since the conversation has turned to ethics, I'd like to go ahead and complicate things.

What’s the ethical implication of not pursuing AI as time-freeing assistance?

I'm not even talking about the kind of AI that writers like Kurzweil and de Garis debate will be either our salvation or our doom as a species. I'm talking about the kind of AI that makes simple computations and tasks easier. The kind of AI that makes self-driving cars a reality, that powers advances in farming and food production, and that creates the kind of efficiencies that reshape economic drivers and shift power in positive, democratic ways.

Do we have an ethical responsibility to the millions who die in car accidents each year? Do we have an ethical responsibility to the millions who are hungry around the world? Do we have a responsibility to mankind itself to not create the kind of world that is so digitally consuming, and yet fundamentally non-human, that we kill ourselves from the stress of it all?

I think we do.

I say, let Google solve self-driving cars. Let Apple automate my day. Let big tech figure out how to make my life easier. And yes, please, while you do that, be clear about my data, consent, privacy, and all of our interactions. 'Til then — no, I don't mind if Google makes an appointment for me so I have time to focus on what matters. I'm not very precious about my salon appointments.


VP of Technical Innovation at RAIN in Lehi, UT. Leader. Developer. Writer. Minimalist. Storyteller. Backpacker.