Sundar Pichai during the Duplex demo at Google I/O 2018 — (Credit: 9to5Google)

Google Duplex, technologically amazing, ethically complex

An ethical debate that we should start to have right now

Joseph Emmi
May 31, 2018 · 3 min read

At this point you might already have seen the video of the live demo from the latest Google I/O, or at least heard something about Duplex, Google Assistant’s newest feature, whose main highlight is its ability not only to graciously take care of mundane activities like making phone calls for appointments or dinner reservations, but to do so while sounding so incredibly human that the person on the receiving end is unable to tell they are talking to a machine.

The way the Assistant fully understands context and reacts to nuances in real time is both incredible and unprecedented. It is also worrying.

You probably also have an opinion about it, maybe a view of the possibilities and the colossal technological achievement this represents. That’s one side.

On the other, you might have some views or concerns about what this means and its foreseeable implications, and that’s probably where a lot of the conversations are heading, including this one right now.

Basically, Google Duplex is indisputably amazing, but it also immediately opens the door to a very complex ethical debate that we should start to have right now.

Why?

  1. Fake news
  2. Scams
  3. Identity theft

Just to name a few off the top of my head.

In an era where it is becoming increasingly difficult to differentiate what’s true and what’s not, a technology like Duplex can only make things more difficult.

  • How are we going to treat these types of interactions?
  • Are we going to be warned in advance that we are dealing with a machine instead of a human?
  • Who is going to be responsible for the actions taken by the Assistant? Is it Google, or the owner of the device?

Hypothetical cases

What if…

What if this type of technology could be used to replicate a family member’s voice, or even your own, and in this way obtain incredibly sensitive information, or persuade someone to take an action or make a decision?

What if it could be used to replicate the voice of a country’s president, making fake statements that could trigger conflicts in another part of the planet? Or even worse, how would we know whether to believe a public figure who denies such statements, given possible precedents where this technology was misused?

Getting to the truth could be difficult or take time, and in some cases it could come too late.

A great summary provided by Bridget Carey from CNET

The whole point of this is not to seem overly dramatic or catastrophic, as I truly believe in the real capabilities and benefits this technology can bring and the advancement it represents; it is to realise that this technology, which until a few weeks ago we thought was years away, is here and will soon be rolling out to the real world.

Even if it’s to a limited user base, that makes it necessary to pay attention to the concerns, fears and questions many are echoing, because not taking the possible negative consequences into consideration would not only be naive, it would be irresponsible.

Remember, Facebook was just an inoffensive social media platform to connect with your friends and family, right?

What could go wrong?

This article is part of my #100DayProject #100DaysofWriting — Day 59 of 100

