My concerns about Google Duplex go deeper than the ethics of machines masquerading as humans. The underlying rationale for the project appears to reflect a worldview that I think deserves to be questioned. Google’s explanations of Duplex’s purpose and value seem to contain a bias about the very meaning of why we are alive.
According to the Google AI blog, Duplex is “a system for accomplishing tasks”. The Google Assistant blog says Duplex’s purpose is to “help you get things done to give you time back”. Why do we need time back? So we can get more and better things done.
Behind these descriptions lies the model that life is about doing things. The more things you can do the better. The more you can outsource “wasteful” things, the more you can focus on “valuable” things. I fear that this model reflects an inaccurate, and impoverished, view of life.
Google seems to assume the only reason one calls a service provider is to “get something done” (e.g., schedule an appointment). They miss the relational aspect. Maintaining a personal relationship with my dentist, for example, makes it more likely I will get help when I need it with things like insurance issues or appointment rescheduling.
Beyond help, however, it also makes my life, as well as that of the person on the other end of the phone, more meaningful and satisfying. We don’t just exchange information or complete transactions. It’s not an API call. We actually talk to each other. Part of the point of talking to each other is just that. We feel more alive as a result of having engaged with other humans. In fact, according to Abeba Birhane, the whole Cartesian notion that humans exist independently from each other is wrong.
At the dawn of the information age, there was a debate about the meaning of “information”. Was it something you can extract, encode, and pass around? Or did it acquire meaning only in the relationship between the parties exchanging it? For better or worse, the former definition won out. Google appears to be encoding the victorious definition into our understanding of ourselves.
In “The Human Use of Human Beings”, Norbert Wiener counseled against creating equivalencies between human and computerized activities. When we do, he warned, the human will always lose. A computer can always do better at anything that’s been defined as a computational task.
Redefining human activities as computational ones is the very definition of usurping humanity. Google seems to be defining the essence of being human as the efficiency with which we can exchange information and complete tasks. Ironically, they have developed a system that relieves us of having to complete tasks, but only so that we can complete more of them! It’s life as a JIRA project.
We live in a moment of anxiety about our own creations. AI, social media, autonomous vehicles, all seem to be playing out a little differently from the ways in which we intended them. We’re being forced to reflect on the nature of making things. How do we ensure goodness, and not just efficiency? We need to let go of the belief that we entirely control our creations. We need to create space beyond constructing/controlling/accomplishing. We need mindfulness in order to see what people really need, as opposed to what we want them to need. To build truly beneficial things, we need to start by not-building.
This imperative makes redefining life as “getting things done” problematic. Being, breathing, seeing, relating, empathizing, and listening are all critical to living a decent life and participating in a decent world. None of them, however, can be chopped up into user stories or sound-bites, however human-sounding. The things we do and the information we exchange must live within them, not the other way around.