We need to talk about robots

Jarryd Daymond
Published in Openfield
12 min read · Jul 8, 2019
Robot planet (source: Stefan Keller).

Introduction

How do you have a national conversation? This was a question I found myself grappling with last year when I was part of an education project in New Zealand. We were trying to ignite an engaging and meaningful dialogue to lay the foundation for a better and more inclusive education system for future generations of New Zealanders. I was now standing in Christchurch’s cavernous Horncastle Arena along with 750 people, cloaked in darkness apart from six white Pouwhenua — carved wooden posts used by Māori, the indigenous people of New Zealand — which hung from the ceiling above the crowd, looming and luminous. The darkness was punctured by stark voice recordings that filled the arena with insights from the future — AI was coming, jobs were changing, education was evolving. I hoped this citizen Summit would start a national conversation, but I had my doubts.

Addressing the future of education is a broad and ambitious endeavour — where do you start? We needed to break the main conversation down into “manageable chunks” that people could engage with, so we identified six holistic sub-topics that would create different entry points into the main education conversation, such as “Lifelong learning”, “Skills, competencies & behaviours”, “Enabling self-fulfilling lives” and “Thriving as a society”. Each sub-topic was hosted in its own space, configured as groups of eight people around coffee tables to allow for intimate conversations. For us, the journey of a national conversation would start with coffee table chats, each one a small fire in the darkness but significant in helping spark an education revolution.

The Summits taught me many lessons that have sparked my thinking on other topics, and here I explore how those lessons might apply in the context of the proliferation of artificial intelligence and robots. Specifically, I suggest that we need to exercise our moral agency and collectively engage in new approaches to explore hard conversations about what the robot society will hold.

The robots are coming; the robots are welcome?

The robots and AI are coming. (Although they are distinct phenomena, I use robots and AI interchangeably because both are able to “act” such that they can lessen the need for humans to act, which is central to my later argument.) At a recent sales event, the CEO of Google declared that we are entering the “age of AI”. Jeff Bezos of Amazon went further and called it a “renaissance” and a “golden age”. The robots are coming, and some people, particularly those poised to benefit, are excited about how it will transform a range of industries, from self-driving cars to healthcare diagnostics and targeted treatments to flying drones and advanced fulfilment centres to… [insert any conceivable industry]. And the claims go on. “AI will be your physician. AI will be your financial advisor. AI will be your teacher and that of your children. AI will be your fashion designer. AI will be your chef. AI will be your entertainer. And more…” AI will even deliver higher-order achievements, such as democratising access to goods and services. According to many people, then, it seems that the robots are coming, and they are most welcome.

Boris Karloff as Frankenstein’s monster (1931).

But for all those excited about the coming age of AI, there are also many commentators taking a more cautionary tone regarding robots and AI. This combination of excitement and caution is unsurprising — it has been present for as long as humans have dreamed of creating human-like actors. Indeed, this year is witnessing interpretations, film screenings, orchestral performances, all-day readings and other celebratory tributes commemorating the bicentennial of Mary Shelley’s Frankenstein, which memorably challenged audiences regarding the dangerous pursuit of divine-ish knowledge.

Karel Capek’s play R.U.R. (1920) first coined the word “robots”.

Similarly, Karel Capek’s play R.U.R., which first introduced the word “robots” into the world lexicon, warned of a struggle between human creators and their robotic creations. The theme of struggle and threat continues to permeate the discourse around robots and, now, AI. For example, Elon Musk has labelled AI an “existential threat” to humanity. Likewise, the late Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race.” Professor Genevieve Bell, in her brilliant 2017 Boyer Lectures, concludes that:

“The idea of raising a machine in our likeness is a lasting human preoccupation, but it seems the notion of things coming to consciousness is also riven through with anxiety.”

The age of AI is not all excitement; it seems some people are anxious about what it will mean to be human in a robot society.

Killer robots aside, what are you worried about?

Anxiety over the robot revolution is multidimensional. At its most extreme, there are fears of killer robots — the uprising, overthrowing, existentially-threatening dimension that could put an end to the human race. I have yet to personally meet someone espousing this view. A more commonly held anxiety relates to whether robots and AI will take our jobs. Debate abounds as to whether total jobs will increase or decrease, but irrespective of the net number of jobs created or destroyed, we can safely assume that the impact of AI, automation and robots will be experienced on an enormous scale by the workforce. Research by the McKinsey Global Institute suggests that by 2030, between 75 million and 375 million people globally may need to find a new type of job. Even for those who do find new jobs, having to switch occupational categories can bring anxiety and stress. However, this is not an anxiety I am interested in discussing in this essay. My anxiety is to do with how living in a robot society might subtly, but fundamentally, diminish our humanness.

Among other things, the reality of robots brings with it a raft of moral and ethical considerations. These considerations are noticeable even in the few examples I have already mentioned in this essay: Frankenstein challenged the appropriateness of humans “playing God” and pursuing god-like knowledge without restraint; the play R.U.R. confronts us with a new form of modern-day slavery (the General Manager of the factory producing the robots in the play says, “I wanted to turn the whole of mankind into an aristocracy of the world … nourished by millions of mechanical slaves.”); and the prospect of a robot-precipitated post-work world raises ethical and moral questions relating to equality and fairness, as well as to the significance of work as a human endeavour. Even away from these topics and examples, which lend themselves to blunt Marxist critiques, there are issues which touch more subtly on ethical considerations. Professor Genevieve Bell claims:

“AI is clearly more than just a technology or a set of technical affordances… It is also an assemblage of cultural and technical things, and human agendas. It was fuelled by ideas about our humanness and our capacities, and of course our biases and flaws too.”

A former Vice President at Intel, Professor Bell is an anthropologist specialising in understanding the intersection of cultural practice and technology development. It is little wonder, then, that she draws attention to the biases in the algorithms which are the building blocks of AI. It is revealing that tech companies, including Google, have recently employed ethicists, and there have been enough recent global scandals to alert us to the potential nefariousness that can accompany artificial intelligence. On a different note, there have been legal cases on animal rights in the U.S. which could set a precedent for the potential legal status of robots should they acquire sentience — talk about a Pandora’s box of ethical dilemmas! To put it lightly, the ethics accompanying robots are not insignificant.

These are some of the ethical issues we need to consider in relation to robots. However, my most pressing anxiety about living in a robot society is that it might diminish our humanness by suppressing our moral agency. People are moral agents because we are capable of taking moral ownership of, and responsibility for, our agency, which is our ability to acquire and process information to develop and pursue goals. John Danaher argues that the rise of robots will reduce our ability and willingness to act in the world as responsible moral agents (I am indebted to Danaher for the way his work has shaped my thinking on this topic). As our moral agency wanes, our moral patiency will wax. Danaher describes a moral patient as “a being upon whom well-being (and other valuable states) are bestowed, but which does not (or cannot) take an interest in the autonomous formulation and pursuit of its own moral goals.” The argument is that the rise of the robots, and their ability to perform actions on behalf of people, will lead to a decline of human moral agency and a rise of moral patiency. Why does this matter? Isn’t it good if robots can do more and we can do less? Why does moral agency matter?

John Danaher suggests that all this fuss over robots, actions and agency matters because moral agency is a foundational value of our civilisation. The prominence of moral agency in Western moral and political philosophy is evidenced in three ways. First, moral agency has its roots in the virtue ethics of the Ancient Greeks. Second, moral agency is a significant bulwark against unacceptable coercive interference in the lives of citizens of liberal democratic states. Finally, the moral progress of modern history has been predicated on recognition and acceptance of the moral agency of all people, regardless of race, gender, religion, sexuality and so on. Danaher argues that if people’s capacity to exercise moral agency is eroded, then a value central to the moral progress and identity of modern civilisation is undermined.

Wall-E replica (source: Ravi Shah).

Pop culture and science fiction provide hyperbolic and satirical illustrations of how robots might diminish the moral agency of humans. For example, the Pixar film Wall-E presents humans as increasingly sluggish and ineffectual in the context of highly capable robots. They become “passive recipients of the benefits that technology bestows, not active agents changing the world in which they live.” Danaher concludes:

“Technology won’t rob us of our status as moral agents, merely suppress it. If we think agency is an important value, and we want to protect the value structure of contemporary civilization, we need to exercise it now.”

Put differently, the apparent benefits of robots might be detrimental to our humanity in a subtle but fundamental way. In isolated or small instances, a decrease in moral agency is not significant. However, if the rise of robots leads to large-scale increases in moral patiency, then this is something that requires reasoned debate. I believe that a robot society will increase the importance of preserving human agency, and I have some ideas about how to exercise our agency and counter moral patiency.

Talking is a good way to stay human in a robot society

Some of my ideas about how to exercise our agency and counter moral patiency have been informed, in part, by my experience of the national conversation and education Summits in New Zealand. Drawing on the wisdom of Peter Drucker and principles suggested by Genevieve Bell, I have made sense of that experience to suggest an approach to staying human in a robot society. Professor Bell, in her Boyer Lectures, also hoped to spark a national discussion — in her case, about how we should responsibly thrive in a “smart, fast and connected” digital world. She suggests four things: build new approaches, invest in hard conversations, strive for accountability, and make our own future. The Summits showed me that genuine, citizen-fuelled, participatory processes are an excellent way to realise Professor Bell’s suggestions, which I now discuss.

Building new approaches

Our approach to the Summits was to co-design them with diverse stakeholder groups and a design council which played a governance role for the project. The co-design process allowed us to “build a new approach” to a national education conversation. It also helped focus, iterate and validate the design of the project. The approach was important given the premise of the Summits was to engage a wide cross-section of society to co-design the future of education for New Zealand. This diversity of perspectives was equally critical throughout the preparation, to ensure we designed a respectful and engaging process for all. Management guru Peter Drucker resolutely believed in the potential of people, and he understood that effective managers get things done through people. In the same way, governments of healthy societies get things done through citizens. Extending Drucker’s human-centred preoccupation to a societal level is to emphasise the need for citizens to inform the makeup of our institutions and the services they deliver.

Investing in hard conversations

The Summit participants found discussing the future of New Zealand’s education system to be both a challenging and rewarding experience. The whole process represented a significant investment of time, money, emotion and energy in a difficult conversation. During the two Summits, thousands of discussions happened in small, diverse groups, with participants coming to the conversation from many different places. Yet, together we ensured the dialogue was authentic, inclusive and valuable, and that the Summits created a safe place to explore frustrations as well as hopes and dreams. The most enduring lesson I learnt from the Summits was that people thrived in a context which was human-centred. My fear as to whether we would start a national conversation gave way to the realisation that just being included in the conversation was empowering; people embraced being given a voice on a topic so close to their hearts. It made me reflect that we — even those of us who live in democracies — are not often afforded the chance to “have our say” about our institutions, communities and society more broadly in a meaningful way and through respectful forums. Providing opportunities for citizens to genuinely engage in participatory decision-making forums amplifies our moral agency — we are no longer slipping into moral patiency when we seek to understand societal issues, respectfully debate perspectives with a range of people, and collectively try to take a reasoned, moral position on an issue.

Striving for accountability

By their nature, participatory forums and decision-making introduce an element of accountability into public debate. Accountability is even more important in relation to the veiled world of AI and algorithms. As Bell puts it, “there’s little dignity in a life that’s shaped by algorithms, in which you have no say, and into which you have no visibility.” We should strive for accountability and demand it of our leaders who, according to Drucker, “are responsible and accountable for the performance of their institutions, and … the community as a whole.” By hosting the most inclusive and diverse education conversation ever held in New Zealand, the Summits reframed the conversation: it shifted from being a public servants’ problem to a problem of shared responsibility in a participatory, citizen-driven conversation. Reframing the problem this way both increased the accountability of the education public servants and extended accountability to society more broadly.

Conclusion: Making our own future in a robot society

Professor Bell’s final recommended action is to make our own future, which for her refers to an Australian effort to craft a unique approach to the smart, fast and connected digital world we face. I conclude this essay by advocating that we, the people of today, should exercise our moral agency and collectively engage in new approaches to explore hard conversations about the imminent robot society. These participatory, public processes will be important forums for exploring the ethical issues accompanying the rise of robots. In exploring ethical issues, we must draw on the “knowledge and insights of the humanities and the social sciences” which Peter Drucker advocated for managers — those of psychology and philosophy, economics and history, the physical sciences and ethics. Putting ethical considerations under a public microscope will bring greater accountability to the murky realm of artificial intelligence and the sometimes-nefarious algorithms and biases underpinning AI. Finally, by seeking to make our own future in this way, we will help stave off the crisis of moral patiency which I believe will accompany the robot society. What better example of exercising moral agency is there than a large, diverse and inclusive group of people coming together to engage in a genuine dialogue about robots and their ethical implications?

Education Summit, Auckland (May 2018).

Jarryd Daymond
Openfield

I research and design innovation and collaborative practices. My work transcends corporate, academic and entrepreneurial domains.