ICTC’s Tech & Human Rights Series

Robots & Robot Ethics

A Conversation with Dr. AJung Moon

ICTC-CTIC

--

Original Interview Took Place April 22, 2020

Recently, ICTC spoke with Dr. AJung Moon, an experimental roboticist. Dr. Moon is currently an Assistant Professor in the Department of Electrical & Computer Engineering at McGill University, where she investigates how robots and AI systems influence the way people move, behave, and make decisions, in order to inform how we can design and deploy such autonomous intelligent systems more responsibly. She also has a background in start-ups and in advising organizations such as the UN. In this conversation, part of ICTC’s Tech & Human Rights Series, ICTC’s Kiera Schuller and Dr. Moon discuss robot ethics, AI ethics, and lessons from the international arena.

Photo by Science in HD on Unsplash

Kiera: Thank you so much for joining me, Dr. Moon! I appreciate your time. To begin, can you tell me a little about your background and how you ended up where you are today?

Dr. Moon: Sure. I’m an experimental roboticist by training, but I come from a multitude of different backgrounds. I worked as a Senior Advisor for an expert task force put together by the UN Secretary-General, I used to run a startup, and now I am a professor in the Electrical and Computer Engineering department at McGill. My research primarily focuses on how to design robots that can interact with people not only safely but with ethics in mind. I also look at how we can design algorithmic systems that consider different societal and stakeholder values, and how to incorporate things such as transparency and fairness into the design and deployment processes of these systems. So I’m very interdisciplinary; I cross borders from ethics to robotics to AI to the governance and policy sides as well.

Kiera: May I ask, did you plan to go into academia? What was the pathway going from a startup to the UN to academia?

Dr. Moon: No, I did not plan to be in academia at all. I was just very curious about this whole idea of ethics and robotics, even back when I was studying robotics as an undergrad at Waterloo. I stumbled across professors at UBC who were looking at ethics and robotics, so I followed them there to start my graduate education. At UBC, I started a non-profit think tank here in Canada, now called the Open Roboethics Institute, which got me really engaged in discussions around global policy and regulation of autonomous weapons systems. This was just around the time that discussions on lethal autonomous weapons systems had begun at the UN under the Convention on Certain Conventional Weapons (CCW), so I actually got to go to the UN and spend time with activists and policymakers there. Seeing the policy discussions, I thought, “We need to make these conversations more practical and approachable for the technical folks” (because I come from a technical background myself). I also thought, “We need to help businesses who are rolling out these technologies,” and that was how I began the startup. Even in a high-level policy context, I discovered real challenges, because we still need a lot more research to establish the scientific evidence necessary to inform policy decisions on technology. Ultimately, that drew me back to academia. Now I look at what impacts we can talk about concretely — qualitatively and quantitatively — and what kinds of policy decisions will make sense, based on those facts, for the future.

Kiera: Could you briefly explain what ‘robots’ are in your work, and what these robots are used for?

Dr. Moon: Of course. So, I don’t work with terminators or robots that are purposefully built to kill. By robots, I’m talking about embodied, physical objects that are able to sense something about our physical environment, process and compute something about the signal they have sensed, and then act within that physical environment to change it. Some people say that bots on web browsers — the algorithmic things that automate specific functions — are robots as well, but I am specifically focused on the physical domain when I talk about robots.
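
To make the sense-process-act description concrete, here is a minimal, purely illustrative sketch of that loop in Python. The class, sensor, and motor names are hypothetical placeholders, not any specific platform’s API.

```python
# A minimal, purely illustrative sketch of the sense-process-act loop that
# distinguishes a physical robot from a software bot. All class, sensor, and
# motor names here are hypothetical placeholders, not a real robot's API.

class ToyRobot:
    def __init__(self, distance_sensor, wheel_motors):
        self.distance_sensor = distance_sensor  # senses the physical environment
        self.wheel_motors = wheel_motors        # acts on the physical environment

    def step(self):
        # 1. Sense: read something about the physical world.
        distance_m = self.distance_sensor.read()

        # 2. Process/compute: decide what to do with the sensed signal.
        speed = 0.0 if distance_m < 0.5 else 0.2  # stop near obstacles

        # 3. Act: change the physical environment (here, by moving through it).
        self.wheel_motors.set_speed(speed)
```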

Kiera: One topic you research is human-robot interaction, such as human-robot collaboration, nonverbal communication, and human-robot negotiation using motions/gestures. Could you talk a bit about how robots and humans interact? What are some major questions that you are working on?

Dr. Moon: If anyone has visited an automotive factory within the past few decades, they’re likely familiar with those huge robotic arms that perform repetitive functions and can be “on” 24/7. In a way, I work with those types of robotic arms, but on a much smaller scale and in much more physically safe interactions. The idea is to work with industrial robots that are designed to safely interact physically with people, so that you don’t need the safety curtains that manufacturing facilities typically have. Essentially, it means that you can envision a person assembling a particular part with a robot, both holding onto the same object at the same time. I also look at robots that are a little more human-like: they might have two arms or a head-like feature with cameras, and/or they can move across the floor.

In human-robot interaction, we look at questions about designing robots to better interact with us, such as: How do you get a robot to pick up a water bottle and hand it over to a person in a safe and clear manner? When a robot hands you something, it should be very clear when you are supposed to take it from the robot. Between humans, this seems trivial because we pick up on each other’s gaze cues, the ways we move our hands, and so on, to figure out these details of everyday tasks. But for robots, we have to program every single feature.
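
As an illustration of what programming “every single feature” of an interaction can look like, here is a small, hypothetical sketch of a robot-to-human handover written as an explicit state machine. The cues, thresholds, and function names are assumptions made for illustration, not Dr. Moon’s actual implementation.

```python
# Hypothetical sketch: a robot-to-human object handover as an explicit state
# machine. Cues that humans exchange implicitly (gaze, reaching, grip force)
# must each be sensed, thresholded, and sequenced by the designer.

from enum import Enum, auto

class HandoverState(Enum):
    REACH_OUT = auto()      # extend the object toward the person
    SIGNAL_READY = auto()   # hold still and orient toward the person to signal their turn
    RELEASE = auto()        # let go once the person has taken the load
    DONE = auto()

def handover_step(state, person_is_looking, pull_force_newtons):
    """Advance the handover one step based on explicitly programmed cues."""
    if state == HandoverState.REACH_OUT:
        return HandoverState.SIGNAL_READY
    if state == HandoverState.SIGNAL_READY:
        # Only offer release once the person is attending to the object.
        if person_is_looking:
            return HandoverState.RELEASE
        return state
    if state == HandoverState.RELEASE:
        # Release only when the person is actually pulling on the object,
        # so it is never dropped. The 2 N threshold is an arbitrary example.
        if pull_force_newtons > 2.0:
            return HandoverState.DONE
        return state
    return HandoverState.DONE
```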

Kiera: Another area you research is robot ethics. Can you tell me about some of the main ethical issues related to the robots you work on? Which issues concern or interest you most?

Dr. Moon: Previously, I was really interested in the question of what kinds of new ethical dilemmas arise as we replace or supplement human work with robots. For example, as part of the Open Roboethics Institute, we ran a study to see whether ownership of a “care” robot should make a difference in the way the robot treats its user. We presented a scenario where an alcoholic person is discharged from the hospital and has a “care robot” at home to help them fetch things. According to the doctor, the patient is not supposed to have alcohol, but still she orders the robot to fetch her a drink. Should the robot give the person the drink? If this were about a human nurse, we wouldn’t even think about what the response would be. When we conducted a poll on the care robot scenario, however, we found that if the user was the owner of the robot, then people thought that the robot should indeed fetch the person the alcoholic drink. But if it was owned by someone else (e.g., the hospital owned it and was renting it to her, or a family member of the user had bought it for her), then people said that the robot should not fetch the drink. So it is really interesting to think about how these small decisions that seem so trivial in human interactions, and which we take for granted, become morally challenging with robots, because we need to program them into their systems.
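
To illustrate why such a judgement call has to be made explicit in software, here is a small, hypothetical sketch of an ownership-dependent fetch policy. The rule simply mirrors the poll result described above; it is not taken from the actual study, and it is not an endorsement of either design choice.

```python
# Hypothetical sketch: the ownership-dependent "fetch a drink" policy from the
# care-robot poll, written as an explicit rule. The point is that a designer
# must commit to some policy in code; the robot cannot fall back on human
# intuition the way a nurse would.

def should_fetch_alcohol(user_owns_robot: bool, doctor_advises_against: bool) -> bool:
    if not doctor_advises_against:
        return True  # no medical concern; comply with the request
    # Poll result described above: respondents deferred to the user's request
    # only when the user personally owned the robot.
    return user_owns_robot

# Example: hospital-owned robot, doctor advises against alcohol -> do not fetch.
print(should_fetch_alcohol(user_owns_robot=False, doctor_advises_against=True))  # False
```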

In addition to those kinds of specific decision-based ethical issues, I’m also considering bigger moral questions like: what kind of robot influence on people is harmful? If we accept interactive robots as agents of influence — for instance, if we accept the fact that robots can nudge you to buy something or influence the way you sit or talk — what kinds of manipulation or influence should be deemed negative? What kinds of manipulation might we not desire as a society? In contrast, on the positive end of the spectrum, there could also be positive nudges toward healthy behaviour. You can imagine how a robot could be used, for example, to remind people to wash their hands more often during a pandemic. So there is a moral spectrum that we are diving into with regard to these systems, and we are conducting empirical studies to shed light on those kinds of issues.

Kiera: Is it possible to address these ethical and moral issues through technological design? For example, is it possible to translate ethical principles into the design, deployment, or operational decisions? Or to embed human values into predictive data-driven algorithms? How would that work?

Dr. Moon: Typically, when you hear about embedding human values into robotic systems, people tend to imagine that we simply program machines so that they know what fairness means and act fairly, but that is not really what happens in reality. When we design these kinds of systems, we human technologists have to do the tough work of translating which values are important for the specific application of a robotic system and then converting that into design decisions. Addressing the ethics issues is much more about informing those kinds of design decisions than about building machines that do the ethics calculus for us. For example, for the care robot we just spoke about, should you design a system that prioritizes ownership, or one that values the wellbeing of the end user regardless of ownership? Those are value-laden judgement calls that we, as designers, need to make. And they are easier to make when we are able to discover scientific facts that can inform policies and allow such decisions to be made in a way that aligns with our social values.

Kiera: Does your work intersect with discussions about AI? What AI-related ethics conversations are most present in your work?

Dr. Moon: When we talk about AI today, we are usually talking about machine learning systems, or systems that take massive amounts of data and then predict something about the future state of the world based on that data. When we talk about ethical issues in AI, we are typically worried about the link between the data that a particular system is trained on and what it spews out at the end. Is the output of that particular system fair? For example, there was a study by ProPublica that looked at how the judicial system in the United States had been using a particular algorithm to inform judges of the likelihood that a particular person would reoffend and return to the system. The study demonstrated that there was racial bias at play in the output. Beyond issues of discrimination, there are also problems of transparency, which we talk about a lot as well. An end user, or a recipient of the outcome of an AI algorithm, may be led to believe that AI is a very intelligent system that is able to make better predictions than they can, without really understanding how the system is designed. If they are not an AI expert, they can’t intelligibly challenge what the AI is telling them or attempt to figure out what may be wrong with the system when it fails. So, when we talk about transparency, we mean the need to both (a) be able to audit a system for quality, and (b) be able to communicate to a user what exactly is happening and how a particular output was produced, so that the user can make a better decision about what to do with that information.
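
As one concrete illustration of what auditing a system for quality can involve, here is a minimal sketch that compares false positive rates of a binary risk prediction across groups, the kind of disparity the ProPublica analysis reported. The records, group labels, and function are made up for illustration, not drawn from any real dataset or tool.

```python
# Minimal, illustrative fairness audit: compare false positive rates of a
# binary risk prediction across groups. The records below are made-up
# examples; a real audit would use the deployed model's actual predictions
# and observed outcomes.

from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended)."""
    fp = defaultdict(int)   # predicted high risk but did not reoffend
    neg = defaultdict(int)  # everyone who did not reoffend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if predicted_high_risk:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

sample = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
# A large gap between groups is one signal that the system's errors fall
# unevenly and that the output warrants closer scrutiny.
print(false_positive_rate_by_group(sample))
```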

Kiera: Over the next five, 10, or 20 years, how do you see these ethical issues changing or evolving? Do you think there will be solutions, or reasons that these issues might be resolved or disappear?

Dr. Moon: In 10–20 years — in my head, a long time — I do think we will have a much more ethical technology-driven society. My vision is to train future engineers today to think about these human values (e.g., fairness, human rights, dignity), which are not often taught in engineering schools, so that by the time the engineering students graduate, five years down the road, they are able to join the workforce and think through value-based and ethical issues related to their particular design decisions. This way, 10 years from now, if that particular cohort of students has designed a product that has been in service for a few years, we can look back and say, “They designed a product that avoided a whole lot of potential problems because we were able to, en masse, educate students, who are now engineers, to be capable of making these beneficial design decisions.”

Kiera: What is the status at the moment with regard to teaching ethics in engineering programs in Canada?

Dr. Moon: In Canada, because we have a culture of professional engineers’ associations, teaching engineering ethics at universities has been the norm for many years, even when I was an undergraduate student. However, the kind of ethics taught in those contexts is primarily professional ethics, rather than thinking through the normative implications of the things you design, or being able to inform your design decisions using the language of values. So, I think ethics is being taught and has been taught for many years in engineering schools, but the scope and kind of ethics that is being taught needs to be broadened.

Kiera: Looking toward solutions, we’ve discussed the role of engineers in making design decisions, but what are your thoughts on the roles of other actors: policymakers, lawyers, philosophers, or any others? How do you see the distribution of responsibilities and solutions for these ethical issues?

Dr. Moon: I really take seriously the view that engineering is a team sport. Engineers are taught to be problem solvers who are good at looking at a problem and coming up with a technical solution for it, but not all problems in the world are meant to be solved technically. Engineers love to solve problems, and because engineering is the expertise we have at hand, we tend to come up with technical solutions. But we also need to be able to work with users, the public, policymakers, and legal scholars in order to think through the best approach to a societal problem we want to address. We need to be able to walk through the solution space broadly, focusing not only on whatever technology we know how to make simply because we are experts in a particular piece of technology, but rather on what society may want as a whole.

I also think that policymakers have a huge role to play, because the way we shape our innovation policy — the decisions we make at the federal, provincial, and municipal levels to fund specific projects — has an impact on what kinds of research are done and what kinds of startups and businesses are funded. This is a strong motivator for innovators to shape their innovation accordingly in order to be successful, and it really shapes the kind of technology we can expect to see down the road. Ultimately, there is definitely a role for all these different stakeholders, and the important thing is to make the connections between them stronger.

Kiera: In the international realm of tech policy and ethics, you are also an Executive Committee Member of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and a Panel Member of the International Panel on the Regulation of Autonomous Weapons (IPRAW). Can you tell me about these international efforts to address ethical considerations and regulation of autonomous systems? How effective are these efforts?

Dr. Moon: The IEEE, in particular, is a really interesting case of how, at the international level, different expert groups and the public are coming together to do something about the issues of ethics and technology. The IEEE started The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems a few years ago, gathering all of the people who are interested in talking about ethical issues around autonomous systems to collectively come up with recommendations to address these issues. Within a couple of years, we were actually able to pull together over 700 experts from across the globe to work together on a huge document called Ethically Aligned Design. The document ended up being very detailed, was translated into various languages including Japanese and Korean, and is still one of the go-to guides for people looking into this topic area. But even more, while it started simply as a document of guidelines that we felt the need to write, it ended up creating this huge network of people who were then able to reach out to each other and to other experts around the world to have deeper discussions on these specific issues. This community is still strong and growing.

I wanted to point this out because, when we think about regulation at the international level, it is really hard for us to build a mechanism of governance that is as rigid and structured as what we have at the national level, for example, with the federal government of Canada. We can’t envision an equivalent at the global level — something so coordinated that it can move as fast as technologies evolve today. But I think the IEEE provides an interesting example of how soft forms of governance can be powerful in the absence of hard mechanisms for governance. Overall, governance is a big issue right now — a lot of actors want to talk about governance in AI — but many of the solutions are challenging, and we are leaning toward soft forms of governance for this reason. Soft governance can be powerful, and it is something worth supporting.

Kiera: On a final note, what projects, initiatives or innovations are you most excited to be working on in the next little while?

Dr. Moon: Well, COVID-19 is a hot topic right now, for very good reasons. The IEEE Global Initiative put together a statement to help designers of technological solutions for this particular global issue consider ethically aligned design of their systems. It’s about how we think through human rights and other human values, and what that means for designing technologies to manage COVID-19. As part of the Open Roboethics Institute, as well as part of the start-up work that I’ve done, we have created a toolkit that anyone can use for free, called the Foresight into AI Ethics toolkit. It walks designers through how to think through different values and ethics in the design process of any kind of algorithmic system. I’m very excited to do more work on this particular toolkit, to see if we can improve it, as well as to look into how we can start to measure better design, in the ethical sense. How do we measure whether these toolkits move us in a positive direction? Can we measure or objectively make statements of that kind as we produce more of these guidelines, toolkits, and recommendations in the future? I’ll be doing more case studies and looking for partners in this domain.

Kiera: Thank you very much for your time! It was a pleasure to speak with you.

Dr. AJung Moon is an experimental roboticist. She is currently an Assistant Professor in the Department of Electrical & Computer Engineering at McGill University, where she investigates how robots and AI systems influence the way people move, behave, and make decisions in order to inform how we can design and deploy such autonomous intelligent systems more responsibly. Dr. Moon is also the Founder and Director of Open Roboethics Institute (ORI), an Associate Member of the McGill Centre for Intelligent Machines (CIM), and an Associate Member of Mila. In the realm of tech policy and ethics, she is a Member of the Government of Canada Advisory Council on Artificial Intelligence, an Executive Committee Member of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and a Panel Member of the International Panel on the Regulation of Autonomous Weapons (IPRAW).
Kiera Schuller is a Research & Policy Analyst at ICTC with a background in human rights and global governance. Kiera holds an MSc in Global Governance from the University of Oxford. She launched ICTC’s Human Rights Series in 2020 to explore the emerging ethical and human rights implications of new technologies such as AI and robotics in Canada and globally, particularly on issues such as privacy, equality, and freedom of expression.

ICTC’s Tech & Human Rights Series:

Our Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications of new technologies such as AI for a variety of issues like equality, privacy, and freedom of expression, whether positive, neutral, or negative. The series also explores questions of governance, participation, and uses of technology for social good.

--


Information and Communications Technology Council (ICTC) - Conseil des technologies de l’information et des communications (CTIC)