Responsible Robotics and AI: at the table with Daniel McDuff, Joanna Bryson and Olivier Guilhem

HEART Team
Mar 30, 2021

“When you try to put a new drug on the market, you have to do a risk assessment; this obliges you to question yourself about the positive and negative effects. I think we should push in that direction and consider that AI is not just simple software, it’s also an active substance” — Olivier Guilhem

On 12th February 2021, our HEART team at Softbank Robotics Europe hosted the webinar Responsible Robotics & AI: Concrete Solutions. With a diverse panel of experts, we discussed the most pressing ethical issues in robotics and AI, as well as the solutions being considered, both at the legal and governmental level and from the technical point of view of engineers and scientists.

Our first speaker, Dr. Daniel McDuff, is a Principal Researcher at Microsoft Research. Previously, he was Director of Research at the well-known MIT Media Lab spin-out Affectiva, after receiving his PhD from the MIT Media Lab.

Dr. Daniel McDuff

When asked about the greatest ethical challenges in the field of affective computing, Daniel responded: “There are many challenges that come up when thinking about how affective computing algorithms could be deployed. My biggest concern at the moment is that affective signals, signals about how people express themselves, will be used as a shortcut in supposedly intelligent systems to try to characterize people in a way that isn’t accurate, equitable or responsible. This is particularly problematic if it is used to restrict opportunities or disadvantage people, which is a general problem when referring decision problems to machines.”

We also hosted Dr. Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin. Her current research focuses on the impacts of technology on human societies and on new models of governance for AI and digital technology. She is a founding member of the Hertie School’s Centre for Digital Governance and one of nine experts nominated by Germany to the Global Partnership on AI (GPAI), where she also co-chairs the governance committee.

Dr. Joanna Bryson

As part of that initiative, Joanna joined the research group Responsible Development and Use of AI: “Well, they called it Responsible AI, and the first thing we did at the first meeting was change the name to Responsible Development and Use of AI, to make it clear that there’s no way to enforce responsibility on a machine or to penalize it in a way that would alter its behavior if it did something wrong. So the idea of responsibility does not belong with AI itself but rather it is always the responsibility of either the developers or, hopefully if it’s been correctly developed, of the owner operators.”

The third speaker of our webinar was Olivier Guilhem, Legal and Risk Director at Softbank Robotics Europe for the past nine years. He is a co-author of three books about our relationship with AI and robots and the legal and ethical challenges those technologies raise. Olivier is also a member of the ethics committee of Softbank Robotics Europe.

Olivier Guilhem

Olivier discussed the challenges facing a robotics company like Softbank Robotics Europe. When asked about the biggest current challenge in his work, he pointed to product liability: “Regarding product liability, in today’s European law there is no difference between AI, a robot or any other good such as a blender. So the interactivity, the learning capacity and the decision-making autonomy are absolutely not considered today from a legal point of view.”

On that matter, Olivier reminded us that product liability should also extend to the user. Joanna added that “we can’t allow people to get out of their obligations to society; what you want to motivate is having a system that you can maintain, that is clean, that is clear, that you know is the safest thing you could make”. Another important point is having laws that consider not only the physical damage a product might cause but also liability for the risks related to data and for the environmental impact of developing or using our digital systems.

Other challenges Olivier described include the social acceptance of the technology, for robots like Nao and Pepper in particular, though his insight also applies to other AI-driven products.

“We are developing humanoid robots and we all have imagination and fantasy in regards to those products, so we are all imagining what the humanoid robot “could do” and when you start developing it you face reality and you quickly realize that reality and fantasy are not really matching.” — Olivier Guilhem

From a technical perspective, Daniel sees digital technology as a great opportunity to ensure transparency and fairness in decision-making systems. As software is increasingly used to make consequential real-world decisions, in healthcare, hiring, university admissions, loan approval, crime prediction and many other areas, it is essential to protect data privacy and to minimize implicit bias. One way of doing the latter is to use computer simulations to interrogate and diagnose biases within ML classifiers.
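To make this idea concrete, here is a minimal, hypothetical sketch of simulation-based bias diagnosis. It is not the method from Daniel’s research, just an illustration of the general principle: a simulator lets you generate controlled data for different groups and interrogate a trained classifier for performance gaps. The simulate() helper and all its parameters are invented for this example.

```python
# A minimal, hypothetical sketch of simulation-based bias diagnosis.
# Not the method from Daniel's research -- just the general idea:
# generate controllable synthetic data and interrogate a trained
# classifier for per-group performance gaps.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def simulate(group, n=2000):
    """Simulate labeled samples whose feature distribution shifts per group."""
    shift = 0.0 if group == "A" else 1.5  # group B's features are shifted
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    # The true labeling rule also depends on the group.
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > 5 * shift).astype(int)
    return X, y

# Train on group A only -- a common source of bias in deployed systems.
X_train, y_train = simulate("A")
clf = LogisticRegression().fit(X_train, y_train)

# Interrogate the classifier with fresh simulated data from each group.
for group in ("A", "B"):
    X_test, y_test = simulate(group)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"group {group}: accuracy = {acc:.3f}")
```

Running the sketch reveals a large accuracy gap between the two simulated groups, exactly the kind of disparity this sort of interrogation is meant to surface before a system is deployed.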

Also, in collaboration with both academia and industry, Daniel is currently working on a responsible AI licensing initiative, where licenses are proposed as a useful enforcement tool in situations where legislating AI usage is difficult or time-consuming. By adopting such licenses, AI developers would also signal to the AI community, as well as to governmental bodies, that they are taking responsibility for their technologies.

“Our initiative is really just to address what we see as a potential problem where in science we want to be able to distribute things so that people can build on what we do, and in computer science in particular there’s been in many ways a good tradition of being very open but we’re recognizing now that it comes with challenges… We want to allow people who may not have all of the legal resources of large companies or governments to limit how other people can use what they build, if they’re not sure about the negative implications.” — Dr. Daniel McDuff

To discover more of what Daniel, Joanna and Olivier shared with us, watch the webinar replay on our YouTube channel; the Q&A session at the end was particularly interesting.

“There is some misconception that AI makes things opaque, which I think is one of the things people get nervous about, and this doesn’t have to be the case. It is a digital technology and we now have an opportunity to make things more transparent than they’ve ever been. Some people used to ‘hide behind clouds’, saying ‘oh, nobody can understand this’. But we’re making it clearer now that, even if you don’t understand every little weight in a neural network, that’s not what understanding means: it’s about making sure that people are delivering their software in a safe way and that they’ve done adequate testing, and that should be auditable. And it doesn’t have to be open source; it just means that there ought to be a way to go through and see, and to be able to defend yourself from liability by showing that you’ve done your work correctly and keeping records.” — Dr. Joanna Bryson

We would also love to hear from you: if you have any questions, or if you would like to be a speaker at our next webinar, contact us at sbre-heart@softbankrobotics.com.

People behind HEART: Marine Chamoux (Software Manager), Miriam Bilac (R&D Software Engineer), Dr. Susana Sánchez Restrepo (R&D Robotics Engineer) @ Softbank Robotics Europe

Robots behind HEART: Nao & Pepper


HEART Team

Humanizing tEchnology And Robot Talks @SoftbankRoboticsEurope: a way to keep AI & robotics enthusiasts up to date!