Do Companies Need a Chief AI-Ethics Officer?

Murat Durmus (CEO @AISOMA_AG)
Published in Nerd For Tech · Apr 4, 2021


A few quick thoughts on this.

The world we live in is becoming increasingly data-driven, which is prompting companies to make greater use of AI techniques such as machine learning and deep learning. These often seem to be the only "efficient" way to get control over the data and generate value for the company relatively quickly. Future competitiveness, of course, also plays a significant role.

The task of the Chief AI Ethics Officer (CAIEO) should not be primarily technical. Instead, it should be to sensitize data scientists, machine learning engineers, and developers to ethical issues. This process of sensitization should be part of every data-driven project. By this, I mean that the ethical workflow should be firmly integrated into the respective process models and phases.

The Ethical Workflow

The underlying paper. Highly recommended: Understanding artificial intelligence ethics and safety by Dr. David Leslie (The Alan Turing Institute)

In the long term, AI may lead to "breakthroughs" in numerous fields, from basic and applied science to medicine and advanced systems. Alongside this great promise, however, increasingly capable intelligent systems create significant ethical challenges. The issues discussed deal with impacts on human society, human psychology, the financial system, the legal system, the environment and the planet, and trust.

Listed below are some points that an AI-Ethics Officer should consider in their work (according to EPRS | European Parliamentary Research Service).

Social impacts: the potential impact of AI on the labour market and economy, and how different demographic groups might be affected. This addresses questions of inequality and the risk that AI will further concentrate power and wealth in the hands of the few. Issues related to privacy, human rights, and dignity are addressed, as are risks that AI will perpetuate the biases, intended or otherwise, of existing social systems or their creators. It also raises questions about the impact of AI technologies on democracy, suggesting that these technologies may operate for the benefit of state-controlled economies.

Psychological impacts: what impacts might arise from human-robot relationships? How might we address dependency and deception? Should we consider whether robots deserve to be given the status of ‘personhood’ and what are the legal and moral implications of doing so?

Financial system impacts: potential impacts of AI on financial systems are considered, including risks of manipulation and collusion and the need to build in accountability.

Legal system impacts: AI could affect the legal system in a number of ways. Questions arise around crime, such as liability when an AI is used for criminal activities, and the extent to which AI might support criminal activities such as drug trafficking. Where an AI is involved in personal injury, such as a collision involving an autonomous vehicle, questions arise around the legal basis for claims: whether it is a case of negligence, which is usually the basis for claims involving vehicular accidents, or of product liability.

Environmental impacts: increasing use of AIs comes with increased use of natural resources, increased energy demands and waste disposal issues. However, AIs could improve the way we manage waste and resources, leading to environmental benefits.

Impacts on trust: society relies on trust. For AI to take on tasks, such as surgery, the public will need to trust the technology. Trust includes aspects such as fairness (that AI will be impartial), transparency (that we will be able to understand how an AI arrived at a particular decision), accountability (someone can be held accountable for mistakes made by AI) and control (how we might ‘shut down’ an AI that becomes too powerful).

Main ethical and moral issues associated with the development and implementation of AI:

Main ethical and moral issues

In the following graphic, I have tried to show what a responsible machine learning workflow (including the ethical workflow) could look like.

Responsible ML Workflow

It is primarily about identifying & understanding ethical risks, and training managers & employees on how to do the same.
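To make this concrete, here is a minimal, purely illustrative sketch of how an ethical checklist could be embedded as a "gate" in an ML project's process model, so that a phase transition is blocked until the identified risks have been addressed. All names and checklist items are hypothetical examples, not part of any standard framework:

```python
# Hypothetical sketch: an "ethics gate" a CAIEO could embed into an
# ML project's workflow. Checklist items are illustrative only.
from dataclasses import dataclass, field

@dataclass
class EthicsChecklist:
    """Ethical risks a project must assess before a phase transition."""
    answers: dict = field(default_factory=dict)

    # Example risk items, loosely mirroring the impact areas above.
    REQUIRED = (
        "bias_assessed",           # social impact: data/model bias reviewed
        "privacy_reviewed",        # handling of personal data checked
        "accountability_named",    # a person is accountable for AI mistakes
        "transparency_documented", # decisions are explainable and documented
    )

    def record(self, item: str, done: bool) -> None:
        if item not in self.REQUIRED:
            raise ValueError(f"Unknown checklist item: {item}")
        self.answers[item] = done

    def open_items(self) -> list:
        """Items not yet marked as addressed."""
        return [i for i in self.REQUIRED if not self.answers.get(i, False)]

    def gate(self) -> bool:
        """True only if every required ethical risk has been addressed."""
        return not self.open_items()

checklist = EthicsChecklist()
checklist.record("bias_assessed", True)
checklist.record("privacy_reviewed", True)
checklist.record("accountability_named", True)
print(checklist.gate())   # transparency still open: gate stays closed
checklist.record("transparency_documented", True)
print(checklist.gate())   # all risks addressed: gate opens
```

The point of the sketch is the process, not the code: each phase of a data-driven project ends with an explicit, auditable check of the ethical risks, rather than leaving them to individual judgment.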

Chief AI-Ethics Officer: The Job of the Future?

Discussions about AI ethics are still mostly conducted in academic circles, but one can already see that many companies are engaging with the topic seriously. One thing seems clear to me:

Graduates of philosophy and ethics will be in high demand in the future to investigate AI-related processes through a human lens.

~ (MINDFUL AI)

(The text is an excerpt from my new book "MINDFUL AI — Reflections on Artificial Intelligence".)

NEW RELEASE — Available on Amazon: MINDFUL AI

