How to Implement Responsible AI

David Steenmeijer
Auraidata
8 min read · Mar 22, 2023
A futuristic interpretation of the Creation of Adam (courtesy of Tara Winstead)

TL;DR: To minimize the risk of unintended consequences and to align with company values, AI should be implemented responsibly. This can be done by formulating principles and creating a blacklist of applications or uses of AI that an organization will never pursue. By adopting a responsible AI approach, a company can ensure that its algorithms are beneficial not only to the company itself, but also to society, and that they do not reinforce unfair bias or harm marginalized groups. From a business perspective, this lowers the risk of unintended consequences, which in turn lowers the risk of financial penalties. This is especially relevant in the European Union, given the GDPR and the upcoming AI Act.

AI ain’t that bad

First things first. Artificial Intelligence (AI) is gaining more ground every day. And rightly so. It makes our lives easier and the financial benefits for companies are clear. As a result, algorithms are implemented on a massive scale. In other words, the AI train has left the station and no one will stop it.

We could endlessly discuss the harms of AI, or how it should be regulated. But the truth is that those measures would not stop the rapid development and implementation of new AIs. Therefore, it’s more constructive to come up with ways to deploy AIs more responsibly.

This blog gives concrete examples of the first steps any company working with AI can take to deploy its algorithms more responsibly and to lower the risk of unintended consequences. Whether you work as a consultant, a data scientist or a data engineer, these tips are a good starting point for thinking about the impact your data projects have on different stakeholders.

Courtesy of Tara Winstead

Unintended consequences

There are numerous examples of AIs ‘gone rogue’. Take Google’s image classifier that labeled Black people as “gorillas”. This very graphic and painful example is just the tip of the iceberg: according to several researchers, many algorithms unintentionally punish the poor and other marginalized groups.

Unintended consequences are potentially harmful and are a liability for businesses. Therefore, we must ensure that AI is implemented appropriately, accounting for these possible consequences. This blog gives hands-on tips to minimize the risk of unintended consequences of your AIs.

So how can we leverage the benefits of AI without doing harm?

Introducing: Responsible AI

The answer lies in Responsible AI: a discipline that focuses on the ethical and societal implications of AI. It helps ensure that algorithms are designed and implemented in a way that minimizes the risk of negative outcomes. This includes considering the potential consequences of AI on various groups and taking steps to prevent negative effects.


Simply put, this is done in three ways:

  1. Considering the potential consequences of AI;
  2. Taking active steps to prevent possible negative consequences, such as marginalization, bias and exclusion;
  3. Using ethical standards and principles to regularly check your AIs.

This is especially crucial when an AI makes a decision that has real consequences for a person, for instance when assessing loan or mortgage applications, setting insurance rates, or estimating recidivism risk.

Overall, the goal of responsible AI is to help ensure that AI is used in a way that is beneficial to society and does not harm individuals or groups.
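To make the third point, regularly checking your AIs, a bit more concrete, below is a minimal sketch of what a recurring fairness check on a loan-approval model could look like. It is purely illustrative: the column names, the groups and the 5% tolerance are assumptions, not a standard, and a real audit needs metrics chosen for your specific context.

```python
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """Share of good applicants (actual == 0) that were wrongly flagged."""
    negatives = df[df["actual"] == 0]
    return float((negatives["predicted"] == 1).mean()) if len(negatives) else 0.0

def audit_by_group(df: pd.DataFrame, group_col: str, max_gap: float = 0.05) -> None:
    """Compare false positive rates across groups and warn when the gap is large."""
    rates = {name: false_positive_rate(grp) for name, grp in df.groupby(group_col)}
    for name, rate in rates.items():
        print(f"group {name}: false positive rate = {rate:.0%}")
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        print(f"Warning: gap of {gap:.0%} between groups exceeds the allowed {max_gap:.0%}")

# Made-up predictions for two hypothetical applicant groups
data = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":    [0,   0,   1,   0,   0,   0,   1,   0],
    "predicted": [0,   1,   1,   0,   1,   1,   1,   0],
})
audit_by_group(data, "group")
```

In practice you would run such a check against a held-out validation set at every retraining, and decide up front which gap is acceptable and who is accountable for acting on a warning.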

It all starts with creating awareness about unintended consequences and how to prevent them. For companies, there are several strategies to minimize the risk of unintended consequences of their AIs. Let’s discuss two of them that any company can implement at any moment, at practically zero cost:

  1. Formulate principles
  2. Blacklist applications you won’t pursue

Formulate principles

Together with your team, you can discuss which principles should be followed while designing and implementing your AIs.

This contributes to more Responsible AI in two ways: firstly, the resulting list can be used as a directive to hold each other accountable. Secondly, it opens up the discussion about what a good AI should be and what it should not be.

It helps your Data Scientists and Machine Learning Engineers think about the possible negative consequences of their work. This awareness makes them examine their own work more critically, which in turn helps prevent unintended outcomes.

For example, Google has listed the following principles:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards for scientific excellence.
  7. Be made available for uses that accord with these principles.

Some may argue that this is ethics washing by Google, but that does not mean we cannot learn from them. Take these principles as a starting point for your discussion about Responsible AI. Why would you consider these principles? What do they really mean? Ask yourself: who benefits from your AIs? Who does not? Or worse, who is negatively impacted?

Courtesy of Tara Winstead

Using the Moral Machine Experiment to design your own moral principles

If you really want to dive into it, you can take different ethical perspectives into account. A classic thought experiment put into practice is the Moral Machine Experiment: respondents judge what a self-driving car should do in a situation with inevitable misery, like the one in the image below. They must decide whether the car drives straight, killing the pedestrians crossing the street, or swerves, killing the passengers.

A screenshot from the Moral Machine Experiment (via moralmachine.net)

People respond very differently to the question of what decision the self-driving car should take. In other words, there is no simple answer to moral decisions. Therefore, it is important to think critically when it comes to automating moral decisions.

There are many ethical theories that I won’t bore you with today. The key takeaway here is this: machines make moral decisions all the time, whether it is a self-driving car deciding to kill some people over others based on how many they are, their age, whether they are crossing illegally (note the red light in the image), or some other trait.

Ethical perspectives help you approach a problem from different angles. In the case of the Moral Machine Experiment, some would argue that you should not ‘pull the switch’ and kill people who would survive if you did not interfere. (These people would be good friends with Immanuel Kant.)

Others would argue that you should maximize the number of people who survive the situation, or prioritize saving the young over the old. (Like the old chaps Jeremy Bentham or John Stuart Mill.)

The point of these discussions is not to prove who is right, but to realize that different moral decisions have different consequences for various groups of people.

Situations like the Moral Machine Experiment are not everyday examples. However, there are numerous practical examples of moral decisions made in Machine Learning. For instance, when creating a fraud detector, it is a moral decision whether to optimize for precision or recall: do you want to detect all fraud, with the unavoidable collateral damage of innocent people who are unfairly labeled and treated as fraudsters? You probably don’t. So where is the cut-off point? How many false positives (innocent people labeled as fraudsters) are justified for every false negative (fraudsters labeled as innocent)?

In short, there is always a trade-off when optimizing a model, and that trade-off is a moral decision. Even when collateral damage is unavoidable, it had better be well justified, or things get nasty rather quickly.
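To illustrate this trade-off in code, the sketch below uses scikit-learn’s precision_recall_curve on made-up fraud scores to show how every choice of decision threshold strikes a different balance between innocent people flagged and fraudsters missed. The numbers are fabricated for illustration; in practice the scores and labels would come from your own model and validation data.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Made-up validation data: 1 = actual fraud, scores are the model's fraud probabilities
y_true   = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
y_scores = np.array([0.10, 0.20, 0.25, 0.40, 0.55, 0.70,
                     0.45, 0.60, 0.80, 0.90])

precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

# Each threshold is a different moral trade-off: a higher threshold flags fewer
# innocent people (higher precision) but lets more fraud slip through (lower recall).
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold {t:.2f}: precision {p:.2f}, recall {r:.2f}")
```

Whichever point on that curve you ship, you have made a moral decision; writing it down and revisiting it regularly is part of what responsible AI asks of you.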

Developers need to realize that many algorithms make moral decisions, especially when their models interact with humans.

The Blacklist

The first approach was to formulate a list of principles your AI should adhere to. But you can also work the other way around and create a list of things your AI should NOT do.

Let’s stick with Google’s example here. They proclaim that they won’t pursue AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

It is obvious that this blacklist was created following several public outcries around their AIs, for instance the employee protests against working on Project Maven. Additionally, Shoshana Zuboff would probably argue that their third declaration is at the core of their business model (but that is a story for another time).

However, no matter how cynical people’s views of these principles or blacklists might be, they are good conversation starters. It is important to start the dialogue at your company to gain insight into the way AIs are designed and implemented. This could really prevent potential harm.

The risks of irresponsible AI

Responsible AI is often regarded as a money burner in organizations. But it’s important to note that responsible AI is not just a matter of ethics; it can also have financial implications. The European Union, for example, imposes fines of up to 4% of a company’s annual global turnover for noncompliance with the GDPR (and soon the AI Act). By implementing responsible AI practices, you can help protect your company from potential liabilities and reputational damage.

Conclusion

If your company uses AIs (and it probably does), it is important to start a conversation about how these are created and who is affected by them. By formulating AI principles and/or a blacklist, you open up a dialogue on which values your company finds important, and it could prevent unintended consequences. Responsible AI must be built in during the design and implementation phases of data projects, but it is also worthwhile to audit existing AI systems regularly.

Since the Responsible AI discipline is still relatively new, there are few standards on how to execute it. It is a journey that we discover together.

Aurai provides custom data solutions that help companies gain insights into their data. We engineer your company’s future through simplifying, organizing and automating data. Your time is maximized by receiving the automated knowledge effortlessly and enacting better processes on a foundation of relevant, reliable, and durable information. Interested in what Aurai can mean for your organisation? Don’t hesitate to contact us!
