Fourkind’s thoughts on the ethical use of artificial intelligence

Max Pagels
The Hands-on Advisors
4 min read · Aug 10, 2018

Ethics in artificial intelligence has become a hot topic as of late, with individuals, corporations, non-profits and even governments establishing guidelines and manifestos for the responsible use of AI. And rightly so; as AI continues to grow in popularity and permeate all aspects of society, it’s clear that we need to be cognisant of the change we’re bringing about. In this post, we’d like to clarify Fourkind’s stance and guidelines on the responsible use of AI: guidelines that reflect our thoughts in general whilst remaining honest about the realities of AI development. Let’s start with the latter.

Biased algorithms

You can’t talk about ethics in AI without bringing up bias. Bias has many definitions depending on context, but in terms of social bias or prejudice, there’s no such thing as a biased algorithm. For symbolic artificial intelligence, the way an algorithm works depends on who made it; for machine learning, it depends on the data it learned from. Algorithms may work in a way that is seen as biased, but in and of themselves, they have no understanding of the concept. Social bias as a result of how an algorithm behaves stems from us humans.

Biased data

In a statistical sense, bias is an everyday occurrence. If we want to estimate approval ratings in an election, a slight mistake during the collection or treatment of our sample might lead to a result systematically different from the true population parameter. But the data itself is just, well, data. It’s not socially biased, it’s just a bunch of numbers. A dataset needs to be constructed carefully to avoid inducing social bias, but it’s not biased in and of itself. That’s a distinction worth keeping in mind as we develop AI systems responsibly.
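To make the statistical point concrete, here is a minimal sketch (not from the original post; the population figures and "landline owners" subgroup are invented for illustration) showing how the sampling procedure, rather than the numbers themselves, produces an estimate that deviates systematically from the true population parameter:

```python
import random

random.seed(42)

# Hypothetical population: 1,000,000 voters, 40% of whom approve of a candidate.
population = [1] * 400_000 + [0] * 600_000
true_approval = sum(population) / len(population)  # 0.40

# Unbiased estimate: a simple random sample of the whole population.
random_sample = random.sample(population, 1_000)
unbiased_estimate = sum(random_sample) / len(random_sample)

# Biased estimate: suppose our collection method only reaches a subgroup
# (say, landline owners) in which approval happens to be higher.
reachable_subgroup = [1] * 550 + [0] * 450  # 55% approval within the subgroup
biased_sample = random.choices(reachable_subgroup, k=1_000)
biased_estimate = sum(biased_sample) / len(biased_sample)

print(f"true approval:     {true_approval:.2f}")
print(f"unbiased estimate: {unbiased_estimate:.2f}")  # hovers around 0.40
print(f"biased estimate:   {biased_estimate:.2f}")    # systematically near 0.55
```

The individual responses in both samples are "just numbers"; the systematic error comes from how the sample was collected, which is exactly where careful dataset construction matters.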

Discrimination

Discrimination is everywhere. If you go to buy clothes and ask an employee for recommendations, the answer they’ll give will likely depend on your appearance, gender, and any other information you provide. One size doesn’t fit all, and in most cases, we don’t want it to. The same applies to the vast majority of artificial intelligence: we want it to act differently based on the information it’s given. We want a discriminative model. At the same time, we want to behave in a way that is socially responsible, ethical and law-abiding. The onus falls on us to make something that is both ethically sound and beneficial, and to straddle the line between the things we should discriminate on and the things we shouldn’t.

Subjectivity

Social bias is subjective. What’s socially acceptable varies based on who you ask, where in the world you are, and who you interact with. It evolves over time. Given the “softness” of the concept, the only way to act responsibly is to understand what we, as individuals and as a modern society, deem as inherently wrong, and do our utmost to avoid developing systems that act unfairly. Using software to help check and verify is ok, but in the end, we are responsible for our work. There is no oracle to ask except our own moral compass.

Our guidelines

At Fourkind, the guidelines we have for the ethical use of AI serve not only as an internal tool, but also as a public stance. We’ve tried to avoid overly general or misleading statements that gloss over the realities of making AI. The guidelines below represent our common values, honesty included.

#1: To the extent that it is possible, avoid creating AI systems that reinforce commonly understood social biases, contravene human rights, violate human dignity, or otherwise breach your own moral code

If you encounter and/or are asked to do something you find unethical, raise the issue immediately internally and with your client. Never sacrifice your own values for added model accuracy, financial gain, or fear of failure.

#2: Follow all applicable laws, directives and mutually agreed-upon best practices

This also applies to things like internal and client-company privacy guidelines/policies and GDPR. If you are unsure about interpretation, raise the issue with your client or internally, depending on the circumstance. Note that in some cases, such as laws regarding minors, you may have to enforce discrimination to behave ethically.

#3: Hold yourself accountable

If you develop an AI system, take ownership of it. Be honest about the way it works, the type of data used to train it, and address issues in a timely fashion. Don’t place the blame on others or try to hide mistakes. Inform clients immediately if something goes wrong.

#4: Be as transparent as you can

The inner workings of some AI models can be explained easily; others are more of a black box. Sometimes laws and regulations affect the choice of learning algorithm. In all cases, make sure to document the data and process you use when developing AI. Transparency also applies to end-users: make sure that the use of AI is documented, and offer a non-AI alternative solution not only where required, but also where feasible.

#5: Use your own judgement

At Fourkind, we go to great lengths to find the best people we can; people who we trust to make sound decisions not only collectively, but also as individuals. Don’t be afraid to speak up and err on the side of caution when you feel something isn’t quite right.

As the use of AI proliferates further, it’s possible — even probable — that Fourkind will refuse a project on ethical grounds (as of August 2018, we have yet to do so). This is more likely as we enter an era of high automation, where the temptation to compete at the expense of fairness could become more common than it is today.
