Sami Moustachir
2 min read · Sep 26, 2016

Following a seminar course on science and ethics, this series of papers, called Ethics in AI, will deal with the relations between robots, or intelligent systems, and humans. Check the next part here.

Introduction to ethics in Artificial Intelligence

Artificial intelligence has been taking up quite some space in our lives lately. All over the news, products are giving in to the siren call of AI. From self-driving cars to content curation in media, there are dozens of applications for machine intelligence, and they are dramatically improving critical sectors such as medicine, transportation, media and so on.

Overall, when we mention artificial intelligence, people have in mind a system able to make decisions based on some kind of intelligence. This intelligence often relies on complex mathematical algorithms, but a system greeting people based on sensors can already be considered “intelligent”. The more a system can handle specific and unexpected cases, the more it will be perceived as responsive as a human.

Thus, an AI is a decision-support system based on a predefined intelligence, which is the scheme or mapping leading to the so-called decision. But what makes a good AI? We could think of an AI making the right decisions, so the one with the best mapping. Then, we would need to figure out what “the right decision(s)” means.
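To make the idea of a predefined mapping concrete, here is a minimal, purely illustrative sketch in Python of such a greeting system. The sensor fields and thresholds are hypothetical; the point is that the “intelligence” is nothing more than the scheme mapping inputs to a decision.

```python
# Illustrative sketch: an "intelligent" greeter as a mapping from
# sensor readings to decisions. Fields and thresholds are made up.

def greet(sensor_reading: dict) -> str:
    """Map a sensor reading to a greeting decision."""
    if not sensor_reading.get("person_detected", False):
        return ""  # no one to greet
    hour = sensor_reading.get("hour", 12)
    if hour < 12:
        return "Good morning!"
    if hour < 18:
        return "Good afternoon!"
    return "Good evening!"

# The "intelligence" here is just the predefined mapping above.
print(greet({"person_detected": True, "hour": 9}))   # Good morning!
print(greet({"person_detected": False, "hour": 9}))  # (no greeting)
```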

An AI is a decision-support system based on a predefined intelligence.

In the science of AI, we often measure the efficiency of a system using precision metrics. In a quantified world, it is then easy to see whether the system is indeed intelligent. How many times was a customer detected? How often did you manage to predict the weather accurately? However, it is harder to avoid problematic decisions and poor judgements, raising concerns about a possible future full of clashes between humans and intelligent systems.
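As an illustration of such a metric, here is a small Python sketch computing precision for a hypothetical “customer detection” task, with made-up labels. The score only tells us how often positive predictions were right; it says nothing about whether each decision was a good one.

```python
# Illustrative sketch of a precision metric for a "customer detection" system.
# Labels and predictions below are invented for the example.

def precision(y_true, y_pred):
    """Fraction of positive predictions that were actually correct."""
    true_positives = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
    predicted_positives = sum(1 for p in y_pred if p == 1)
    return true_positives / predicted_positives if predicted_positives else 0.0

# 1 = "customer detected", 0 = "no customer"
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
print(f"Precision: {precision(y_true, y_pred):.2f}")  # 0.75
```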

Each individual has their own beliefs on what is right or wrong and on what makes someone smart. We have our own ethics, often regulated by the sentiment of belonging to a group. Since AI has a growing and meaningful social impact, it is legitimate to wonder how we will define right and wrong for those intelligent systems. This is called roboethics.

Check the next part here.
