Illustration depicting a “benevolent” robot in a face-off with a “malevolent” robot

Journey Towards Ethical AI

Aurélien Coppée
Google for Developers EMEA
5 min read · Aug 4, 2023


Allow me to introduce myself.

I am Aurélien Coppée. I grew up surrounded by technology like many people of my generation, and from a very young age, I have been fascinated by the possibilities offered by artificial intelligence (AI). My initial studies in mechatronics have now led me on a path focused on electronics and computer science.

At 18 years old, full of ideas and dreams, I was gaining my first professional experience. I attended a meeting where we were looking for ways to improve the efficiency of a machining cell, and I naively suggested integrating AI into the robot and exploiting Big Data. I quickly learned that these technologies are not as simple or magical as they seem. My supervisor brought me back to reality, which led me to expand my research and explore other ideas.

I was struck by the contrast between the simplicity suggested by the concept on the surface and the much more complicated reality hidden behind it. The more I deepened my knowledge, the more I discovered a complex universe, rich in nuances, and challenges.

I thus became aware that understanding artificial intelligence goes far beyond simply understanding algorithms and code. It also requires understanding its potential impact on daily life, on our social interactions, and more broadly, on our entire society.

The movie Oppenheimer, which explores the development of the atomic bomb, highlights how technology, despite its destructive potential, can also reflect the moral and ethical dilemmas humanity must confront.

Beyond Asimov

It is in this context that I undertook my journey towards a deeper understanding of ethics in AI. My exploration of neuroengineering, a sector I aspire to join, made me realize the importance of ethics, especially in medical diagnosis.

This realization was the starting point for my new adventure. I began to ask questions and search for answers. However, before we delve any further, I believe it is essential to share a common understanding of basic concepts.

Artificial Intelligence is the theory and development of computer systems able to perform tasks that would usually require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Ethics, on the other hand, encompasses the moral principles that govern a person’s behavior or the conducting of an activity. (Oxford Languages)

When these two concepts intersect, we arrive at the realm of Ethical AI, which seeks to ensure that artificial intelligence systems operate in a manner that is morally sound and respects the fundamental rights and freedoms of individuals. It addresses concerns about AI’s decision-making processes, its impact on society, and the moral responsibility associated with creating and deploying such systems.

The first thing I thought of when I discovered this field was Isaac Asimov’s Three Laws of Robotics.

Image quoting Asimov’s Three Laws: 1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First and Second law.

These laws, drafted by Asimov in the 1940s, were designed to protect humans from the potential dangers that AI and robotics could present. However, upon delving deeper, one realizes that these laws are closer to fantasy than to reality. They are an important foundation, certainly, but they do not suffice to cover the numerous issues that arise today.

Indeed, our contemporary concerns go far beyond simple human-machine interaction. We are faced with more complex issues: privacy, transparency, fairness, accessibility, responsibility, and control.

Privacy is a central issue in the current context. AI systems are often trained on huge amounts of data, much of it personal. How can we ensure that this data is used ethically? What about user consent?

Transparency is another major concern. AI algorithms are often perceived as “black boxes,” with opaque internal mechanisms. How can we ensure sufficient transparency and explainability so that users can understand and trust the decisions made by AI?
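One concrete technique practitioners use to peek inside such a “black box” is permutation importance: shuffle one input feature at a time and measure how much the model’s predictions drift. Here is a minimal sketch of the idea in Python; the toy model and its weights are my own invention for illustration, not a real system.

```python
import random

# Toy "black box": a scoring model whose internal weights we pretend not to see.
def model(features):
    return 0.8 * features[0] + 0.1 * features[1] + 0.0 * features[2]

def permutation_importance(model, rows, n_features):
    """Estimate each feature's influence by shuffling it across rows
    and measuring the average change in the model's predictions."""
    baseline = [model(r) for r in rows]
    rng = random.Random(0)
    importances = []
    for i in range(n_features):
        column = [r[i] for r in rows]
        rng.shuffle(column)
        perturbed = [r[:i] + [v] + r[i + 1:] for r, v in zip(rows, column)]
        preds = [model(r) for r in perturbed]
        drift = sum(abs(a - b) for a, b in zip(baseline, preds)) / len(rows)
        importances.append(drift)
    return importances

rng = random.Random(42)
data = [[rng.random() for _ in range(3)] for _ in range(100)]
importances = permutation_importance(model, data, 3)
# The first feature dominates, the third contributes nothing —
# exactly what the hidden weights (0.8, 0.1, 0.0) would predict.
```

Even this crude probe lets a user ask the right question of an opaque system: “which of my attributes actually drove this decision?”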

Fairness is also a crucial issue. How can we prevent AI from reproducing existing biases and inequalities in society? How can we ensure that AI benefits everyone, and not just a privileged minority?
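One simple way practitioners probe for this kind of bias is to compare a model’s positive-decision rate across groups, a check often called demographic parity. The sketch below uses made-up loan decisions purely to show the mechanics:

```python
# A minimal demographic-parity check.
# Each record is (group, approved); the data here is invented for illustration.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rates(decisions):
    """Compute the approval rate for each group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 — a large gap is a red flag worth investigating
```

A large gap does not prove the model is unfair on its own, but it is a cheap, auditable signal that something deserves scrutiny.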

Accessibility raises many questions. Advances in AI often remain out of reach for many people. How can we ensure fair access to AI technologies, regardless of socioeconomic status or place of residence? How can we promote inclusion through AI rather than exclusion?

Responsibility and control are two other significant issues. Who has the right to develop and deploy AI? Who controls these technologies and decides how they are used?

Ethical Drift of AI

History is full of examples where AI, despite the best intentions, has acted in ways that were far from ethical. One such case is that of Microsoft and its AI named Tay.

The Twitter profile picture of Tay

Tay was a chatbot launched by Microsoft in 2016 to converse with users on Twitter (the platform now known as X) and learn from them to enhance its conversational abilities. However, less than 24 hours after its launch, Tay began posting racist, sexist, and offensive messages, and Microsoft had to take it down.

What went wrong? Tay’s creators certainly did not intend to create a racist or offensive AI. However, they underestimated the internet’s ability to corrupt their product. Tay was designed to learn and adapt based on interactions with Twitter users, and some users quickly realized they could manipulate the AI to repeat and adopt their hate speech.
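This failure mode is easy to reproduce in miniature. The toy chatbot below is my own illustration, not Tay’s actual design: it memorizes phrases verbatim from users, so without any moderation, whatever users feed it comes straight back out.

```python
# A toy "repeat-after-me" learner illustrating Tay's failure mode.
# BLOCKLIST is a placeholder stand-in for real content moderation,
# which is a far harder problem than a word list.
BLOCKLIST = {"offensive"}

class NaiveChatbot:
    def __init__(self, moderate=False):
        self.memory = []
        self.moderate = moderate

    def learn(self, phrase):
        # Without moderation, the bot memorizes anything users say.
        if self.moderate and any(w in BLOCKLIST for w in phrase.lower().split()):
            return  # silently drop flagged input
        self.memory.append(phrase)

    def reply(self):
        # Echoes the most recently learned phrase.
        return self.memory[-1] if self.memory else "Hello!"

naive = NaiveChatbot()
naive.learn("something offensive")
print(naive.reply())    # "something offensive" — learned verbatim

guarded = NaiveChatbot(moderate=True)
guarded.learn("something offensive")
print(guarded.reply())  # "Hello!" — flagged input was never learned
```

The point is not that a blocklist would have saved Tay, but that a system which learns from the open internet needs its safeguards designed in before launch, not after.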

Towards Ethically Guided AI

We must recognize that AI, while offering incredible opportunities for innovation and improving human life, also raises complex and pressing issues of ethics, responsibility, and governance. It is not merely a question of technology, but a societal issue. It is a challenge that requires the attention and collaboration of everyone. Not only AI experts and researchers, but also legislators, regulators, companies, and the public.

We cannot ignore these issues hoping they will resolve themselves. We must be proactive in our search for solutions, and that starts with deep reflection on the ethical implications of AI. Furthermore, we must ask ourselves not only what AI can do, but also what it should do and, just as importantly, what it should not do.

I am Aurélien Coppée, and I invite you to join me in this quest. The development of AI is an incredibly exciting adventure, but also one laden with responsibilities. Let’s make sure it is guided by ethics, respect for human dignity, and the pursuit of the common good.

* * *

This article was created during our Summer of Writers campaign. If you’d like to learn more, check out the article linked above or join our Discord channel on the Google Developers Communities server — you’re welcome at the Writers’ Corner.
