Fairness AI: Ethical issues of AI in society and legal foundations

Jeremie Bureau · Jellysmacklabs · Apr 25, 2022 · 10 min read

As a data-driven company, Jellysmack pays particular attention to the ethical and legal aspects of data. A team is dedicated to data governance issues to ensure data security, privacy, and legal compliance. We would like to dedicate a series of articles to these important themes.

Introduction

Data, the black gold of the 21st century, is extensively collected online and IRL by cameras, sensors, and connected devices. The use of data has taken on a new importance for both public organizations and private companies, which can no longer operate without this precious information.

Big data and artificial intelligence algorithms have brought numerous ethical and social issues to light. How can we be sure to respect people’s privacy and fundamental rights when collecting such massive amounts of data? Where do these tools fit into the decision-making process?

In this article, we will explore both the legal underpinnings of data management and how companies are adapting to ensure that their brand image is associated with ethical and trusted AI. Then, we will describe some AI applications that have raised questions and fears from a societal and human perspective.

Legal

source: pixabay

Laws

As information and computer technologies continue to proliferate, a number of regulations have been put in place to protect the rights of individuals.

In France, the Data Protection Act of January 6, 1978, was one of the first laws in Europe to protect the rights of individuals by regulating the processing of personal data. Article 1 summarizes the law: “Information technology should be at the service of every citizen. Its development shall take place in the context of international cooperation. It shall not violate human identity, human rights, privacy, or individual or public liberties.”

The Commission Nationale de l’Informatique et des Libertés (CNIL) was created to uphold the law and raise awareness of the ethical issues surrounding algorithms and artificial intelligence. Every European country has its own CNIL equivalent, and the map below illustrates the different levels of data protection worldwide.

More recently, the General Data Protection Regulation (GDPR), which took effect at the European level on May 25, 2018, complements, strengthens, and standardizes these laws. Today, it is the reference when it comes to personal data protection. The GDPR’s key principles are:

  • Collect only data that is essential to the success of a given project.
  • Do not collect data without the knowledge of individuals.
  • Make it easy for people to exercise their data rights.
  • Set retention periods for data (a toy retention check follows this list).
  • Protect the integrity of data.
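
To make the retention-period principle concrete, here is a minimal sketch. It assumes a hypothetical `collected_at` timestamp on each record and an arbitrary 24-month policy; it is an illustration, not legal guidance.

```python
from datetime import datetime, timezone

# Hypothetical policy: purge personal data 24 months after collection.
from datetime import timedelta
RETENTION = timedelta(days=730)

def is_expired(collected_at: datetime, now: datetime) -> bool:
    """Return True if a record has outlived its retention period."""
    return now - collected_at > RETENTION

# A record collected on 2022-01-01 has long outlived a 24-month window.
record_time = datetime(2022, 1, 1, tzinfo=timezone.utc)
print(is_expired(record_time, now=datetime.now(timezone.utc)))  # True
```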

On April 21, 2021, the European Commission proposed a draft regulation on artificial intelligence systems (the AI Act). The idea is to classify the various uses of AI by level of risk and to establish precise rules for each category.

What sanctions are foreseen by the law?

Non-compliance with the GDPR can result in fines of up to 20 million euros or 4% of the company’s worldwide annual revenue, whichever is higher. These sanctions are intended to dissuade businesses of all sizes from breaking the law.
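
The “whichever is higher” rule means the cap scales with company size, as this one-liner shows (a sketch of our reading of the rule, not legal advice):

```python
def gdpr_fine_cap(annual_revenue_eur: float) -> float:
    """Upper bound of a GDPR fine: EUR 20M or 4% of worldwide
    annual revenue, whichever is higher."""
    return max(20_000_000, 0.04 * annual_revenue_eur)

print(gdpr_fine_cap(100_000_000))     # small company: capped at EUR 20M
print(gdpr_fine_cap(50_000_000_000))  # EUR 50B giant: up to EUR 2B
```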

Cumulative GDPR fines issued between May 2018 and January 2022 exceeded 1 billion euros. The most significant fines are shown on the graph below:

source: https://www.dlapiper.com/

What measures are companies taking to ensure compliance?

Various changes and new tools have been introduced by companies to ensure adequate compliance. There are three major changes worth exploring:

Emerging business lines

  • Data Protection Officer (DPO): ensures the company’s compliance with regulations. They are the backbone and driving force behind the organization’s data protection strategy.
  • Data Quality & Data Governance: defines data elements and establishes policies and procedures related to the collection and accuracy of data.
  • Chief Information Officer (CIO): although this position is not completely new, it plays a key role in ensuring that modern software and technologies are used to streamline business operations.

Guidance

  • Specialized lawyers and consulting firms with expertise in data have become essential in monitoring GDPR compliance and ethics in AI.
  • Tech specialists provide support in updating the data architecture and choosing the right data for specific problems.

New corporate communication

It is quite common for companies to highlight not only their data security and transparency but also their commitment to social and environmental fair play.

The seven key requirements of the EU’s Ethics Guidelines for Trustworthy AI are:

  • Human agency and oversight: AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy.
  • Technical robustness and safety: Technical robustness requires AI systems to be developed with risk prevention in mind so that they reliably function as intended, minimizing unintentional and unexpected harm, and preventing unacceptable harm.
  • Privacy and data governance: AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle. This includes the information initially provided by the user, as well as the information generated about the user over the course of their interaction with the system.
  • Transparency: The traceability of AI systems should be guaranteed (a minimal audit-logging sketch follows this list).
  • Diversity, non-discrimination, and fairness: AI systems should take the full range of human abilities, skills, and needs into account, and their accessibility should be guaranteed.
  • Environmental and societal well-being: AI systems should be used to support social progress and reinforce environmental sustainability.
  • Accountability: Measures should be put in place for AI systems that mandate responsibility regarding their use and results, and require those involved to be held accountable.
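
Traceability is one requirement that translates directly into engineering practice: log every automated decision together with its inputs and the model version so it can be audited later. A minimal sketch, where all names are hypothetical:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("decision_audit")

def log_decision(model_version: str, features: dict, decision: str) -> None:
    """Append one auditable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the decision
        "features": features,            # what the model saw
        "decision": decision,            # what it decided
    }
    logger.info(json.dumps(record))

# Hypothetical example: a credit-scoring model approves an application.
log_decision("credit-scorer-1.3.0", {"income": 42_000, "tenure_months": 18}, "approved")
```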

Ethical charter creation (e.g., Montréal)

The Montreal Declaration for responsible AI development has three main objectives:

  1. Develop an ethical framework for the development and deployment of AI;
  2. Guide the digital transition so that everyone can benefit from the technological revolution;
  3. Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development.

Ethics and society

source: pixabay

Introduction

The phrase ethical and trustworthy artificial intelligence is becoming more popular in the media and in discussions about a digital society. The term ethical, above all, relates to the morals and norms that govern the behavior of individuals and businesses. These norms supplement laws and, in new sectors such as AI, serve as a “pathfinder.” Artificial intelligence has progressed so rapidly in recent years that legislators struggle to keep up (cf. Part I).

Unregulated smart technologies are available to any user who wants them and can afford them. As a result, advanced technologies are increasingly becoming part of our everyday lives. AI-based systems are more and more involved in commercial decision-making, and more than half of Europeans want artificial intelligence to replace legislators. According to a recent survey, 65 percent of organizations were unable to explain how their AI-based algorithms make decisions. Yet these decisions are crucial, as they affect the lives of consumers.

Despite widespread interest in these unregulated technologies, there is a need to develop ethical and transparent artificial intelligence. We’ll use real-world examples to demonstrate how ethics plays a critical role in the creation of cutting-edge technology.

Autonomous systems

source: iotworldtoday.com

The development of autonomous vehicles is one of the most common examples of the necessity of ethics in the development of smart technology. A self-driving car is a car that can sense its surroundings and move with little or no human assistance. These technologies are predicted to reshape the car industry, transportation systems, and our lifestyles.

Autonomous vehicles continually acquire a massive quantity of data in order to function, using a variety of sensors such as cameras and radar. The system must be programmed so that the car “understands” the data it collects and makes the best possible decision in every imaginable scenario. Most of the time, this is done with a reinforcement learning algorithm: the autonomous system learns what it should do and what it should avoid so as to maximize a cumulative reward. But how can we teach an autonomous system to react in moral gray areas? Is it possible to automate ethics?
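
To make the reward-maximization idea concrete, here is a toy sketch: a tabular Q-learning agent on made-up states and rewards, nothing close to a real driving stack.

```python
import random

# Toy tabular Q-learning: learn which action maximizes cumulative reward.
states = ["clear_road", "obstacle_ahead"]
actions = ["accelerate", "brake"]
Q = {(s, a): 0.0 for s in states for a in actions}

def reward(state: str, action: str) -> float:
    """Made-up rewards: braking for an obstacle is good; not braking is very bad."""
    if state == "obstacle_ahead":
        return 1.0 if action == "brake" else -10.0
    return 1.0 if action == "accelerate" else -0.1

alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(5000):
    s = random.choice(states)
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q[(s, x)])
    s_next = random.choice(states)  # toy random transition
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (reward(s, a) + gamma * best_next - Q[(s, a)])

print(max(actions, key=lambda x: Q[("obstacle_ahead", x)]))  # -> "brake"
```

The questions above are precisely about what such a hand-written reward function cannot easily encode.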

In the event of an unavoidable collision or a technical malfunction, self-driving cars will have to decide who lives and who dies. Should we then consider giving these cars legal personality? Or should the user be held accountable? The use of autonomous systems to make choices raises concerns about accountability in the event of a malfunction. Additionally, in the medical field, the question arises: who should be held accountable if an AI makes a diagnostic error?

According to the study on the Ethics of Artificial Intelligence (COMEST): “in some contexts, employing AI as a human-assisted or fully autonomous decision-maker might even be seen as a pact with the devil. In order to take advantage of the speed and large data ingestion and categorization capabilities of an AI engine, we will have to give up the ability to influence the decision. Moreover, the consequences of such decisions can be dire, especially in conflict situations.”

Security

source: pixabay

The European population is increasingly concerned about issues such as public safety, border control, and the ability to identify fake news sources. Recent breakthroughs in artificial intelligence (AI) have allowed for increased overall security but at a moral cost.

Many ideas are currently being researched in laboratories and only a handful have been implemented, but where does this leave us? For example, an automatic lie-detection system is being studied to reinforce security in airports without slowing down the flow of passengers (the iBorderCtrl project). The program would not replace human verification but would help with decision-making. Humans are lousy lie detectors, barely outperforming chance, whereas AI-based technologies have a detection rate of around 70%. So it would seem to be in society’s best interest to deploy them. But is this level of efficacy enough?

Every month, around 5 million travelers pass through Paris’s Charles de Gaulle Airport. Assuming that liars constitute 1% of them, a 70%-accurate algorithm would flag roughly 1.5 million innocent people as probable liars, detect 35,000 of the 50,000 actual liars, and allow 15,000 to slip through the cracks. If the tool’s reliability were as high as 99 percent, only 500 liars would slip through and the 49,500 liars identified would equal the number of innocent people wrongfully flagged; the cost-benefit ratio might then be judged acceptable.
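
These figures are just base-rate arithmetic, as a short sketch shows (assuming, as the numbers above imply, that the detector is equally accurate on liars and on honest travelers):

```python
def screening_outcomes(travelers: int, liar_rate: float, accuracy: float):
    """Confusion-matrix counts, assuming the tool is equally accurate on
    liars (sensitivity) and on honest travelers (specificity)."""
    liars = round(travelers * liar_rate)
    honest = travelers - liars
    detected = round(liars * accuracy)              # true positives
    missed = liars - detected                       # false negatives
    false_alarms = round(honest * (1 - accuracy))   # false positives
    return detected, missed, false_alarms

print(screening_outcomes(5_000_000, 0.01, 0.70))  # (35000, 15000, 1485000)
print(screening_outcomes(5_000_000, 0.01, 0.99))  # (49500, 500, 49500)
```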

This case demonstrates the need for extremely accurate tools in a field where decisions carry heavy consequences for those who may be wrongfully accused.

Mass surveillance / Social Credit System

source: pixabay

The tools used for surveillance and monitoring at airports can be deployed on a much larger scale. Around the world, there is a growing trend of implementing systems that open the door to mass surveillance. How can we ensure the protection of people’s fundamental rights and privacy?

In France, facial recognition is already being tested in public places under the supervision of researchers and members of civil society. In China, an estimated 540 million surveillance cameras are installed across the country. Algorithms are used to monitor factory workers’ productivity, school attendance, and lapses of concentration in the classroom (via geolocated uniforms, headbands that record brain activity…). Monitoring systems are even gaining traction in households: in 2020, ByteDance successfully launched the Dali smart lamp, which monitors children while they do their homework and informs parents when their child’s focus begins to wane.

These types of surveillance can have serious consequences. In 2014, the Chinese government announced a project for a population rating system, a tool for judging a person’s trustworthiness based on their social conduct or identified psychological traits. The idea is to single out model citizens: a citizen earns or loses points based on their behavior, and their name may be added to lists (for example, non-payment of a fine would be sanctioned, while participation in community service would be rewarded). Scores and lists would be made public and might be consulted by employers, government offices, and others when recruiting, issuing credit, and so on. Such a system does not yet exist at the national level, but several local programs have been established. It is being promoted as a means of fighting corruption.

In light of these threats, the EU has adopted a firm stance on certain applications of AI, outlawing population rating systems like those introduced in China and limiting mass surveillance to highly specific exceptions. The end goal is to structure AI so that it does not jeopardize fundamental rights while still allowing the economy to flourish and businesses to remain competitive.

Conclusion

AI breakthroughs have led to huge social transformations that present a variety of ethical challenges. These changes have prompted the establishment of the regulations discussed in the first section of this article. Although the evolution of the law may appear restrictive, ethics in AI is a vector of innovation for businesses: it compels them to question how they currently use data, whether that use respects people’s privacy, how business data is handled, and how data is mapped across the organization, which in turn makes companies more agile.

More articles on these topics are coming, including an illustration of how biases in the data changed the behavior of some of Jellysmack’s algorithms, so stay tuned.

The data team at Jellysmack is always on the lookout for new talent.

Take a look at all our open positions here: https://jobs.jellysmack.com/.
