The Moral Compass in the Machine: Ethics in AI

Rosemary J Thomas, PhD
Published in Version 1
Oct 13, 2023

AI ethics, or machine ethics, is an interdisciplinary field focused on the moral aspects of artificial intelligence, involving the study of ethical theories, guidelines, policies, and regulations specific to AI. It also explores the concept of ethical AI: artificial intelligence systems capable of upholding ethical standards and behaving in a conscientious and trustworthy manner.

Establishing an ethical foundation for AI is necessary both for developing systems that adhere to ethical norms and for ensuring that AI operates in an ethically sound manner. This endeavour hinges on defining the values and principles that distinguish what is deemed moral from what is harmful. With suitable ethical frameworks and the methodologies and technologies that support them, one can build and deploy systems that exhibit ethical behaviour.

In this article, we explore the primary ethical challenges arising from AI and examine best practices for effectively addressing them.

Major ethical issues caused by artificial intelligence

Biases

Some of the prominent types of biases are discussed below.

Sampling bias occurs when the training data used for machine learning algorithms does not adequately represent the population it is meant to serve or lacks diversity, resulting in inaccurate and biased predictions.

Algorithmic bias arises from unequal treatment of datasets, whether due to biased assumptions during AI algorithm development or inherent prejudices in the training data. It stems from the algorithm’s design and execution, which favour certain attributes and ultimately lead to unjust outcomes.

Confirmation bias can manifest either in the training data or in the way prompts are framed, influencing the AI’s responses. When users request information or responses from AI, the system may generate content that aligns with their existing viewpoints.


Measurement bias is associated with flawed sensors or devices. It can also arise when human judgment and perspectives influence data collection, favouring or underrepresenting specific groups.

Interaction bias occurs when biased data is employed to train AI models: the outputs generated by these models inherently carry those biases. AI systems can magnify them and inadvertently perpetuate societal prejudices.
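Several of these biases can be surfaced with simple data checks before any training happens. Below is a minimal sketch of a sampling-bias check in Python, assuming a pandas DataFrame with a hypothetical `gender` column and an assumed reference distribution for the population served; both the data and the 80% threshold are illustrative, not prescriptive.

```python
import pandas as pd

# Hypothetical training set with a demographic column; in practice the
# sensitive attribute and the reference shares come from your own domain.
train = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M", "M", "F", "M", "M", "M"],
    "label":  [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
})

# Assumed census-like reference distribution for the population served.
reference = {"F": 0.50, "M": 0.50}

observed = train["gender"].value_counts(normalize=True)
for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    flag = "UNDER-REPRESENTED" if actual < 0.8 * expected else "ok"
    print(f"{group}: expected {expected:.0%}, observed {actual:.0%} -> {flag}")
```

A check like this only catches representation gaps; algorithmic and interaction bias still need to be evaluated on the model’s outputs.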

Transparency and explainability

Transparency is a key issue in AI ethics, particularly for deep learning algorithms that often operate as “black boxes,” obscuring their decision-making processes. This opacity becomes more problematic when AI models are trained on biased data, potentially leading to unfair outcomes. In sensitive fields like healthcare, transparency directly shapes public and patient trust and, with it, the adoption of AI technologies.

Explainability is a closely related ethical concern. It too is hindered by the black-box nature of many AI algorithms, which makes it difficult to explain the relationship between input data and results. Deep learning models, such as neural networks, are often considered black boxes because of their complex, opaque internal processes.
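Even when a model’s internals stay opaque, model-agnostic techniques can probe which inputs drive its outputs. The sketch below uses scikit-learn’s permutation importance on a small neural network trained on synthetic data; the dataset and model are stand-ins chosen purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a real, possibly sensitive dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small neural network: effectively a black box to end users.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X_tr, y_tr)

# Permutation importance asks: how much does accuracy drop when one
# feature is shuffled? It reveals influence without opening the box.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Techniques like this do not make the model transparent, but they give stakeholders a way to audit which inputs matter and to spot suspicious dependencies.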

Privacy

The main AI privacy and security concerns revolve around personal data handling, which poses risks such as intentional breaches or accidental leaks, potentially leading to identity theft, fraud, and other forms of abuse. Additionally, as AI systems become more complex and autonomous, they are vulnerable to hacking or manipulation, potentially leading to harmful decisions.

Facial recognition, a contentious aspect of AI, has various applications, such as airport security, smartphone unlocking, and federal law enforcement databases. While it aids in finding missing individuals and identifying criminals, critics argue that it can lead to mistaken identity and privacy invasion. Several cities and states have banned or regulated its use. The challenge for lawmakers is to strike a balance between protecting individual privacy and promoting AI development without getting entangled in broader social and political issues.

When assessing the privacy impact of AI, it’s essential to differentiate between data-related concerns common to all AI, like false positives and overfitting, and those particular to the use of personal information.

Autonomy

AI’s capability to either hinder or support human autonomy is a significant issue. AI systems can show interpersonal disrespect in forms that vary in extent, including direct interference, coercion, manipulation, deception, nudging, paternalism, human delegation of cognitive tasks to AI, and biased AI recommendation systems. The degree of impact depends on factors like transparency, consent, alignment with users’ values, and the nature of AI recommendations and interactions.

There are ethical challenges posed by autonomous systems, particularly autonomous vehicles and weapons systems. Autonomous weapons systems could replace human soldiers, potentially reducing war crimes if equipped with ethical governance, but concerns include increased conflicts, AI’s interpretation of ethical principles, and hacking risks. Though self-driving cars offer benefits like improved traffic safety, concerns arise about accidents and the need for “ethics settings” to determine their behaviour, posing real-life machine ethics challenges. Fatal accidents involving autonomous vehicles have already occurred.

In addition, philosophers explore ethical decision-making principles, focusing on “meaningful human control” in situations involving harm to humans. There are also concerns about responsibility gaps: assigning blame for outcomes generated by autonomous systems, especially morally questionable ones, presents complex challenges in allocating responsibility.

Sustainability

There are significant financial and environmental costs associated with training deep learning models, particularly in natural language processing (NLP), which involves extensive energy consumption and carbon emissions. As AI models are developed and used, it is crucial to evaluate their impact on climate change. Although some energy for AI model training can come from renewable sources or carbon-offset resources, the high energy demands remain a concern, especially in regions without access to carbon-neutral energy. This highlights the ethical dilemma of allocating energy and resources to AI model training rather than providing electricity to millions of people lacking access to modern amenities.

Additionally, fine-tuning AI models can have a greater environmental impact than initial training. This raises ethical questions about the proportionality of using AI methods for certain tasks, particularly those with ethically charged implications. Independent studies suggest that AI’s contribution to climate change can be substantial, with some estimates indicating that information and communication technology emissions, including AI, may exceed 14% of global emissions by 2040.
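To make these costs concrete, a rough back-of-the-envelope estimate can be computed from hardware power draw, run time, and grid carbon intensity. All figures in the sketch below are illustrative assumptions, not measurements of any particular model.

```python
# Back-of-the-envelope estimate: energy = GPU power x GPUs x hours;
# emissions = energy x grid carbon intensity.
gpu_power_kw = 0.3          # ~300 W per accelerator (assumed)
num_gpus = 8                # assumed cluster size
hours = 120                 # assumed length of a fine-tuning run
grid_kg_co2_per_kwh = 0.4   # assumed regional grid carbon intensity

energy_kwh = gpu_power_kw * num_gpus * hours
emissions_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"{energy_kwh:.0f} kWh -> ~{emissions_kg:.0f} kg CO2e")
# A lower-carbon grid (e.g., 0.05 kg/kWh) would cut this footprint
# roughly eightfold, which is why regional variation matters.
```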


Optimal practices for addressing AI ethical issues

AI can unintentionally perpetuate human biases, leading to unfair decisions or recommendations. Technical tools exist that can detect and mitigate AI bias across various data types, and explainability tools play a crucial role in identifying bias in AI models.
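As one concrete example of such a check, the sketch below computes a disparate impact ratio over hypothetical model decisions, using the common “four-fifths rule” as a red flag; the predictions and group labels are invented for illustration.

```python
import numpy as np

# Hypothetical model outputs: 1 = favourable decision (e.g., loan
# approved), alongside a sensitive attribute for each individual.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "A",
                  "B", "B", "B", "B", "B", "B"])

rate_a = preds[group == "A"].mean()
rate_b = preds[group == "B"].mean()

# Disparate impact ratio; the "four-fifths rule" flags values below 0.8.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: investigate data and model.")
```

Simple metrics like this are a starting point, not a verdict: a low ratio signals that the data collection and model design deserve closer scrutiny.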

In complex AI scenarios, full explainability may not always be realistic or necessary, and trustworthiness becomes a key concept. Users might not fully grasp the inner workings of AI systems, but trust can be built through factors like training, safety regulations, and the reputation of the system’s manufacturer. Consider, for instance, how drivers trust semi-autonomous vehicles without fully understanding their algorithms. It is vital to establish this sense of trust and map out the conditions under which AI systems can be relied upon to ensure safe and ethical use.

Improved explainability can boost trust, especially in medicine, where professionals need to understand and trust AI-driven decisions. Some AI models, like decision trees, are inherently interpretable.
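To illustrate inherent interpretability, the sketch below trains a shallow decision tree with scikit-learn and prints its learned rules as plain if/then text; the iris dataset is used purely as a convenient stand-in.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow decision tree: its decisions can be read directly as rules.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# export_text prints the learned if/then structure in plain language,
# exactly the kind of transparency that black-box models lack.
print(export_text(tree, feature_names=list(data.feature_names)))
```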

AI developers need to be educated about their own biases and how they can inadvertently introduce them into AI system development. Creating a culture of fairness is essential, involving all decision-makers who understand AI bias issues and their impacts. AI systems should reduce the digital divide, improve accessibility, and address issues related to gender, disability, geography, and ethnicity. AI fairness should have a global dimension.

Additionally, promoting human autonomy and positive independence in the use of AI involves measures such as transparency and informed consent. The values of freedom and autonomy are crucial in guiding AI development and use, aiming to make AI technology that inspires and liberates, free from external interference and systemic power dynamics.

AI creators should establish internal principles and governance frameworks to ensure their AI systems are fair, accurate, transparent, and robust. External institutions, such as governments and industry bodies, can also contribute by co-creating AI frameworks.

Achieving AI fairness necessitates governance structures that detect and mitigate bias in data collection and processing. Data protection law requires that personal data be processed fairly without unjustified adverse effects on individuals. Frameworks should define oversight specific to each use case, and fairness should be context-dependent and defined through multi-stakeholder consultations. Further insights on AI Law & Regulation can be found in the linked blog.

Furthermore, AI sustainability metrics are vital: measuring carbon emissions during model training and acknowledging regional variations in energy supply. Energy consumption differs drastically between AI algorithms, which shows the importance of incorporating energy-efficiency metrics alongside traditional performance measures. The ultimate objective is to encourage the development of energy-efficient and ethically responsible AI solutions, with standardized reporting of energy and carbon data to foster a culture of social responsibility in AI research and development. Further insights on Green Algorithm can be found in the linked blog.
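One open-source option for such measurement is CodeCarbon, which estimates a run’s emissions from hardware power draw and the local grid’s carbon intensity. The sketch below wraps a dummy training function with its tracker; `train_model` is a stand-in for a real training loop.

```python
# Requires: pip install codecarbon
from codecarbon import EmissionsTracker

def train_model():
    # Stand-in for a real training loop; burns some CPU for the demo.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker()  # infers region/grid intensity where possible
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2e for the run

print(f"Training emitted ~{emissions_kg:.6f} kg CO2e")
```

Logging a figure like this alongside accuracy in every experiment report is one practical way to make the standardized reporting described above routine.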

The drive to tackle AI ethical issues extends beyond preserving human values; it is also a socio-economic obligation. Companies that fail to establish ethical standards in their AI products may encounter obstacles in gaining broader technology acceptance. Consequently, every stakeholder within the AI ecosystem should allocate resources and work diligently to actively promote comprehensive AI ethics.

This article was written with the help of ChatGPT and Bing Chat.

About the author:

Rosemary J Thomas, PhD, is a Senior Technical Researcher at the Version 1 AI Labs.
