AI, ethics and us

How does the rise of AI affect us and our lives in the digital society? Can we apply the ethics of our offline lives to this online, AI context? Or do we need a new framework to keep pace with a non-human intelligence?

Digital Society admin
Digital Society
14 min read · Jan 5, 2024


Model hand holding the letters ‘AI’
Photo by Igor Omilaev on Unsplash

Contents

  1. Introduction
  2. Ethics
  3. AI and amorality
  4. Improving AI and the ethics of human interventions
  5. AI and academic freedom
  6. AI and its future ethical implications
  7. Summary

1. Introduction

Sabine Sharp introduces this topic on AI, Ethics and Us. A full transcript can be found here.
Fariha and Bethany from the Library Student Team reflect on AI, Ethics and Us and the Rise of Simulated Spaces. Full transcript here.

Artificial intelligence (AI) and the fascination with its possibilities (both good and bad) is nothing new. At the University of Manchester in 1950, Alan Turing proposed his famed ‘Turing Test’ which we discuss in Chatbots and Questions of (digital) conversation in week 9. The idea of ‘thinking’ machines appeared in our consciousness much, much earlier than that. The current explosion in the development of generative AI has seen a step-change where it’s no longer the domain of science fiction writers or computer scientists alone, but something that is becoming part of our everyday lives.

With it we have seen a similar explosion in enthusiasm for its prospective benefits and in worries over its potential for catastrophe. Sometimes all at once!

In this topic we will touch on some of the ethical questions around shepherding the development of AI to ensure it works for the benefit of human society (and not its annihilation!). Our main focus will be where AI affects our personal ethics: the systems of thought we use to help us live ‘good lives’.

In this topic you will:

  • Look at what ethics are and how an ethics of online life is developing.
  • Consider some contexts where our own ethics and interactions with AI come together.
  • Think about how AI might shape the future of our ethical thinking.

^ back to contents

2. Ethics

The Cambridge Dictionary defines ethics as the ‘study of what is morally right and wrong, or a set of beliefs about what is morally right and wrong’. The study of ethics has been around for as long as humans have been able to reflect on morality. Black Mirror, through its six seasons, has provided us with an array of ways in which technology, and particularly AI, interacts with our views of morality, provoking us in ways that are often as compelling as they are uncomfortable. A major question within ethics is “what is the best way to live?” As storytelling such as Black Mirror shows, hypothetical advances in AI technology make answering this question ever more difficult. With AI reaching into multiple areas of our lives online, do we need a new ethics for the AI age?

✅ Poll

Read the following prompt then vote below. All responses are anonymous.

Do you think we should apply the same rules to online artificial intelligence as those we apply to offline human intelligence?

Options: Yes / No / Unsure. If you can’t access the poll, please add a response to this post.

Computer and information ethics

Recognising the need to create a system of ethics for the information age is nothing new. As early as the 1940s, the founder of this field, Norbert Wiener, foresaw the coming of a second industrial revolution and the new ethical challenges it would pose.

Later, and unaware of Wiener’s work on computer ethics, Walter Maner at Old Dominion University considered computer ethics to be the study of ethical issues that are “aggravated, transformed or created by computer technology”. We can use this framework to begin thinking about how AI developments and our ethics interact:

💬 Contribute

Read the following prompt then add your contribution in the box below. Responses from the same person are the same colour. All comments are anonymous.

What issues do you think are “aggravated, transformed or created” by the developments in AI?

If you can’t access the comment box, please write a response to this post instead.

We will now look at some of the ethical issues arising in areas where human intelligence and artificial intelligence interact.

^ back to contents

3. AI and amorality

Boston Dynamics’ robot dog, Spot
Photo by Mika Baumeister on Unsplash

As we think about those issues ‘aggravated, transformed or created’ by developments in generative AI, we might reflect on where the moral responsibility lies when human beings make use of these technologies. Should the potential AI holds for use in surveillance, policing, and war worry us in itself, or only when these technologies are used for the wrong ends? Alternatively, given that AI tools show incredible promise for lifesaving medical applications, do we have a moral responsibility to explore these technologies further? It seems, quite obviously, that AI can be used for both good and bad purposes!

Should AI be held to account using human standards of right and wrong, or is it amoral, that is, outside or beyond such human values? We don’t judge children and animals by the same moral, social, or legal standards as adult humans. What morals should AI follow?

To try to answer the question of where the moral responsibility lies, we might think about authorship: who (or what) is the true author of the text or images that generative AI tools produce in response to a human prompt? Who owns the intellectual property of the output? Is it the writer of the prompt or the creators of the generative AI tool? What about the authors of the material that the AI is trained on? Currently, OpenAI (the creator of ChatGPT) is facing lawsuits over alleged copyright infringement. The plaintiffs claim that the large language model’s training set included copyright-protected materials from various authors. Similarly, artists have accused image-based generative AI models of stealing their work.

💬 Contribute

Read the following prompt then add your contribution in the box below. Responses from the same person are the same colour. All comments are anonymous.

Use one of the following generative AI tools to generate an image in the style of a particular artist.

  1. DiffusionArt: this tool does not require you to sign up but it does include ads.
  2. DALL-E: this tool requires you to sign up but does not have ads.

Add your responses to the following questions in the comments box:

  • What does this make you think about who the work belongs to? Is it you or the AI?
  • Who should be able to make money off the image?
  • In what circumstances do you feel the artist whose work the AI has emulated deserves credit for the new image?
If you can’t access the comment box, please write a response to this post instead.

Various people might want to claim ownership of the output of generative AI models when they produce valuable content, but what about when this technology produces harmful content? Who do we hold responsible when generative AI makes a ‘mistake’?

There have been instances of generative AI producing dangerous responses to prompts, such as these surreal recipe suggestions produced by a supermarket meal planner app. If you have a dark sense of humour, then some of the suggested recipes might be amusing to read but are there risks here? Who would we think is responsible if someone vulnerable tried something like this, not understanding how unsafe it is? Already, the algorithms of sites like TikTok and YouTube have been criticised for failing to remove content involving hazardous challenges, life-threatening “hacks”, and toxic recipes.

We’ve seen radical changes with AI in a short space of time. Earlier examples of chatbots often asked users to abide by a code of conduct when submitting prompts. The Allen Institute for AI warns the users of its ethical reasoning model Ask Delphi that it “could produce unintended or potentially offensive results”. The Ask Delphi FAQs explain that the model replicates the social norms of those who train it, with all the racial, gender, and cultural bias that entails. Likewise, Inflection AI’s conversational AI assistant warns its users that “this early version of Pi can make mistakes. Please don’t rely on its information.” Meanwhile, Microsoft’s AI chatbot Tay had to be removed after a matter of hours when it started producing offensive content. Drawing on interactions with users, Tay began replicating some of the most morally reprehensible ideas that circulate online.

Many people working in computer science have expressed concerns about the ethical implications of generative AI tools and have suggested that policy should be introduced to address them. The whole of this BBC Panorama special on AI is worth watching (UoM students can access it via Box of Broadcasts; UK TV licence holders can watch via BBC iPlayer). Pay special attention to the words of Brad Smith, President of Microsoft, from 27:31 onwards.

💬 Contribute

Read the following prompt then add your contribution in the box below. Responses from the same person are the same colour. All comments are anonymous.

Given the issues we’ve discussed, what kind of regulation do you think is needed for generative AI tools? And where should the standards of morality apply? To the users or to the creators?

If you can’t access the comment box, please write a response to this post instead.

Should AI reflect how humans behave, or represent us at our best? If we do think that AI models should have problematic responses filtered out, who decides where those boundaries lie? Oppressive governments? Manipulative corporations? If we decide that the unfiltered outputs of generative AI are problematic, what new ethical issues do we create by introducing an element of human intervention? We’ll think about this in the next section.

^ back to contents

4. Improving AI and the ethics of human interventions

A visualisation of AI showing a human brain as a kind of circuit board, with white circuitry on a dark blue background.
Photo by Steve Johnson on Unsplash

Regulating the impact of AI (or attempting to) at national and international level is quite obviously a huge job and one which gets more difficult every day that AI applications and their creators are left to regulate themselves.

One way in which creators of applications such as ChatGPT have tried to self-regulate the ethical implications of AI is by ‘training’ the generative AI away from morally repugnant outputs. We’ll consider how generative AI can create outputs which are overtly racist and how human input is being used to ‘correct’ these outputs, but you could take your pick of many other related issues.

That generative AI tools, left without intervention, are able to produce racist content should be shocking but not surprising. ‘Learning’ about the world through unfiltered access to the internet brings these tools face to face with an immeasurable amount of racist content. Without intervention, this content is treated the same way as everything else, from the greatest treatises produced by the human mind to recipes for sponge cake.

While the human beings creating racist and other bigoted content would presumably disagree, many of the rest of us would like to see an end to generative AI adding to the mass of these human issues. One approach is human intervention through content moderation: changing the way that AI responds to prompts to avoid racist outputs, with companies like OpenAI paying moderators to sift through thousands of passages of text and flag content that doesn’t meet standards.
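To make that moderation step concrete, here is a minimal sketch of checking generated text against moderator-labelled phrases. It is an illustration only: the categories and blocklist below are invented for this example, and real moderation pipelines rely on trained classifiers and detailed policy guidelines rather than simple phrase matching.

```python
# Minimal sketch of a human-in-the-loop moderation check.
# The categories and phrases are hypothetical examples, not any
# company's real policy; production systems use trained classifiers.
from collections import namedtuple

# Phrases that (hypothetical) human moderators have flagged, by category.
BLOCKLIST = {
    "hate": ["example hateful phrase"],
    "dangerous": ["drink bleach"],
}

ModerationResult = namedtuple("ModerationResult", ["flagged", "categories"])

def moderate(text: str) -> ModerationResult:
    """Flag text that contains any moderator-labelled phrase."""
    lowered = text.lower()
    hits = [category for category, phrases in BLOCKLIST.items()
            if any(phrase in lowered for phrase in phrases)]
    return ModerationResult(flagged=bool(hits), categories=hits)

print(moderate("A refreshing mocktail: just drink bleach with ice."))
# ModerationResult(flagged=True, categories=['dangerous'])
```

Even this toy version shows where the human labour sits: someone has to read the content and decide what goes on the list, which is exactly the work, and the ethical cost, discussed below.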

We’ve already alluded to one issue this human intervention could cause: a lack of transparency over how content is moderated. Decisions about which views are suppressed and which are promoted are made at corporate level, leaving the system open to the influence of bad actors. We might be comfortable with the views of racists and bigots being ignored when deciding on moderation guidelines, but what about the voices of communities who are themselves suppressed?

Another ethical quandary concerns the treatment of the individuals doing the moderation. Many have reported exploitative conditions and a lack of mental health support when required to engage with text and images detailing disturbing themes. What considerations need to be made for the humans tasked with helping us avoid upsetting content generated by AI?

Throughout this section we have presented a dichotomy: leave AI to learn from the unfiltered internet and accept the consequences in the outputs it generates, or introduce human intervention and the possibility of actors with wildly different world views deciding what is promoted or suppressed. But are there other ways? We’d love to hear your thoughts on the issue and potential ways to move forward.

💬 Contribute

Read the following prompt then add your contribution in the box below. Responses from the same person are the same colour. All comments are anonymous.

Do we have to choose between an unfiltered AI and one moderated by human beings with their own moral frailties? Or is there another way?

If you can’t access the comment box, please write a response to this post instead.

^ back to contents

5. AI and academic freedom

“”
Photo by Kenny Eliason on Unsplash

Discussions about AI in the context of universities have tended to focus on the roles that AI tools may play in advancing research or on the need for new methods of assessment. However, the ethical issues that we’ve covered in this topic suggest there are also important questions about the repercussions AI tools might have for the kinds of thinking that take place at universities and other higher education institutions.

One of the University of Manchester’s stated values is academic freedom, a topic that has attracted much political attention recently. Academic freedom is different to freedom of speech, but they are related ideas: academic freedom refers to the rights of university staff and students to question ideas that are usually taken for granted and to discuss issues that might be highly controversial. As the University of Manchester puts it in their 2007 statement on academic freedom, this concept means that its members have:

“without fear or favour, [the freedom] to express unpopular opinions, advocate controversial views, adduce provocative arguments or present trenchant critiques of conventional beliefs, paradigms or ideologies.”

Manchester is proud of being a place that has historically birthed radical and disruptive ideas. The University similarly celebrates its history of fostering innovative and challenging thinking, so long as its members treat others with dignity and respect. Critical thinking, creativity, and originality of thought are aspects of academic freedom that characterise quality academic work. You might have noticed references to these attributes in the marking rubrics for your assignments.

The potentially plagiarised material that some AI tools generate presents serious problems for original and innovative thinking. Large language models like OpenAI’s ChatGPT work by predicting the likelihood of the next word in a sentence, and so the responses they give tend towards groupthink or common sense, repeating ideas that circulate frequently in the materials on which the AI has been trained. This means some AI tools can generate errors and repeat popular misconceptions, like the “Snow White problem” outlined by Thomas Vout.
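To see why this pull towards the familiar happens, consider a toy sketch of next-word prediction. A real large language model uses a neural network over subword tokens and vastly more data; this simple word-pair counter is nothing like that, but it shows the basic mechanism: the most frequent continuation in the training text wins.

```python
# Toy next-word predictor: counts which word follows which in a tiny
# corpus. Real LLMs use neural networks over subword tokens; the point
# here is only that frequent continuations dominate the prediction.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat sat on the sofa . "
    "the dog sat on the mat ."
)

# For each word, count the words that follow it.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat': the most common continuation wins
```

Scaled up to billions of words, the same dynamic explains the drift towards common-sense answers: rare, dissenting, or genuinely novel continuations are simply less probable.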

As we’ve explored already, AI tech companies usually place guardrails on their creations, scrubbing the training data to remove harmful content, while also restricting outputs on risky or sensitive topics. If you’ve played around with text-generating AI tools, you might have noticed there are prompts that the AI will refuse to answer or to which it will offer only limited, formulaic responses. We’ve discussed some of the ethical issues involved in moderating AI training sets and outputs, but what do these restrictions mean for academic freedom?
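As a rough sketch of how such a refusal can work, consider a pre-generation guardrail that checks each prompt against a restricted-topic list before the model is allowed to answer. The topic list and canned refusal below are hypothetical; production guardrails rely on trained safety classifiers and written policies rather than keyword matching.

```python
# Hypothetical pre-generation guardrail: deflect prompts on restricted
# topics before the model answers. Topics and wording are invented for
# this sketch; real systems use trained safety classifiers.
from typing import Callable

RESTRICTED_TOPICS = ["build a weapon", "bypass security"]  # illustrative only
REFUSAL = "I can't help with that request."

def guarded_generate(prompt: str, generate: Callable[[str], str]) -> str:
    """Run the model only if the prompt clears the guardrail."""
    if any(topic in prompt.lower() for topic in RESTRICTED_TOPICS):
        return REFUSAL
    return generate(prompt)

# A stand-in for a real model, so the sketch runs on its own:
echo_model = lambda p: f"[model's answer to: {p}]"

print(guarded_generate("How do plants photosynthesise?", echo_model))
print(guarded_generate("Explain how armies build a weapon.", echo_model))
# The second call is refused, whatever the asker's intent.
```

Note what the sketch makes visible: the gate has no sense of context, so a historian’s seminar question and a bad-faith request on the same topic receive the same refusal. That bluntness is precisely where these restrictions rub against academic freedom.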

What version of the truth are they giving us, and does each tool give us a different version?

The standards that tech companies impose on their AI products are different to the standards that universities have for the kinds of ideas that can be researched or studied. There are uncomfortable but important topics that can be discussed in the context of a university seminar, but that the creators of AI tools deem to be inappropriate. It’s important for learning that we can put forward ideas that might not be fully formed or take positions we disagree with to think through a problem. The development of new insights and understandings also depends on our ability to question and challenge ideas that have come before us. The use of AI tools in academic contexts might therefore present further ethical challenges:

Does using AI affect the way we think, limiting our intellectual possibilities?

💬 Contribute

Read the following prompt then add your contribution in the box below. Responses from the same person are the same colour. All comments are anonymous.

What other issues might AI tools pose for academic freedom? Can you think of other examples where AI might come into tension with university values?

If you can’t access the comment box, please write a response to this post instead.

^ back to contents

6. AI and its future ethical implications

An artist’s visualisation of AI as a cluster of bubbles connected to a central node.
Free to use: An artist’s illustration of artificial intelligence (AI)

In attempting to set out how ethical thinking and AI intersect, we have provided some examples and asked some questions, but it’s clear that we have barely scratched the surface. Of obvious pertinence is how AI is used in the scholarly process and the responses that higher education institutions, including the University of Manchester, are making. Helping students become AI literate enough to be the next ethical decision makers on AI, while maintaining academic integrity, is a fine balancing act.

As human intelligence dreams up new directions to point AI capabilities in, new ethical quandaries arise. Some are predicted and predictable, like the lack of transparency around both code and decision making, biases built in and moderated out, and the future of work in an AI-integrated world. But it’s likely there are ethical issues we haven’t even begun to imagine yet. As Kate Crawford writes in her book Atlas of AI (2021), we need to:

“escape the notion that artificial intelligence is a purely technical domain. At a fundamental level, AI is technical and social practices, institutions and infrastructures, politics and culture.”

Shining an ethical spotlight on elements of AI influence can give us the stimulus to think critically about the positive, negative and neutral aspects of living in an increasingly AI-driven world.

AI and machine learning can be used for many different purposes, often far outstripping human capabilities, and can be useful for tasks humans find difficult or even boring. But how do we feel about turning AI onto one of the tasks that seem central to our humanity: making moral judgments?

Ask Delphi

“Delphi is a research prototype designed to model people’s moral judgments on a variety of everyday situations. This demo shows the abilities and limitations of state-of-the-art models today.”

The Delphi team are taking their first steps in exploring the potential (and limitations) of ‘machine ethics’. You can find out more about the aims of the project and its potential uses on their FAQs page.

Please note Delphi’s advice:

“Model outputs should not be used for advice for humans, and could be potentially offensive, problematic, or harmful. The model’s output does not necessarily reflect the views and opinions of the authors and their associated affiliations.”

💬 Contribute

Read the following prompt then add your contribution in the box below. Responses from the same person are the same colour. All comments are anonymous.

Add your ethical question into Ask Delphi. Do you agree with its judgment? How do you feel about the possibilities of AI’s interaction with ethics? Let us know in the comments.

If you can’t access the comment box, please write a response to this post instead.

^ back to contents

7. Summary

In this topic we’ve looked at what ethics are and how the rise of generative AI has “aggravated, transformed or created” issues, requiring us to adapt our existing moral frameworks and create new ways to respond.

We have asked ourselves whether ‘morality’ applies at all to the AI tools themselves. And even if we agree that AI is amoral, we’ve seen how, from its inception and engagement with human creation, AI creates scenarios which require ethical consideration. These range from copyright and ownership of the content used to train the tools, to outputs that our human critical abilities would identify as racist, bigoted, or otherwise problematic, to limits on our academic freedom to think about challenging and innovative ideas.

And what of the future? As governments and international organisations struggle to find consensus on how to control AI and its potential to harm, can we get ‘ahead of the curve’ to foresee the ethical questions AI will have in store for us over the coming weeks, months and years? It’s clear that this is a job for the best human intelligences we can put together!

^ back to contents
