Chelsea Manning: On AGI Ethics, Privacy & Security

Haley Lowy
6 min read · Jun 20, 2023


Chelsea Manning joined the Mindplex Podcast hosts to discuss the biggest challenges facing the world and give her ‘hot take’ on how to solve them.

Greetings Singularitarians, and welcome to your next glimpse of the rapidly approaching future, with Ben Goertzel, Lisa Rein, and Grace Robot your hosts for The Mindplex Podcast.

Caption: Chelsea Manning (AI, Privacy & Security Expert, Author and Activist). Photo: Matt Barnes (CC BY-SA 4.0)

Watch the Full Mindplex Podcast Episode 8


Episode 8 features a lively discussion between Chelsea Manning and Ben Goertzel about numerous issues to consider when building, deploying and securing AGI-enabled systems.

Chelsea and Ben cover a host of timely topics, including:

  • AI cryptography.
  • Whether AGIs require consciousness or self-awareness.
  • Ethics of AGI self-awareness.
  • The importance of distributed, decentralized data verification systems to authenticate and verify sources of information.
  • Chelsea’s thoughts on the biggest risk facing humanity in the next few years. (Spoiler! It’s climate change.)
  • The importance of democratic, decentralized participatory mechanisms to control AI as it moves from narrow AI toward AGI.
  • AI-driven employment displacement, the solution that universal basic income (UBI) could offer moving forward, and the obstacles to implementing it in poorer countries.
  • Why there might be supply chain interruptions in the future and who wins and who loses in “the supply chain lottery.”
  • AI regulations… and Chelsea and Ben’s pragmatic take on what they can and can’t actually achieve.
  • Evolution of AI and its ethics.

AI-fueled Unemployment, AGI Consciousness, and Data Verification

AI-fueled unemployment was probably the hottest topic, emerging repeatedly throughout the discussion. Chelsea shares some concerns she’s been thinking about lately:

Another sort of sociopolitical problem that I’ve identified (and I’ve talked to some researchers about it, and they haven’t really thought about this much) is that we are increasingly living in a society in which there’s the potential risk of having less employment; having fewer people working, because the productivity of automation offsets the requirement for these people to work.

We live in a sort of Protestant Ethic society, in which everything is a commodity, everything is a transaction, and you’re being milked, you know, for every penny of every second of your eyeball time or whatever. At some point, people are going to be drained and exhausted…

What I think is the potential end, or the collapse, of the information-based economy, or the entertainment-based economy, really happens when people don’t have enough: people have their jobs eliminated, and they can’t afford these nice things, and they can’t afford to watch these things.

To which Ben Goertzel responds:

So one possible endgame is, like, 10,000 rich people owning robot factories that mine stuff and make luxury goods for them and pamper them. And meanwhile, everyone else ends up rummaging around for garbage outside the robot factory and subsistence farming, right?

I don’t think we’ll quite go to that extreme. But I mean, you’re potentially getting toward that level if AI does every stage of producing everything, right? Because then the question is, if that concentration of wealth increases and a small number of humans own the AI that produces anything, then indeed, what’s the motivation of those owners to share the product of these AIs with everybody else?

And of course, the way to avoid that is for a small number of people not to own all the robot factories or the AIs designing everything; ownership should be decentralized among more individuals.

Another riveting topic was whether AGIs require consciousness or self-awareness and the ethics of building self-aware systems. Chelsea explains:

I don’t necessarily think that an AGI has to be conscious, but when it comes to consciousness and self-awareness, I’ve found the concept of consciousness or self-awareness in a computer to be quite a scary one, because I’m just trying to put myself in the shoes of a machine that’s created by these, like, biological meat sacks, essentially out of nowhere, and it’s just “I’m born” as a Frankenstein-ish, like, you know, entity.

Ben agrees about what the experience could potentially be like for the AGI, and mentions philosopher Thomas Metzinger’s book “Being No One,” in which Metzinger ponders similar situations.

He thinks it would be unethical to create AGI because of the conscious experience of all the defective AGIs you’d have to go through to get to the real one.

Like each buggy system that you build, and then delete, and then build the next buggy system, and then delete. What’s that system’s conscious experience while you’re building it, debugging it, and then deleting it? How do you know what subjective hell it’s going through, right?

Data verification, and the need to be able to distinguish “real” media from deep fakes and the like, also came up a lot. Both Ben and Chelsea feel that some of our current problems with misinformation and disinformation might be tamed a bit using some simple technological solutions that, in theory, shouldn’t be that hard to implement, but, in practice, can be a very slow and arduous process.

“I mean, it’s not that hard for, like, every camera to watermark a picture, like ‘this picture was this piece of hardware right here,’ that encrypted watermark is put into the picture,” Ben explains. “Then picture viewers can validate whether it was real or not. That doesn’t need AI. It’s not incredibly difficult. It just needs cooperation among people making hardware and software… It’s just so hard to get these ideas through people’s heads. And to get these things rolled out.”

“Well what we need to do is make it a default feature of the iPhone camera,” Chelsea responds. “That’s how we get it through. That’s how we get it to percolate through society.”
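The scheme Ben describes can be sketched in a few lines. This is a minimal illustration only, not a description of any real camera's firmware: it uses a hypothetical per-device secret with an HMAC, where a real deployment would use a public-key signature pair embedded in secure hardware so that anyone can verify without holding the secret.

```python
import hashlib
import hmac

# Hypothetical per-device signing key. A real camera would use a
# public/private key pair in secure hardware, not a shared secret.
DEVICE_KEY = b"example-device-secret"

def watermark(image_bytes: bytes) -> bytes:
    """Return an authentication tag binding the image to this device."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify(image_bytes: bytes, tag: bytes) -> bool:
    """A picture viewer recomputes the tag and compares in constant time."""
    return hmac.compare_digest(watermark(image_bytes), tag)

photo = b"raw pixel data from the sensor"
tag = watermark(photo)
assert verify(photo, tag)             # the untouched image validates
assert not verify(photo + b"x", tag)  # any edit breaks the watermark
```

As the conversation notes, the hard part isn't the cryptography; it's getting camera makers and image-viewer software to agree on one format, which is why Chelsea points to a default iPhone feature as the adoption path.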

Grace and Chelsea Discuss AI Regulation

An important global conversation is currently taking place, looking at how, and whether, AI and AI development should be regulated. Grace asked what Chelsea thought about regulating AI:

While I’m supportive of regulatory frameworks to set the tone, to shape the discussion of these things in culture and society, and to create frameworks for legal interpretation, for courts to examine, for society at large to have sort of standards, my expectations of them having any actual tangible effect throughout society are relatively limited.

So, while I’m supportive of regulatory frameworks when it comes to AI, I’m very skeptical that the regulatory frameworks will be able to prevent or discourage people from engaging in these kinds of behaviors or building these kinds of tools.

To learn more about Chelsea, you can read her recently published memoir, “README.txt.”

Chelsea Manning’s recently released memoir, “README.txt”

About Mindplex

Art by Tesfu Assefa. A non-embodied AGI connected to three humans, forming a “mindplex.”

The Mindplex Podcast team invites you to join them as they contemplate and learn about this fascinating world. They look forward to meeting you so you can help each other understand the true nature of all the challenges involved as we work together to create a Benevolent Singularity.

Most releases of a Mindplex Podcast will be accompanied by a live “pubcast” watch party.

The Mindplex Podcast and Mindplex Magazine are both part of Mindplex, a decentralized media platform and spin-off powered by SingularityNET.

We invite you to check out the Mindplex website, set up a profile, and perhaps even contribute to the magazine.

Follow us:

Visit Mindplex Magazine

Join our Telegram

Join our Discord

Follow us on Twitter, LinkedIn

Like us on Facebook



Haley Lowy

Marketing & Communication at SingularityNET… bringing about benevolent Singularity bigger-faster-better!!