When AGI Goes Wrong: Who Pays the Price?

Dennis Stevens, Ed.D.
Published in HEGEMONACO · May 12, 2024


The drive to create “trustworthy AI” is gaining momentum. Governments, industry, and civil society are all rushing to define ethical guidelines for AI development and deployment. The EU’s “Ethics Guidelines for Trustworthy AI” is a prominent example, and President Biden’s Executive Order establishes standards for AI safety and security, among other priorities.

But these well-intentioned efforts face a growing wave of criticism. Critics like Thomas Metzinger argue that “trustworthy AI” is a meaningless concept: only humans, not machines, can be trustworthy. This raises a crucial question:

If humans are ultimately in charge of AI, should companies that develop and deploy Artificial General Intelligence (AGI) systems be held liable when those systems cause harm?

This question cuts to the heart of AI governance. It forces us to grapple with fundamental issues of power, responsibility, and accountability in a world increasingly shaped by intelligent machines.

Rejecting Technological Determinism

It is critical to frame the above question in a way that rejects technological determinism, the idea that technology shapes society in inevitable ways. We are not passive recipients of technology; we choose how to develop, deploy, and use it. This means that companies developing AGI are…


Dennis Stevens, Ed.D.

Navigating complexity with intellectual agility, I synthesize perspectives in art, technology, and politics to provide a view of transformative horizons.