Ethics of AI: is applied philosophy feasible or fictional?

Melissa Morano Aurigemma
Exceptional Capital
May 20, 2024

In 2020, when I started my master’s in philosophy, the ethical usage of AI was not exactly a common household topic. Now it is.

For context, let me start by revealing that during my undergraduate studies I took a number of courses with Professor Richard Sugarman, to whom I will always be indebted for introducing me to Emmanuel Levinas, a Lithuanian-French Jewish philosopher. That introduction turned out to be the start of an ongoing engagement with his work. I pursued graduate degrees in philosophy for fun, with a desire to study Levinas further and to challenge myself with a rigorous dissertation process. In the coming weeks I’ll share a deeper dive into Levinas himself and why I found, and continue to find, his work so relevant and engaging. For now, I’ll provide the briefest, most oversimplified intro to why I believe his work has a place in our world today.

My primary high-level takeaway from Levinas is that reading his work has opened up my viewpoint and helped me avoid a hyperfixation on the liminal. To comprehend his concept of what makes us uniquely human, ask where philosophy itself begins: the answer is ethics. A phrase you commonly come across in material on Levinas (translated into English, because he wrote in French) is “ethics as first philosophy.” Before anything else, there is ethics.

I think it can be easy for people to misunderstand what a practical application of philosophy might look like. Engaging with Levinas’ work does not require replicating his concepts, but neither is his work rendered useless simply because he could not have envisioned ChatGPT or drones delivering Amazon orders. Studying Levinas has given me the skillset to perform a more nuanced analysis of ethical responsibility. His work has offered me a framework for comprehending the relationality between Self and the Other in a way that I otherwise likely would not have.

How is this relevant to ethics in tech and the ethics of AI?

In my master’s thesis (TL;DR), I wrote about the idea of otherness in what I termed the Alius. Similar to the notion of an avatar or online persona (i.e., an alias), the Alius is the human mirroring of the self rather than a purely digital object. It mirrors the person it “represents,” which then mirrors to an audience, viewer, etc. Perhaps it is more a prism than a mirror; I am open to debate on this. It is possible that this type of philosophical inquiry is not particularly fruitful, but I found it a useful exercise in contemplating responsibility between Self and Other in shared digital spaces. And let’s say we agree that the Alius does “exist” in some form: where does it exist, and what is our responsibility to it? Do we risk objectifying people by failing to take up the burden of responsibility we may have to this digitalized otherness? Think of online bullying as one very easy example of how this could be pertinent today and in the future.

Technology, as we’ve seen, is very much Pandora’s box. Social media was widely used across multiple platforms first; only afterward came discussions about how it could be harmful and how to make it less so. ChatGPT launched and was immediately met with questions about how to regulate risks of plagiarism (both in terms of academic and professional work, and in terms of how the models are trained and on what material). None of it can go back in the box. We cannot regulate anything back into the box. Platforms, apps, and actions can be banned or fined, but in all likelihood the technology will re-emerge. The innovation doesn’t disappear.

Perhaps disappointingly, I do not have a hot take on how to regulate AI. I think there is typically fallout in some form as a natural byproduct of innovation (read The Structure of Scientific Revolutions by Thomas Kuhn for more on paradigm shifts). That may sound callous, but anything that is disruptive is just that: it will quite naturally create a disruptive effect. Mitigating such harm may be the best we can do, since preventing any and all harm is, as history demonstrates, quite unlikely.

Emmanuel Levinas (1905–1995)

Funnily enough, I chose Levinasian ethics, with a focus on AI ethics and the philosophy of information, for my MA (and ongoing PhD) because I suspected it would be a topic on everyone’s mind at some point in the future. That day has arrived, and after 14 years of engagement and obsession with the philosophical works of Levinas, I have nothing concrete to tell you. Which is, again, rather the point of philosophy. It is not a prescriptive discipline; strictly prescriptive philosophy gets into the realm of dogmatic and fanatical sentiment. All I can advocate for is that people consider that a practical and consistent application of philosophical thought can be valuable within dialogue about the ethics of AI, and about tech innovation and usage more broadly.

Of course, as a student of Levinasian philosophy, I would love to advocate, as I did in my MA thesis defense, that the development and deployment of new technologies necessarily involves an analysis of ethical responsibility. But in our systems of justice as we know them today, there is a high probability that such analysis looks like state regulation, not self-regulation, and state regulation has its downfalls and limitations. That said, while the integration of ethics and emerging technology has at times been tenuous, that should not discourage us from remaining open-minded about its potential for successful inclusion. One caveat/strong opinion that I need to state: Ayn Rand is an author, not a philosopher, and certain people in certain places have hyperfixated on her novels as doctrine for a bit too long. Let’s move on to actual philosophers.

Time, and I hope the inclusion of applied philosophy, will reveal if and how regulation and ethical considerations play out for rapidly emerging technologies, AI included.
