Put AI to work and hold it “accountable”

Alaeddine Abdessalem
4 min read · Mar 31, 2023

We’ve all seen LLMs’ capabilities in language modeling, in-context learning and task completion. Some research even suggests we’re getting closer to AGI.

These models (as of now) only give text back as output. Therefore, they are fundamentally suited to text generation and reasoning, not to taking actions.

However, some transformer LLMs are designed to take actions, and recent research has shown that even models trained purely for next-token prediction can be augmented with tools and made to act. This paradigm can be observed in LangChain and ChatGPT plugins (one of the methods for augmenting LLMs with tools is explained in this blog).
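To make the idea concrete, here is a minimal, purely illustrative sketch of a tool-use loop in Python: the model still only emits text, and a thin wrapper parses that text into tool calls and feeds the results back. The `call_llm` stub, the JSON protocol and the toy calculator are assumptions made for this example, not LangChain’s or ChatGPT plugins’ actual API.

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. a chat-completion request).
    # First turn: "decide" to use the calculator; once a tool result appears
    # in the prompt, read it back and answer.
    if "Tool result:" in prompt:
        result = prompt.rsplit("Tool result:", 1)[1].strip()
        return json.dumps({"answer": result})
    return json.dumps({"tool": "calculator", "input": "2 + 2"})

TOOLS = {
    # Toy calculator; fine for a demo, not something to expose to a real model.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(task: str) -> str:
    prompt = (
        f"Task: {task}\n"
        'Reply with JSON: {"tool": ..., "input": ...} or {"answer": ...}'
    )
    for _ in range(5):                        # cap the number of tool calls
        decision = json.loads(call_llm(prompt))
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])
        prompt += f"\nTool result: {result}"  # feed the tool output back as text
    return "gave up"

print(run_agent("What is 2 + 2?"))            # -> 4
```

Frameworks like LangChain and the ChatGPT plugin system wrap essentially this loop with proper prompting, output parsing and a catalogue of tools.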

Liability is key to reliability

Okay, so now we can augment language models with tools (a terminal, APIs, software, whatever tool it takes to get the job done for you).

So, all good? Shall we put it into action and lose our jobs?

Fortunately, not yet. Because LLMs hallucinate, no one would dare to give them enough power for high-risk tasks.

Imagine AI leaking your security credentials, refunding customers who tricked it into doing so, or sending your emails to the wrong person.

But let’s assume AGI is achieved, it has a lower error rate than humans, and we augment it with powerful tools: can we put it into action then?

Humans are still preferable and more reliable for certain tasks, not only because they are smart, but simply because they are liable.

Actually, as these thoughts crossed my mind, I was talking to GPT-4 (because who doesn’t 🤷‍♂️) and asked it: who is liable when a self-driving car hurts someone?

AI liability

According to GPT-4, this turns out to be one of the most intriguing and challenging topics in law. In fact, depending on the jurisdiction and local laws, either the AI developer, the AI owner (the software company) or the AI user can be held liable.

But in all those cases, liability concerns might stop any of these players from putting AI into action, for different reasons:

  • developers don’t want to be sued over unintentional bugs
  • AI companies might go bankrupt if they end up involved in thousands of lawsuits
  • AI users might abstain from using AI, and won’t trust it, if they can be sued for its mistakes

So this thought came to my mind: what if we made AI liable for its mistakes?

Sounds irrational, right? Well, I wanted to hear GPT-4’s opinion anyway. Here is what it said:

TL;DR: an interesting idea, but with several challenges:

  1. Legal personhood
  2. Enforcement
  3. Deterrence
  4. Moral responsibility

Well, fair enough. But I thought we could still address these challenges.

So I told GPT-4: suppose we assign legal personhood to AI systems, give them revenue for the services they offer, and make them subject to penalties when they make mistakes, which solves enforcement. To solve deterrence, we could design their training loss to minimize the penalties applied to them and update their weights online.
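To make the deterrence part of that prompt concrete, here is a small, entirely hypothetical sketch of what “minimize the penalties applied to you, online” could look like in code. The incident feed, the penalty schema and the policy-gradient-style surrogate loss are all assumptions invented for illustration; this is not a real or proposed liability mechanism.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(16, 4)                       # stand-in for a deployed policy / LLM head
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def incident_stream():
    # Hypothetical feed of (input features, action taken, real-world outcome) triples.
    # A single fabricated incident here so the sketch runs end to end.
    yield torch.randn(16), 2, {"fine": 120.0}

def penalty_for(outcome: dict) -> float:
    # Map a real-world outcome (e.g. a wrongful refund) to a fine, in currency units.
    return float(outcome.get("fine", 0.0))

for features, action, outcome in incident_stream():
    fine = penalty_for(outcome)
    if fine == 0.0:
        continue                               # no penalty, nothing to learn from
    log_probs = torch.log_softmax(model(features), dim=-1)
    # Policy-gradient-style surrogate: weight the log-probability of the penalized
    # action by the fine, so costlier mistakes push the policy away from it harder.
    loss = fine * log_probs[action]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                           # online update, right after the incident
```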

Back to the conversation. Here’s a summary of GPT-4’s response:

“Interesting and novel approach” to AI liability, but:

  • Moral responsibility is still not addressed
  • We would still need to determine how revenue is split between companies and AI systems
  • Implementing and enforcing such a loss-minimization scheme is a technical challenge
  • AI systems might end up optimizing for revenue instead of minimizing penalties
  • Incentives for AI developers and operators: such an approach could free developers from the obligation to build safe and ethical AI systems
  • Regulatory complexities, international coordination, ethical concerns over human rights, and many more issues appear as you tweak your prompt.

What I find impressive is how GPT-4 deems the approach novel, yet quickly catches flaws in it.

Politics is falling behind

Don’t get me wrong, I am not trying to start an AI rights movement or to promote such an idea.

All I’m saying is that we are clearly falling behind in terms of regulations, compared to the fast pace of AI development and the race for AGI.

Actually, let me quote Hugging Face’s CEO on the topic:

https://www.linkedin.com/posts/clementdelangue_how-much-time-before-the-first-us-secretary-activity-7043613013190266880-3a4q

Or, let me refer to the open letter calling for a pause on giant AI experiments, signed by some of the world’s top AI experts and company leaders. The letter states that we are in a “dangerous race” that can harm humanity, and that we should first focus on building regulatory authorities for AI, provenance, auditing and watermarking systems, and on solving liability for AI-caused harm.

Although there have been attempts to address AI regulation, like the EU AI Act and the EU liability rules for AI, we still lack a more robust legal framework, broader international coordination and real government commitment to the topic, in the same way humanity has tried to address nuclear weapons and global warming (efforts that are themselves still in progress).

LinkedIn: https://www.linkedin.com/in/alaeddine-abdessalem-549b65169/

Twitter: https://twitter.com/alaeddine_abd

Co-Author Aziz Belaweid: https://www.linkedin.com/in/mohamed-aziz-belaweid/
