AI Safety in 2023

Waleed Ammar
Holistic Intelligence & The Global Good
7 min read · Nov 17, 2023

Was Google too late? Was OpenAI too early?

For generations, business schools will teach the rise of OpenAI as an example of how critical market timing is for success in business. In Feb 2023, Google launched the PaLM API as part of its cloud offering, featuring state-of-the-art large language models (LLMs) on the cloud. That was only three months after OpenAI launched ChatGPT for general availability, which earned OpenAI a lot of credibility, loyalty and subscriptions: subscriptions that Google, Microsoft and Amazon would otherwise have been getting if they had a better understanding of the AI market outside their respective bubbles.

In my opinion, Google was too late for business and OpenAI was too early for safe business.

Photo by Pierre Bamin on Unsplash

But do we really know anything about AI safety?

Before 2023, most scientists and technologists refused to take time out of their busy schedules to consider the societal harm of the products they built with AI. With ChatGPT's general-availability launch in Dec 2022, they started to understand the extent to which AI is hurting our society. That realization was an important step forward in promoting safety in AI products.

SURPRISE! For many years, a thriving community of researchers dedicated the best part of their waking hours to:

  1. Understanding the biases in machine learning models,
  2. Understanding the risks they pose to our society, and
  3. Assessing proposals for effective mitigation strategies.

We just need to take these proposals into account when we build AI products. How hard could it be?!

Photo by Patrick Tomasso on Unsplash

Who can you trust?

Should we ONLY pay attention to experts who built LLMs? Should we ONLY pay attention to experts who care about safety?

Because the incentive structures of these two groups are so different, they provide different perspectives, and it's essential to pay attention to both. This is A Good Thing, because we need the balance. We need the dude who knows how to build the chair and the dudette who knows how the chair should be used safely for the benefit of the person sitting in it. Only then can we build a cost-effective and usable chair. Only then can we build an AI product which is both efficient and safe.

Building successful and safe AI products is about maintaining a good balance: the balance between short-term business interests and the long-term impact on users' well-being.

Photo by micheile henderson on Unsplash

How to bury your head in the sand?

As much as I love promoting safety in AI products, there is a genre of AI safety I would caution against: the discourse on safety from AI. While it's a powerful statement to make, I don't think there's a future in which our society abolishes LLMs when they are practically everywhere. I am sorry I need to say this, but safety-from-AI conversations are so impractical that even the rest of the safe-AI community doesn't take them seriously. AI is already everywhere, on every device, in every country. Yes, AI has introduced new harms to our society, and will introduce more, but it is not going anywhere because there is a sustainable need for it. The same was true of social media circa 2007, of the Web circa 2000, and of games throughout history, since at least circa 2000 BCE.

Photo by NEOM on Unsplash

It’s hard to read the entire literature.

First, let me share a dad joke.

“Three friends went to a bar: a rabbi, an engineer and a policeman.

The bartender asked: ‘What would you like to drink?’

The rabbi said ‘Drinking is against my values. I’ll take water.’

The engineer said ‘I’m driving. I’ll take two margaritas and a cup of water, in this order.’

The policeman arrested the bartender for serving the engineer alcohol without asking for an ID.”

So much has been written on how to mitigate AI risks. An attempt to cover this topic comprehensively would quickly turn into a book chapter, and I only have a few hours to write this article. Instead, I’ll enumerate three mitigation strategies which I believe are worth pursuing.

Values-oriented methods. Ethics experts can describe how ethical AI may look, but they cannot build it. Therefore, we need to work on imparting the values behind ethical AI to AI experts who may not necessarily appreciate those values, because all humans have their blind spots, and it takes the whole village to build a baby product powered by AI. In this line of work, authors propose adopting methods that have proved effective for raising awareness in other fields such as medicine and engineering.

Design-oriented methods. By far my favorite type of work in AI safety is methods that answer the question: how do we design AI models that address the biases that will arise? The bias is often analyzed and traced back to the distributional shift between the data used to train the model and the data fed to the model at test time (or after the product launches). This line of work is well aligned with user-centric design principles.
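
To make this concrete, here is a minimal sketch, in Python, of one way such a design-oriented check might be wired into a product: compare the distribution each feature had in training with its distribution in live traffic, and flag the features that drifted. The feature names, the threshold, and the choice of test (a two-sample Kolmogorov-Smirnov test) are my own illustrative assumptions, not a prescription from the literature.

```python
# A minimal sketch of a design-oriented mitigation: flag features whose live
# (post-launch) distribution has drifted away from the training distribution,
# so the team can investigate before the shift turns into user-facing bias.
# Feature names and the threshold below are hypothetical.
import numpy as np
from scipy.stats import ks_2samp


def detect_feature_drift(train_features, live_features, feature_names,
                         p_threshold=0.01):
    """Return names of features whose live distribution differs significantly
    from the training distribution (two-sample Kolmogorov-Smirnov test)."""
    drifted = []
    for i, name in enumerate(feature_names):
        _, p_value = ks_2samp(train_features[:, i], live_features[:, i])
        if p_value < p_threshold:
            drifted.append(name)
    return drifted


# Toy example: one stable feature and one feature whose mean shifted after launch.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(10_000, 2))
live = np.column_stack([rng.normal(0.0, 1.0, 10_000),    # stable
                        rng.normal(0.8, 1.0, 10_000)])   # shifted
print(detect_feature_drift(train, live, ["reading_level", "session_length"]))
# Expected output: ['session_length']
```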

Police-oriented methods. Many well-respected, well-intentioned (if misguided) researchers and technologists declared their mission to be "accelerating scientific breakthroughs" and believed that it's the path to the "common good". However, that ultimately resulted in AI products causing tangible harm to humans, including loss of life from AI-powered weapons. Causing harm is often, but not always, punishable by law. The law is certainly evolving to accommodate corner cases revealed by the wide adoption of AI products, although many legal experts don't think the necessary reform is huge. The idea of AI models becoming legal entities that can decide whether or not to cause harm is (still) considered ridiculous in most quarters. The idea that the legal entity which created an AI-powered product is responsible for the harm it causes is gaining more traction, both when the inflicted harm is intended and when it is accidental. Cybersecurity appears to be the most natural specialty in tech to enforce these laws.

It takes a village to raise a safe AI-powered baby product.

Photo by John Cameron on Unsplash

One holistic procedure

In 2020, a holistic procedure for building safe AI products was defined and codified in an international standard known as IEEE 7010. Here's the gist for folks building AI-powered products in 2023:

1. Define well-being KPIs to complement business KPIs.

2. Make it easy to monitor and analyze performance against all KPIs.

3. Rinse and repeat.

In addition to being holistic, this strategy is also adaptable, in that it provides a mechanism for gradual and sustainable adoption of different approaches to safe AI. For the sake of argument, let's say you're building an AI-powered product for learners. You may be tempted to focus exclusively on business KPIs, e.g., revenue, traffic, MAU, DAU/MAU. These are all great, but they don't necessarily reflect the well-being of your users. In addition to those business-focused KPIs, you may want to add well-being KPIs. A few examples (with a small monitoring sketch after the list):

  • percentage of team members participating in relevant training (values-oriented)
  • hourly rate of learners' progression (design-oriented)
  • number of blocked and audited fraud attempts (police-oriented)
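
Here is a minimal sketch, in Python, of what tracking both kinds of KPIs side by side could look like. All KPI names, targets, and values below are hypothetical; a real product would feed these from its analytics pipeline rather than record them by hand.

```python
# A hypothetical business-plus-well-being KPI tracker, in the spirit of
# IEEE 7010: define the KPIs, record observations, and review whatever misses
# its target on every iteration ("rinse and repeat").
from dataclasses import dataclass, field


@dataclass
class KPI:
    name: str
    kind: str            # "business" or "well-being"
    target: float        # the level we aim for
    value: float = 0.0   # latest observed value


@dataclass
class KPIDashboard:
    kpis: dict = field(default_factory=dict)

    def register(self, kpi):
        self.kpis[kpi.name] = kpi

    def record(self, name, value):
        self.kpis[name].value = value

    def below_target(self):
        """KPIs, business or well-being alike, that currently miss their target."""
        return [k for k in self.kpis.values() if k.value < k.target]


dashboard = KPIDashboard()
dashboard.register(KPI("daily_active_users", "business", target=50_000))
dashboard.register(KPI("training_participation_rate", "well-being", target=0.9))
dashboard.register(KPI("learner_hourly_progression", "well-being", target=1.0))
dashboard.register(KPI("blocked_fraud_attempts", "well-being", target=100))

dashboard.record("daily_active_users", 62_000)
dashboard.record("training_participation_rate", 0.75)
for kpi in dashboard.below_target():
    print(f"Needs attention: {kpi.name} = {kpi.value} (target {kpi.target})")
```

The data structure is not the point; the point is that well-being KPIs sit in the same loop, with the same visibility, as the business KPIs.
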
Photo by Austin Distel on Unsplash

AI insurance is coming.

Now, we all know what happens when a bunch of people need something (i.e., to reduce their liability) but don't have the expertise to do it themselves (e.g., because they don't have safety experts or AI experts on the team). A marketplace is created, which can reasonably be called the AI insurance marketplace. An AI insurance policy will define certain requirements, e.g., a particular interpretation of IEEE 7010, which must be followed for an AI-powered business to be covered by the policy, up to a certain limit. I haven't seen any specialized AI insurance companies yet, although the idea was proposed at least as early as 2020. It's a tough policy to define because there is not enough data, which is why data around legal allegations in the AI space is more precious than gold.

Photo by krakenimages on Unsplash

The big question.

A few days ago, in Nov 2023, Sam Altman announced that OpenAI will cover the legal costs of businesses built on its platform if they face copyright-infringement claims over its outputs. What kind of AI insurance policy enables OpenAI to make such a bold promise?
