AI in Alternate Realities: When the Norm Becomes the Exception

Ratiomachina · Published in Brass For Brain · 11 min read · Mar 14, 2024

Diving into the Normic Universe: A Quirky Spin on Assessing AI Risks Through the Lens of Possible Worlds

Introduction

In the rapidly evolving landscape of Artificial Intelligence (AI), the term “High-Risk AI Applications” garners significant attention, particularly within regulatory frameworks like the EU AI Act and Australia’s impending AI regulations. Despite its frequent usage, the definitions of “risk” and “high risk” remain elusive, leading to ambiguity in determining the risk level of AI applications.

Most references to high-risk applications of AI appear in regulatory instruments such as the EU AI Act and Australia’s upcoming AI regulation for high-risk applications. The Australian Government plans to take a risk-based approach to regulating artificial intelligence (AI), aiming to mitigate potential negative impacts by regulating AI in scenarios deemed high-risk, while allowing less risky AI applications to develop with minimal interference.

Although the Government acknowledges the concept of “high-risk” as defined in certain prominent regions, it has yet to establish a clear definition within the Australian context, noting the need for further exploration to accurately characterize high-risk AI.

It is anticipated that Australia’s legal framework may draw from the criteria outlined in the recent EU AI Act, which assesses risk based on the likelihood and impact of harm from AI use. This includes AI that functions as a safety component in products, systems employed in sectors like education and employment, and algorithms for risk assessment and pricing in healthcare and life insurance.

The challenge remains in establishing concrete principles to determine whether an AI application is to be considered high-risk.

Definitions First

For navigational purposes, it’s essential to revisit and clear up some definitions:

  1. Risk = an unwanted event that may or may not occur or, in the most common usage, the probability that such an unwanted event occurs.
  2. High Risk = a threshold is implied, beyond which the risk of an unwanted event occurring counts as “High”.
  3. AI System = An AI system is a machine or software that can perform tasks that typically require human intelligence. These tasks include reasoning, learning, problem-solving, perception, language understanding, and decision-making. AI systems can range from simple algorithms capable of basic data processing to complex neural networks that learn and adapt over time. They are designed to analyze data, identify patterns, make decisions, and execute actions autonomously or with minimal human intervention.

Take the EU AI Act, for example: it simply designates as high-risk those AI systems falling within listed uses such as education, employment, law enforcement and so on. This makes sense, as it is a very clear and pragmatic mechanism for determining the risk level of an AI system, driven predominantly by how the system is used.

We can restate this as:

High-Risk AI systems are those AI systems that, when used, negatively affect our safety or fundamental rights.

Philosophical Analysis of Risk

Many philosophers accept that, as well as having a right that others not harm us, we also have a right that others not subject us to a risk of harm. This is implied by the EU AI Act, i.e. we have a right against the imposition of high risks through the way AI systems impact our lives. For the moment I will use “others” to include AI systems, while acknowledging that this is an ontological error.

The idea, more precisely, is that we have a right not to be subjected to a high risk of harm: a right that others not act in such a way that the risk of our being harmed, as a result of their actions, exceeds a threshold t. This interpretation is known as the High-Risk Approach (HRA).

Therefore, if we think about high-risk AI systems, for example an employment screening AI system, such a system is deemed high risk because applicants are subjected to a high risk of being denied employment (a violation of a fundamental right) due to factors such as race or gender. If it were known that the AI system is biased, it is reasonable to assume some applicants would not bother applying. Since applicants may not have this knowledge a priori, it seems that applicants have a tolerance for risk which, if exceeded, results in adverse (unwanted) outcomes such as denied employment.

The AI use cases highlighted by the EU AI Act must have exceeded some risk threshold to be classified as high-risk in the first place.

Is this the right approach? This is what I’d like to explore next.

The High-Risk Approach Dilemma

A central challenge within this High-Risk Approach (HRA) is the concept of “distributed risk,” where the risk to a group is significant, although individual risks remain low.

Referring to the following example (McCarthy, "Rights, explanation and risks", pp. 213–214; the example is adapted from McKerlie, "Rights and risk", pp. 247–248; the fact that cases of distributed risk pose a potential problem for the High-Risk Thesis is observed by Railton, "Locke, stock and peril", pp. 209–210), which I have adapted to an AI situation:

Suppose an AI system, A, is designed to choose between two options for disposing of a large quantity of a toxic chemical.

First, A could choose to dump the chemical into a pond near a neighboring AI engineer, B.

Second, A could choose to dump the chemical into the river that flows through B's property, along which a million people live downstream.

The former option involves a high risk — say a one-in-a-thousand chance — that B will be exposed to a harmful quantity of the chemical. The latter option involves a very high risk that at least one of the people living downstream will be exposed to a harmful quantity of the chemical, even though the risk to each individual person is low — say a one-in-a-million chance.

If a one-in-a-thousand chance is above the threshold for high risk, then, according to the HRA, B's rights would be infringed if A selects to dump the chemical in the pond. If a one-in-a-million chance is below the threshold, then, as far as the High-Risk Approach is concerned, there is no person whose rights would be infringed if A selects to dump the chemical in the river. All else equal, then, dumping the chemical in the river would be the morally preferable option, as it involves no rights infringements.

But this conclusion seems suspicious — after all, selecting to dump the chemical in the river involves a much higher overall risk of harm. If there are a million people living downstream who are each subjected to a one-in-a-million risk of harm then, assuming these risks are independent, the risk of at least one person being harmed works out to approximately sixty-three percent.
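As a quick sanity check, that figure follows from the standard independence calculation, 1 − (1 − p)^n. Here is a minimal Python sketch using the probabilities from the example (the function name is mine, chosen for illustration):

```python
# Sanity check of the distributed-risk arithmetic: the probability that
# at least one of n independent people is harmed when each faces risk p.

def prob_at_least_one_harm(p: float, n: int) -> float:
    """P(at least one harm) = 1 - (1 - p)^n, assuming independent risks."""
    return 1 - (1 - p) ** n

# Pond option: one person at a one-in-a-thousand risk.
print(prob_at_least_one_harm(1e-3, 1))          # ≈ 0.001

# River option: a million people, each at a one-in-a-million risk.
print(prob_at_least_one_harm(1e-6, 1_000_000))  # ≈ 0.632, roughly 63%
```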

We need something else to be able to classify one action as morally impermissible even though a high-risk threshold t has not been exceeded.

The concept of a risk threshold, while useful for setting clear regulatory standards, may not fully capture the ethical nuances involved in scenarios where actions, although individually below a certain risk level and classified as not high-risk, cumulatively pose significant threats to a large group of people.

WWHOW (What Would Happen in Other Worlds)

Instead of focusing on the likelihood of events, this perspective considers the closeness of possible worlds in which various outcomes occur.

The thinking here might go as follows: the chemical could be dumped in the river and harm a given individual, and it would be just as normal for any other person to be harmed by the toxic chemical. Not much about the world would have to change for this to occur.

Another way to put it is, some of the most normal possible worlds in which the AI system selects to dump the chemical in the river will be worlds in which an individual is harmed.

Clearly, the notion of normalcy that is being invoked here is distinct from the idea of statistical frequency — a person’s harm is not an outcome that would frequently arise from this action (1-in-a-million as mentioned previously), were it repeated over and over. Rather, this outcome is normal in the same sense that it would be normal for, say, “10, 7, 13, 8, 25, 19” to be the winning lottery numbers — some sequence of numbers has to come up, and this sequence would require no more explanation than any other.

There are people living downriver, the AI selects to dump the chemical in the water, and an individual is harmed. This sequence of events would require no more explanation than any sequence in which no one is harmed.

Therefore, it is not always the case that the higher the probability, the greater the risk, and vice versa.

  1. According to the probabilistic account of risk, the risk that a particular outcome would result from a given action depends on how probable it is that the outcome would result from the action.
  2. According to the normic account, the risk that a particular outcome would result from a given action depends on how abnormal it would be for the outcome to result from the action.

The notion of normalcy at work here is linked with the need for explanation — an outcome is abnormal to the extent that it requires special explanation, in terms of factors that are additional to the action.

On the probabilistic account, when the AI system selects to dump the chemical in the river, the risk to an individual depends upon the number of people in the area, the AI system’s robustness, the wind conditions, the time of day and so on, and can, as a result, be made arbitrarily close to zero.

On the normic account, things look altogether different; given the set-up, no special explanation would be needed if someone were harmed — no matter how many people are involved, this will represent one of the normal outcomes of the action.

Contrasting the two views of risk formally.

Probabilistic High Risk Approach

We have a right that others not subject us to a high probabilistic risk of harm. More precisely, we have a right that others not act in such a way that the probability of our being harmed, as a result of their action, is above a threshold t.

Normic High Risk Approach

We have a right that others not subject us to a high normic risk of harm. More precisely, we have a right that others not act in such a way that the abnormality of our being harmed, as a result of their action, is below a threshold t.
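Because abnormality has no agreed numerical scale, the contrast can only be sketched with a stand-in. In the toy Python model below I assume, purely for illustration, that abnormality is the number of special explanations harm would require beyond the action itself; the thresholds and function names are mine, not anything drawn from the Act or the literature.

```python
# Toy contrast of the two rights conditions. Assumption: "abnormality"
# is modelled as the number of special explanations harm would require
# beyond the action itself; both thresholds are illustrative.

PROB_THRESHOLD_T = 1e-4    # hypothetical probability threshold t
NORMIC_THRESHOLD_T = 1     # hypothetical abnormality threshold t

def infringes_probabilistic_hra(p_harm: float) -> bool:
    """Right infringed if the probability of harm exceeds threshold t."""
    return p_harm > PROB_THRESHOLD_T

def infringes_normic_hra(abnormality: int) -> bool:
    """Right infringed if the abnormality of harm is below threshold t,
    i.e. harm would need little or no special explanation."""
    return abnormality < NORMIC_THRESHOLD_T

# River option, assessed per downstream individual: the probability of
# harm is one in a million, but harm needs no special explanation --
# people live there and use the water.
print(infringes_probabilistic_hra(1e-6))  # False: below the probability threshold
print(infringes_normic_hra(0))            # True: harm would be a normal outcome
```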

Distributed Risk under the Normic Account of Risk

Let’s revisit the distributed-risk issue, which under the typical probabilistic account of risk arises when there is a high risk that some member of a group will be harmed, even though the risk to each individual member is low.

Consider a group of individuals C, D, E, F … and suppose there is a high risk that some member of the group will suffer harm. On the normic account, this means that there is a relatively normal possible world in which some member of the group suffers harm.

But any world in which some member of the group suffers harm must either be a world in which C suffers harm, or a world in which D suffers harm and so on, in which case either C must be at a high risk of harm or D must be at a high risk of harm and so on.

That is, if there is a high normic risk that some member of the group will be harmed, then there must be some member of the group who is at a high normic risk of harm. If the risk to each member is equal, then they will all face a high normic risk of harm.
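To make the structural difference vivid, it may help to compare how risk to a group aggregates under each account. The sketch below continues the toy abnormality scale from the previous snippet and assumes independent probabilistic risks; it is an illustration, not a measurement procedure.

```python
# How group risk aggregates under each account (toy model, continuing
# the assumption that normic risk is an abnormality score).

def group_probabilistic_risk(individual_ps: list[float]) -> float:
    """Probability that at least one member is harmed, assuming independence."""
    p_no_harm = 1.0
    for p in individual_ps:
        p_no_harm *= 1 - p
    return 1 - p_no_harm

def group_normic_risk(individual_abnormalities: list[int]) -> int:
    """The most normal world in which someone is harmed is a world in which
    some particular member is harmed, so the group's abnormality is simply
    the minimum individual abnormality."""
    return min(individual_abnormalities)

downstream = [1e-6] * 1_000_000   # a million people, one-in-a-million each
print(group_probabilistic_risk(downstream))   # ≈ 0.632: far above any individual risk
print(group_normic_risk([0] * 1_000_000))     # 0: no higher than any individual's normic risk
```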

Back to the chemical-dump example. The normic risk for either option is maximal, given additional information such as the quantity and potency of the chemical, the volume of the river, and the way the water is used. In addition, we might have access to qualities of the AI system such as its reliability, its transparency, and the risk controls in place. As a result, the normic account of risk predicts that, all else equal, it would be morally worse to dump the chemical in the river, as this would involve a million rights infringements, while dumping the chemical in the pond would infringe the rights of only one person.

However, what if we were to alter this case? How could we lower the normic risk to each person living downstream? Suppose we put in place a series of measures or controls to prevent people coming into contact with the water, such as a high fence. What this control does is demand a special explanation in the event that someone is exposed to the water: how did they get past the fence?

Therefore, it would be abnormal for someone to be harmed by the chemical in the river, and if this level of abnormality exceeds the threshold, then dumping the chemical in the river would result in no rights infringements.
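In the same toy model, a control such as the fence can be represented as one extra special explanation that harm would require; the control list and threshold are illustrative assumptions, not a proposal for how abnormality should actually be measured.

```python
# Mitigation in the toy normic model: each effective control adds one
# special explanation that harm would require ("how did they get past
# the fence?"). Controls and threshold are illustrative assumptions.

NORMIC_THRESHOLD_T = 1   # hypothetical abnormality threshold t

def abnormality_of_harm(controls: list[str]) -> int:
    """Toy measure: one special explanation per control that would have
    to fail or be bypassed for harm to occur."""
    return len(controls)

before = abnormality_of_harm([])                     # no controls in place
after = abnormality_of_harm(["high perimeter fence"])

print(before < NORMIC_THRESHOLD_T)  # True: harm is still a normal outcome -> right infringed
print(after < NORMIC_THRESHOLD_T)   # False: harm now demands explanation -> no infringement
```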

Conclusion

Unlike the Probabilistic HRA, the Normic HRA doesn’t generate a moral preference for cases in which a risk is distributed amongst the members of a group.

While the probabilistic High-Risk Approach (HRA) assesses risks based on the likelihood of harmful outcomes, the Normic account of risk focuses on how “normal” or “expected” it is for such outcomes to occur in the closest possible worlds.

In the context of AI, a high-risk use case under the normic account would be one where there is a close possible world in which the AI’s action leads to significant harm. This assessment isn’t about how often we expect the harm to occur but about the presence and proximity of possible worlds where the harm does occur.

Key takeaways

  1. Distributed Risk: Under the normic account, the moral evaluation doesn’t inherently change just because a risk is distributed among many individuals. If an AI system’s action leads to a situation where there’s a close possible world with significant harm to any individual or group, it’s considered high risk. The focus is on the closeness of the possible world where harm occurs, not on how the risk is distributed or its statistical probability.
  2. AI Decision-Making: Consider an AI system that makes loan approval decisions. Under the probabilistic view, if the system has a low probability of incorrectly denying a loan to any individual, it might be considered low risk. However, from a normic perspective, if there’s a close possible world where the AI’s algorithm systematically denies loans to a particular demographic, causing significant harm, it would be deemed high risk, even if each individual decision seems low risk (see the sketch after this list).
  3. Risk Mitigation: In the normic view, mitigating risk involves altering the proximity of possible worlds where harm occurs. For AI, this could mean implementing robust fairness checks, transparency measures, and oversight to ensure that the closest possible worlds in which harm occurs are worlds that require special explanation, i.e. the abnormality of harm exceeds the threshold t.
  4. Ethical and Regulatory Implications: Regulating high-risk AI applications through a normic lens would require authorities to consider not just the statistical probability of harm but the ethical significance of potential harm in closely adjacent possible worlds. This could lead to different priorities in AI governance, emphasizing preventative measures and ethical considerations even when probabilistic risks seem low.
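As a closing illustration of takeaway 2 (and, loosely, takeaway 3), here is a hedged sketch of how the two assessments could come apart for a loan-approval system. The error rate, the threshold, and the idea of encoding “a close possible world with systematic harm” as a simple boolean flag are all simplifying assumptions of mine, not a description of any real system.

```python
# Sketch of takeaway 2: a loan-approval system that looks low-risk per
# decision, yet has a close possible world with systematic harm to one
# demographic. Names, rates and thresholds are illustrative assumptions.

PER_DECISION_THRESHOLD_T = 0.01   # hypothetical probabilistic threshold t

def probabilistic_high_risk(per_decision_error_rate: float) -> bool:
    """Probabilistic view: high risk only if individual decisions are
    sufficiently likely to go wrong."""
    return per_decision_error_rate > PER_DECISION_THRESHOLD_T

def normic_high_risk(close_world_with_systematic_harm: bool) -> bool:
    """Normic view: high risk if a close (normal) possible world contains
    significant systematic harm -- e.g. the model latching onto a proxy
    for a protected attribute -- however improbable each decision error is."""
    return close_world_with_systematic_harm

# Each individual wrongful denial is rare...
print(probabilistic_high_risk(0.002))  # False: low risk on the probabilistic view
# ...but little about the world would need to change (a correlated proxy
# feature, a skewed training batch) for a whole demographic to be harmed.
print(normic_high_risk(True))          # True: high risk on the normic view
```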


AI Philosopher. My day job is advising clients on the safe, responsible adoption of emerging technologies such as AI.