Probability Schmobability: Why AI Needs the Modal Account of Risk

Ratiomachina
Published in Brass For Brain
Jan 11, 2024

Introduction

We all make decisions on a daily basis, and risk management plays a pivotal role in them, spanning domains from finance to healthcare and, more recently, artificial intelligence (AI). At its core, risk management involves identifying, assessing, and prioritizing risks, followed by the application of resources to minimize and control the impact of unwanted events. Traditionally, this process has been dominated by probabilistic approaches, which quantify risks in terms of likelihood and impact, providing a seemingly precise and mathematical method for tackling uncertainty (Kaplan & Garrick, 1981).

However, the advent of AI, with its complex algorithms and unpredictable outcomes, challenges the sufficiency of these conventional methods. AI systems, by their nature, introduce novel and often unforeseen risks, necessitating a reevaluation of traditional risk management frameworks (Bostrom, 2014). It is in this context that Duncan Pritchard’s Modal Account of Risk emerges as a compelling alternative. Pritchard’s approach, rooted in philosophical analysis, diverges from the probabilistic model by emphasizing the modalities of risk — the possible worlds where risks materialize — rather than their statistical likelihood (Pritchard, 2015).

This article aims to explore Duncan Pritchard’s Modal Account of Risk, contrasting it with the probabilistic way of thinking about risk, and examine how this philosophical perspective can lead to more effective risk management practices, particularly in the field of AI. The thesis posits that Pritchard’s approach offers a nuanced understanding of risk, especially in scenarios where probabilistic data is insufficient or misleading, thus potentially providing a more robust framework for managing the unique challenges posed by AI technologies.

Overview of Probabilistic Risk Assessment

Probabilistic Risk Assessment (PRA), also known as quantitative risk assessment, has been a cornerstone in the field of risk management for decades. This approach fundamentally relies on the quantification of risk by assessing the probability of a risk event occurring and the severity of its impact (Kaplan & Garrick, 1981). Typically, this is done through statistical methods and historical data analysis, which allow for the calculation of risk as a function of likelihood and consequence.

At its core, the probabilistic approach is grounded in the mathematical theory of probability. This method has been widely applied in various fields such as finance, engineering, and healthcare, where it assists in decision-making under conditions of uncertainty. In the financial sector, for instance, PRA is used to model market risks and credit risks, helping institutions to prepare for potential financial downturns (Jorion, 2007). In engineering, it aids in assessing the safety and reliability of structures or systems, like in nuclear power plant safety analysis (Cooke, 1991).

Despite its widespread usage, the probabilistic approach to risk assessment is not without limitations. One significant drawback is its reliance on historical data and statistical models, which may not accurately capture rare or unprecedented events. This limitation becomes particularly evident in dealing with complex systems, such as AI, where the interactions and outcomes can be unpredictable and not well-represented in existing data (Taleb, 2007). Additionally, this method often fails to account for the subjective nature of risk perception and the human factors involved in risk management decisions.

Moreover, the probabilistic approach can sometimes lead to a false sense of security. By providing precise numerical values, it might create an illusion of certainty and control in inherently uncertain situations. This can be misleading, especially in fields like AI, where the pace of innovation and the complexity of the systems involved can lead to situations that are not well-understood or previously encountered.

In summary, while the probabilistic approach to risk assessment offers a systematic and quantifiable method for managing risks, its limitations become increasingly apparent in the context of complex and evolving systems like AI. These limitations underscore the need for alternative perspectives and methods in risk assessment, such as Duncan Pritchard’s Modal Account of Risk, which will be explored in the following sections.

Duncan Pritchard’s Modal Account of Risk

Duncan Pritchard, a prominent philosopher in the field of epistemology, introduced the Modal Account of Risk as a novel framework for understanding and assessing risks. Distinct from the probabilistic approach, Pritchard’s theory is deeply rooted in philosophical analysis, particularly in modal epistemology, which deals with concepts of possibility and necessity (Pritchard, 2015).

Key Principles of the Modal Account of Risk

The Modal Account of Risk diverges from traditional probabilistic methods by focusing on the modalities (possibilities) of risk, rather than their statistical probabilities. The key principles of this approach include:

  1. Modal Realism: Pritchard’s approach is based on the idea of modal realism, which considers not just the actual world but a range of possible worlds where different outcomes might occur. This perspective allows for a broader consideration of risks, including those that are statistically improbable but potentially significant.
  2. Safety and Danger: The theory emphasizes the concepts of safety and danger in risk assessment. A situation is considered risky not just based on the likelihood of a negative outcome, but on how easily the scenario could shift from a safe to a dangerous state across possible worlds.
  3. Qualitative Analysis: Unlike the quantitative focus of probabilistic methods, the Modal Account advocates for a qualitative analysis of risk, taking into account the nature and context of the risk, including ethical and social dimensions.
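The three principles above can be made concrete with a small sketch. The model below is purely illustrative (the `World` class, the `closeness` ranking, and the threshold are my own assumptions, not Pritchard's formalism): it ranks possible worlds by similarity to the actual world and calls an outcome risky if the bad event occurs in a sufficiently close world, regardless of how improbable that world is.

```python
# Illustrative sketch of the modal account: worlds ordered by closeness
# to actuality; risk tracks occurrence in close worlds, not frequency.
from dataclasses import dataclass


@dataclass
class World:
    name: str
    closeness: int   # 0 = the actual world; higher = more remote
    bad_event: bool  # does the unwanted outcome occur in this world?


def modal_risk(worlds, threshold=2):
    """An outcome is modally risky if it occurs in any world at or
    closer than the chosen closeness threshold."""
    return any(w.bad_event and w.closeness <= threshold for w in worlds)


worlds = [
    World("actual", 0, bad_event=False),
    World("sensor glitch", 1, bad_event=True),     # close: easily could happen
    World("simultaneous freak failures", 9, bad_event=True),  # remote
]

print(modal_risk(worlds))  # True: the bad event occurs in a close world
```

Note the contrast with a probabilistic assessment: the "sensor glitch" world might have a vanishingly small measured frequency, yet because it is modally close, the account still flags the situation as risky.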

Comparison with Probabilistic Approach

When contrasted with the probabilistic approach to risk, several philosophical and practical differences emerge:

  • Focus on Possibility vs. Probability: While probabilistic risk assessment hinges on the likelihood of events based on historical data, the Modal Account considers the broader spectrum of what is possible, even if not easily quantifiable.
  • Qualitative vs. Quantitative: Pritchard’s approach is more qualitative, considering the nature and context of the risk, which can be crucial in complex systems like AI where risks are not always quantifiable.
  • Ethical Considerations: The Modal Account inherently involves ethical considerations, as it takes into account the broader impact of risks on society and individuals, beyond mere statistical outcomes.
  • Adaptability to Novel Risks: Pritchard’s approach may be more adaptable to novel and unforeseen risks, such as those presented by rapidly evolving AI technologies, where historical data may be lacking or irrelevant.

In summary, Duncan Pritchard’s Modal Account of Risk provides a distinct and philosophically rich framework for risk assessment. It offers a valuable alternative to the probabilistic method, particularly in areas like AI risk management, where the nature and impact of risks are complex and multifaceted.

Application to AI Risk Management

Risk management in the realm of Artificial Intelligence (AI) is of paramount importance due to the dual nature of AI technologies: while they hold immense potential for societal benefits, they also pose significant risks and ethical challenges. These risks range from immediate concerns like privacy and security to long-term implications such as job displacement and decision-making autonomy (Russell, 2019). Managing these risks effectively is crucial to harnessing the positive potential of AI while mitigating its negative impacts.

Applying the Modal Account of Risk to AI

Duncan Pritchard’s Modal Account of Risk offers a unique perspective in addressing the challenges of AI risk management. Its application to AI can be considered in the following aspects:

  1. Embracing Uncertainty and Unforeseen Consequences: Given the rapid advancement and complexity of AI, it is often difficult to predict all possible outcomes. The Modal Account, with its emphasis on considering a range of possible worlds, is particularly suited to address this uncertainty. It encourages a more holistic view of potential risks, including those that are currently unknown or underappreciated.
  2. Ethical Considerations: The Modal Account inherently incorporates ethical considerations into risk assessment. This is especially relevant for AI, where ethical dilemmas abound, such as biases in decision-making algorithms or the moral implications of autonomous systems. This approach ensures that ethical concerns are not an afterthought but a central component of risk assessment.
  3. Adaptability to Novel Risks: The qualitative nature of the Modal Account makes it adaptable to the novel risks posed by AI, which might not be adequately captured by traditional probabilistic methods. It allows for a more nuanced understanding of risks in the context of evolving technologies.

Different Strategies in AI Risk Management

The adoption of Pritchard’s approach in AI risk management could lead to several strategic shifts compared to traditional methods:

  • Broadened Risk Assessment Scope: Risk assessments would not only consider the likelihood of negative outcomes but also their potential severity and ethical impact, even if these outcomes are statistically rare or unprecedented.
  • Proactive rather than Reactive: The focus on a range of possible worlds encourages a more proactive approach to risk management, anticipating and preparing for a wider array of potential scenarios.
  • Enhanced Stakeholder Involvement: Given its qualitative nature, the Modal Account may promote greater involvement of diverse stakeholders, including ethicists, sociologists, and the general public, in the risk assessment process.
  • Integrated Ethical Framework: Ethical considerations would be integrated into the risk management framework, ensuring that AI development aligns with societal values and moral principles.

In conclusion, applying Duncan Pritchard’s Modal Account of Risk to AI presents an opportunity to manage risks in a more comprehensive and ethically grounded manner. By moving beyond traditional probabilistic methods, this approach can help navigate the complex and evolving landscape of AI risks.

Case Studies and Examples

The utility of Duncan Pritchard’s Modal Account of Risk in AI risk management can be illustrated through both real and hypothetical examples. These examples will demonstrate how this approach can offer deeper insights compared to traditional probabilistic risk assessments.

Case Study 1: Autonomous Vehicles

Modal Account Perspective: The development of autonomous vehicles (AVs) presents complex risks, not just in terms of accident probabilities, but also in ethical decision-making scenarios, such as the trolley problem. The Modal Account urges us to consider various possible worlds where AVs might have to make ethical decisions, like choosing between the safety of passengers versus pedestrians. This approach emphasizes the importance of programming ethical considerations into AV algorithms, beyond just minimizing the statistical likelihood of accidents.

Probabilistic Risk Assessment: A probabilistic approach would primarily focus on the frequency of accidents involving AVs compared to human drivers, potentially overlooking the more nuanced ethical dilemmas AVs might face.

Case Study 2: AI in Healthcare

Modal Account Perspective: AI applications in healthcare, such as diagnostic algorithms, could have far-reaching impacts on patient care. The Modal Account encourages a comprehensive examination of scenarios, including rare but possible events where AI might misdiagnose a rare disease. This approach could lead to the development of AI systems that are not only accurate but also possess fallback mechanisms for unusual or unforeseen medical cases.

Probabilistic Risk Assessment: A traditional risk assessment might focus on the overall accuracy rate of AI diagnostics, potentially underestimating the impact of rare but significant misdiagnoses.

Hypothetical Example: AI in Financial Decision-Making

Modal Account Perspective: Imagine an AI system designed for financial decision-making. The Modal Account would prompt consideration of scenarios beyond historical market data, such as unprecedented economic crises or ethical implications of AI-driven decisions on market stability. This approach could lead to the incorporation of safeguards against unforeseen economic scenarios and ethical guidelines for AI behavior in financial markets.

Probabilistic Risk Assessment: A probabilistic approach might rely heavily on past market trends to predict future outcomes, potentially failing to anticipate novel economic disruptions or ethical dilemmas in AI-driven decisions.

Comparison of Outcomes and Strategies

In each of these examples, the Modal Account of Risk leads to a broader and more ethically nuanced risk assessment compared to probabilistic methods. It prompts consideration of a wider array of potential outcomes, including those that are statistically improbable but could have significant impacts. This approach encourages proactive and comprehensive risk management strategies in AI development, prioritizing ethical considerations and preparedness for unforeseen scenarios.

Advantages and Limitations

Duncan Pritchard’s Modal Account of Risk presents a philosophical shift in risk management, particularly applicable to AI. This section critically assesses its advantages and limitations.

Advantages

  1. Broader Risk Perspective: The Modal Account encourages a comprehensive view of risk, extending beyond statistical probabilities to include a range of possible scenarios, especially beneficial in the rapidly evolving field of AI (Pritchard, 2015).
  2. Ethical and Societal Considerations: By inherently incorporating ethical and societal implications in risk assessment, this approach ensures that AI development aligns with broader human values and ethics, a crucial aspect often overlooked in traditional risk management (Russell, 2019).
  3. Adaptability to Novel Risks: The qualitative nature of the Modal Account allows for flexibility and adaptability in assessing risks associated with novel AI applications, where historical data may be insufficient (Bostrom, 2014).
  4. Proactive Risk Management: This approach fosters a proactive stance in risk management, preparing for a wide array of potential outcomes, including rare but impactful events, thus enhancing the resilience of AI systems.

Limitations

  1. Quantification Challenges: The Modal Account’s qualitative focus can pose challenges in quantifying risks, making it difficult to integrate into frameworks that rely on numerical risk assessments (Taleb, 2007).
  2. Subjectivity and Interpretation: The approach’s reliance on qualitative analysis introduces a degree of subjectivity, potentially leading to varied interpretations of risk among different stakeholders (Goodall, 2014).
  3. Practical Implementation: Applying the Modal Account in practical AI risk management scenarios can be complex, as it requires a deep understanding of both the technology and the philosophical underpinnings of this approach (Hansson, 2007).
  4. Resource Intensiveness: Comprehensive risk assessments considering a wide range of possible worlds may require substantial resources, time, and expertise, which could be a limiting factor, especially for smaller organizations or projects.

Conclusion

This essay has explored the application of Duncan Pritchard’s Modal Account of Risk in the context of AI risk management, contrasting it with traditional probabilistic approaches. The key points made throughout the essay can be summarized as follows:

  1. Probabilistic Risk Assessment Limitations: While widely used, probabilistic risk assessments often fall short in managing the complex, unpredictable, and ethically nuanced risks associated with AI technologies.
  2. Pritchard’s Modal Account of Risk: This approach offers a philosophical alternative, emphasizing a broader spectrum of possible scenarios, including those with low probability but high impact. It inherently incorporates ethical considerations, making it particularly suitable for AI risk management.
  3. Broader and More Ethical Risk Perspective: By focusing on a range of possible outcomes and integrating ethical considerations, the Modal Account provides a more comprehensive understanding of AI risks, beyond mere statistical probabilities.
  4. Challenges in Practical Implementation: Despite its theoretical appeal, there are challenges in applying the Modal Account, including issues with quantifying risks, potential subjectivity, and resource intensiveness.

Incorporating the Modal Account of Risk into AI risk management could lead to more robust, ethically informed, and comprehensive risk strategies. It encourages a proactive stance, preparing for a wide array of potential outcomes and ensuring that AI development aligns with societal values and ethical standards.

For future research, several areas appear particularly promising:

  • Methodological Development: Developing methodologies to effectively integrate the Modal Account with existing quantitative risk management frameworks in AI.
  • Case Studies and Empirical Testing: Conducting case studies or empirical research to test and refine the Modal Account’s application in real-world AI scenarios.
  • Ethical and Societal Implications: Exploring the broader ethical and societal implications of AI risks through the lens of the Modal Account, particularly in areas like autonomous decision-making, privacy, and human-machine interaction.
  • Interdisciplinary Collaboration: Fostering interdisciplinary collaboration between philosophers, AI developers, ethicists, and policymakers to enrich the discourse and application of this approach.

In conclusion, while the Modal Account of Risk presents its challenges, its potential benefits in enhancing AI risk management are significant. Future explorations in this field should aim to bridge the gap between theoretical insights and practical applications, ensuring that AI technologies advance in a manner that is both innovative and aligned with ethical principles.

References

  • Kaplan, S., & Garrick, B. J. (1981). On The Quantitative Definition of Risk. Risk Analysis, 1(1), 11–27.
  • Pritchard, D. (2015). Risk. Metaphilosophy, 46(3), 436–461.
  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  • Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
  • Jorion, P. (2007). Value at Risk: The New Benchmark for Managing Financial Risk. McGraw-Hill.
  • Cooke, R. (1991). Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press.
  • Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House.
  • Goodall, N. J. (2014). Ethical decision making during automated vehicle crashes. Transportation Research Record, 2424(1), 58–65.
  • Topol, E. J. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.
  • Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131.

Appendix A — Modal Risk Framework

1. Risk Identification

Modal Realism Approach: Identify potential risks by envisioning a wide range of possible scenarios, not just those with historical precedence. This step should involve brainstorming sessions with diverse stakeholders, including ethicists, AI experts, and end-users.

2. Qualitative Risk Analysis

Scenario-Based Analysis: Assess each identified risk by considering how it might manifest in various possible worlds. This involves qualitative methods, such as thought experiments and ethical analysis, rather than relying solely on statistical data.

3. Risk Prioritization

Safety and Danger Focus: Prioritize risks based on their potential to shift a scenario from a safe to a dangerous state across different possible worlds. This step is not about probability but about the severity of impact and ethical implications.

4. Risk Mitigation Strategies

Proactive Measures: Develop strategies to mitigate each risk, focusing on ethical safeguards, resilience to unforeseen events, and adaptability. Engage in creating guidelines and protocols that address the ethical dimensions of risks.

5. Continuous Monitoring and Review

Dynamic Risk Landscape: Regularly revisit and update the risk assessment, acknowledging that the AI landscape is continuously evolving. This includes monitoring for new types of risks and re-evaluating existing risk mitigation strategies.

6. Stakeholder Communication and Engagement

Inclusive Dialogue: Maintain open communication channels with all stakeholders, ensuring transparency in the risk management process and fostering trust.

7. Documentation and Learning

Knowledge Management: Document all decisions, methodologies, and outcomes of the risk management process. Use this documentation for learning and improving future risk management practices.
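The seven steps above could be captured in a qualitative risk register. The sketch below is one hypothetical shape for such a register (the `ModalRiskEntry` fields and the `priority` rule are my own illustrative assumptions, not a standard schema): note that prioritization keys on modal closeness and severity, never on a probability estimate.

```python
# Hypothetical register for the modal risk framework; field names are
# illustrative, not a standard schema.
from dataclasses import dataclass, field


@dataclass
class ModalRiskEntry:
    risk: str                    # step 1: identified risk
    scenarios: list              # step 2: possible worlds where it manifests
    modal_closeness: str         # step 3: "close" or "remote" (not a probability)
    severity: str                # step 3: impact if it materializes
    mitigations: list = field(default_factory=list)   # step 4
    review_notes: list = field(default_factory=list)  # steps 5-7


entry = ModalRiskEntry(
    risk="Diagnostic AI misses a rare disease",
    scenarios=["atypical presentation", "out-of-distribution imaging"],
    modal_closeness="close",
    severity="high",
    mitigations=["human-in-the-loop review", "fallback referral protocol"],
)


def priority(e):
    # Step 3: close, high-severity risks come first.
    return (e.modal_closeness == "close", e.severity == "high")


ranked = sorted([entry], key=priority, reverse=True)
print(ranked[0].risk)
```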

Appendix B — Modal Risk Tools

Modal logic, with its focus on possibility, necessity, and other modalities, could theoretically be utilized to structure and analyze arguments about risks and possibilities in a more systematic and formal way. Here’s how this could be approached:

1. Defining Modal Statements for Risk

In modal logic, statements are evaluated in terms of their necessity or possibility. When applied to risk analysis, modal statements could be framed to represent various risk scenarios. For example:

  • Possibility (◇): “It is possible that the AI system will fail to recognize an anomaly.”
  • Necessity (□): “It is necessary for the AI system to comply with privacy regulations.”

These modal statements help in articulating the different facets of risk — what is possible, what is necessary, and potentially, what is impossible.
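The two example statements can be evaluated mechanically in a minimal Kripke-style model. The sketch below is an assumption-laden illustration (the worlds, the facts in them, and the accessibility relation are all invented): each world carries the atomic facts true in it, accessibility says which worlds count as relevant alternatives, and ◇/□ quantify over accessible worlds.

```python
# Minimal Kripke-style evaluation of the ◇/□ statements in the text.
# Worlds, facts, and accessibility are invented for illustration.
worlds = {
    "w0": {"complies_privacy"},                    # actual world
    "w1": {"complies_privacy", "misses_anomaly"},  # failure scenario
    "w2": {"complies_privacy"},
}
access = {"w0": {"w0", "w1", "w2"}, "w1": {"w1"}, "w2": {"w2"}}


def possible(w, prop):
    """Diamond: prop holds in at least one world accessible from w."""
    return any(prop in worlds[v] for v in access[w])


def necessary(w, prop):
    """Box: prop holds in every world accessible from w."""
    return all(prop in worlds[v] for v in access[w])


print(possible("w0", "misses_anomaly"))    # True: ◇(fails to recognize an anomaly)
print(necessary("w0", "complies_privacy"))  # True: □(complies with privacy regulations)
```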

2. Constructing Modal Arguments

Modal logic allows for the construction of arguments using its specific rules of inference. For risk analysis, these arguments could be about the implications of certain risks or the conditions under which certain risks might arise. For example:

  • If it is possible that an AI system is biased (◇Bias), and if bias necessarily leads to unfair outcomes (□(Bias → Unfair)), then it is possible that the AI system leads to unfair outcomes (◇Unfair).

3. Analyzing Risk Scenarios

Using possible worlds semantics, one can analyze various ‘worlds’ where different conditions hold true. In risk management, this translates to considering different scenarios and their implications. For instance, in one possible world, a new regulation might be introduced, affecting the operation of an AI system. Modal logic can help in structuring how these scenarios play out.

4. Assessing Conditional Risks

Modal logic can be used to understand conditional risks — risks that are dependent on certain conditions. Using modal operators, one can represent statements like, “If condition X is true, then risk Y becomes possible.”

5. Integrating with Probabilistic Information

While modal logic is qualitative, it can theoretically be integrated with quantitative probabilistic information to provide a more comprehensive risk analysis. For example, combining the possibility of a risk event with its probabilistic assessment can give a more nuanced understanding of the risk.
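One hedged way to sketch this integration is to keep the modal question ("could this happen in a relevant scenario?") separate from the probabilistic one ("how often, where data exists?"). In the illustrative triage below (scenarios and numbers are invented), impossible scenarios are dropped, quantifiable ones carry their probability, and possible-but-unquantified ones are flagged for qualitative, modal treatment rather than silently assigned a number.

```python
# Illustrative triage combining modal possibility with probabilistic
# estimates; all scenarios and figures are invented.
def triage(risks):
    """risks: (scenario, modally_possible, probability_or_None) triples."""
    out = []
    for scenario, is_possible, p in risks:
        if not is_possible:
            continue  # impossible scenarios need no management
        label = "qualitative (possible, unquantified)" if p is None else f"p={p}"
        out.append((scenario, label))
    return out


risks = [
    ("model drift after deployment", True, 0.05),
    ("novel regulation bans the use case", True, None),
    ("AI spontaneously violates physical law", False, None),
]

for scenario, label in triage(risks):
    print(f"{scenario}: {label}")
```

The design point is that absence of a probability estimate does not remove a possible risk from the register; it only changes how the risk is analyzed.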


Scenario: Risk Analysis for Autonomous Vehicle Development

Objective: To assess the risk of an autonomous vehicle failing to recognize a stop sign under different conditions.

Step 1: Define Modal Statements

Possibility Statement (◇):

  • “It is possible (◇) that the AV fails to recognize a stop sign (F).”
  • Symbolically: ◇F

Necessity Statement (□):

  • “It is necessary (□) that the AV recognizes all traffic signs to ensure safety (S).”
  • Symbolically: □S

Step 2: Construct Modal Arguments

Argument about Environmental Conditions (E):

  • “If it is raining heavily (R), it becomes possible that the AV’s sensors are impaired (I).”
  • Symbolically: R → ◇I

Argument about Sensor Impairment and Failure to Recognize Sign:

  • “If the AV’s sensors are impaired (I), it becomes possible that the AV fails to recognize a stop sign (F).”
  • Symbolically: I → ◇F

Step 3: Analyze Risk Scenarios Using Possible Worlds

Possible World 1 — Heavy Rain Scenario:

  • World where R is true.
  • From R → ◇I, we derive ◇I (it is possible that sensors are impaired).
  • Strictly, from ◇I and the conditional I → ◇F holding across the relevant worlds (□(I → ◇F)), we obtain ◇◇F; in systems such as S4, where possibility iterates, this collapses to ◇F (it is possible that the AV fails to recognize a stop sign).

Possible World 2 — Normal Conditions Scenario:

  • World where R is false (no heavy rain).
  • The possibility of sensor impairment (◇I) is not directly derived.
  • The possibility of failing to recognize a stop sign (◇F) is less evident.

Step 4: Assessing Conditional Risks

  • “Under the condition of heavy rain (R), there is a risk (possibility) that the AV fails to recognize a stop sign (F).”
  • Symbolically: R → ◇F
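The four steps can be checked in a tiny two-world model. This is a deliberate simplification of the derivation above (the intermediate sensor-impairment worlds are collapsed into a single "rainy" world, and the model itself is invented): R, I, and F are treated as atomic facts, and accessibility determines what is possible from each world.

```python
# Two-world sketch of the AV scenario: R = heavy rain, I = sensors
# impaired, F = fails to recognize a stop sign. Model is illustrative.
worlds = {
    "rainy":  {"R", "I", "F"},  # World 1: rain, impairment, and failure co-occur
    "normal": set(),            # World 2: none of R, I, F hold
}
access = {"rainy": {"rainy"}, "normal": {"normal"}}


def possible(w, p):
    """Diamond: p holds in some world accessible from w."""
    return any(p in worlds[v] for v in access[w])


print(possible("rainy", "F"))   # True: under heavy rain, failure is possible (R → ◇F)
print(possible("normal", "F"))  # False: under normal conditions, ◇F is not derived
```

Matching Step 3 in the text, ◇F holds in the heavy-rain world but is not derivable in the normal-conditions world, where neither impairment nor failure is accessible.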

Ratiomachina is an AI philosopher whose day job is advising clients on the safe, responsible adoption of emerging technologies such as AI.