The Evolving Landscape of AI Governance in Financial Services

Ratiomachina
9 min read · Just now

--

Moving towards scalable oversight

Introduction

Financial services is a heavily regulated industry with a long history of robust governance frameworks, processes, and practices. With the increased adoption of artificial intelligence (AI), a natural starting point has been Model Risk Management (MRM), a close cousin of AI governance. While model risk is not the only affected risk class, MRM processes and practices initially seem like a reasonable foundation for building AI Risk Governance Frameworks.

I am increasingly noticing that AI governance and risk management, when done right, can serve as enablers of AI adoption. Governance should be seen less as a ‘brake’ on innovation and more as a way to accelerate adoption without incurring unwanted risks that could lead to unintended harms. To achieve this, the key question is not whether to govern AI, but how to manage and govern it to ensure responsible adoption and use.

Scalable oversight

As AI systems continue to grow in complexity and scale, the need for scalable oversight becomes increasingly critical. By adopting strategies such as automation, modular governance frameworks, and real-time monitoring, organizations can ensure that their AI governance practices are not only effective but also adaptable to future advancements. This proactive approach to governance will enable financial services and other industries to harness the full potential of AI while mitigating risks and ensuring compliance.

Implementing scalable oversight in AI governance offers several significant benefits:

  1. Enhanced Compliance: Automated and continuous monitoring systems ensure that AI models adhere to regulatory standards at all times, reducing the risk of non-compliance.
  2. Improved Performance: Real-time data integration and dynamic analysis help in early detection of issues, ensuring AI models perform optimally and adapt quickly to changing conditions.
  3. Resource Efficiency: By leveraging automation and advanced computational infrastructure, organizations can achieve effective oversight without the need for proportional increases in human resources.
  4. Faster Innovation: Scalable oversight enables quicker validation and deployment of AI applications, allowing organizations to capitalize on innovative solutions that can provide competitive advantages and societal benefits.
  5. Increased Trust: Robust and scalable oversight mechanisms earn trust among stakeholders, including regulators, customers, and internal teams, by demonstrating a commitment to responsible AI use.

What is so special about AI?

Traditional financial models are transparent, with clear dependencies between parameters and outputs, making governance reviews effective. AI models are fundamentally different: they often produce outputs without clear dependencies, because the relationships they learn are encoded in opaque internal representations. This lack of transparency creates significant challenges for current governance practices, which struggle with effectiveness, agility, cost, and complexity.

Given the increased and pervasive adoption of AI in every business process and product, following the conventional oversight framework and operating model will not suffice. The methods used for traditional financial models simply do not scale to meet the demands of AI governance. AI systems are integrated into various aspects of business operations, from customer service and fraud detection to risk assessment and investment strategies. This integration means that AI’s influence is far-reaching, affecting critical decision-making processes and operational efficiency.

The rapid growth in AI model complexity exacerbates these challenges, making fine-grained, manual reviews unsustainable. As AI systems become more sophisticated, the volume of data they process and the intricacies of their decision-making pathways expand exponentially. Manual governance processes are not equipped to handle this level of complexity, leading to potential oversights and increased risk.

Effective AI model governance now requires a shift from manual, sequential processes to a system-level approach, emphasizing continuous monitoring and mitigation in support of self-regulation. This strategic change involves several key components, illustrated with a short code sketch after the list:

  1. Continuous Monitoring: Real-time oversight of AI models is crucial for detecting anomalies, biases, and performance issues as they occur. Automated monitoring systems can analyze vast amounts of data and provide immediate feedback, ensuring that AI models remain compliant and effective.
  2. Modular Frameworks: Implementing modular governance structures allows for more flexibility and scalability. Each module can address specific aspects of governance, such as data quality, regulatory compliance, and model performance. This approach enables easier updates and adjustments as AI systems evolve.
  3. Integration of AI and Governance Systems: Integrating governance mechanisms directly into AI systems ensures that compliance and oversight are embedded within the model’s operational framework. This integration facilitates proactive governance, where issues are addressed before they escalate into significant problems.
  4. Automation of Routine Tasks: By automating routine governance tasks, organizations can free up human resources to focus on more complex and high-value activities. Automation reduces the risk of human error and increases the efficiency and consistency of governance processes.
  5. Human-in-the-Loop Systems: While automation is essential, human judgment remains critical for complex decision-making scenarios. Human-in-the-loop systems combine the efficiency of automated monitoring with the nuanced understanding of human oversight, ensuring balanced and effective governance.
  6. Scalable Infrastructure: Leveraging scalable cloud-based infrastructure supports the extensive data processing and computational requirements of modern AI governance. This infrastructure enables organizations to manage AI systems of varying sizes and complexities without being constrained by physical hardware limitations.
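
To make these components concrete, here is a minimal Python sketch of a modular governance controller with continuous monitoring and human-in-the-loop escalation. It is illustrative only: every name (GovernanceModule, DriftMonitor, GovernanceController) is hypothetical, not a reference implementation or an existing library.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Finding:
    module: str
    severity: str   # "info", "warning", or "critical"
    message: str

class GovernanceModule(ABC):
    """One pluggable governance concern (data quality, drift, fairness, ...)."""
    name: str = "base"

    @abstractmethod
    def check(self, batch: dict) -> Iterable[Finding]:
        ...

class DriftMonitor(GovernanceModule):
    """Flags batches whose mean score drifts too far from a validated baseline."""
    name = "drift"

    def __init__(self, baseline_mean: float, tolerance: float):
        self.baseline_mean = baseline_mean
        self.tolerance = tolerance

    def check(self, batch: dict) -> Iterable[Finding]:
        scores = batch["scores"]
        mean = sum(scores) / len(scores)
        if abs(mean - self.baseline_mean) > self.tolerance:
            yield Finding(self.name, "critical",
                          f"score mean {mean:.3f} drifted from baseline {self.baseline_mean:.3f}")

class GovernanceController:
    """Runs every registered module on each scoring batch; routine findings are
    logged automatically, critical ones are escalated to a human reviewer."""

    def __init__(self, modules: list[GovernanceModule], escalate):
        self.modules = modules
        self.escalate = escalate  # human-in-the-loop callback

    def review(self, batch: dict) -> None:
        for module in self.modules:
            for finding in module.check(batch):
                if finding.severity == "critical":
                    self.escalate(finding)  # human judgment required
                else:
                    print(f"[{finding.module}] {finding.message}")  # automated log

# Usage: wire a drift monitor into the controller and feed it a scored batch.
controller = GovernanceController(
    modules=[DriftMonitor(baseline_mean=0.20, tolerance=0.05)],
    escalate=lambda f: print(f"ALERT -> model risk team: {f.message}"),
)
controller.review({"scores": [0.31, 0.35, 0.28, 0.33]})
```

The design choice that makes this pattern scale is that only critical findings reach a human: routine observations are logged automatically, and human judgment is reserved for the cases that genuinely need it.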

Research on scalable oversight of AI provides valuable insights and strategies that can be adapted to enhance existing governance frameworks. By embracing these innovative approaches, organizations can develop robust, scalable governance systems that ensure responsible AI use while fostering innovation and growth.

To ensure effective oversight, organizations must adopt scalable oversight strategies that integrate continuous monitoring, modular frameworks, and automation. This proactive approach will enable businesses to harness the full potential of AI while maintaining compliance and mitigating risks.

Current Challenges

Considering current and emerging AI governance practices, there are myriad challenges to address. Unless existing frameworks are urgently enhanced or completely redesigned, they introduce risks well beyond the individual, inherent risks of models, algorithms, and AI systems more generally.

Current AI Model Governance Frameworks, rooted in traditional financial (risk) models, actually introduce significant risks, including inadequate oversight, delayed risk identification, and inconsistent reviews. These frameworks lead to high compliance costs, ineffective manual processes, and failure to detect complex AI model errors. Lengthy, subjective, and complex review processes reduce model deployment agility and increase the likelihood of non-compliance and regulatory penalties.

Furthermore, the static-environment assumptions that typically dominate ‘traditional’ financial risk models, together with intermittent (usually delayed) monitoring, limited metrics, and manual reporting, exacerbate performance issues and delay responses to emerging risks. The absence of standard stress tests and the difficulty of ensuring compliance under evolving regulations further elevate the risk of model failure and governance breakdowns. Maintaining the status quo not only fails to mitigate these risks but compounds them, undermining the responsible adoption and use of AI.

Some of the critical challenges and the associated risks introduced by inadequate AI Governance Frameworks are catalogued in the Appendix. One of them deserves closer attention here.

A significant risk arising from inadequate AI Governance Frameworks is opportunity cost. When review processes are excessively long and cumbersome, the delay in bringing an AI product to market can have substantial negative consequences.

For instance, consider a scenario where an AI system designed for fraud detection is developed. This system has the potential to significantly reduce fraudulent activities, protecting both the bank and its customers. However, if the governance and review process is overly lengthy, the deployment of this system is delayed. During this delay, the financial institution continues to use less effective fraud detection methods, resulting in ongoing losses due to undetected fraud. Additionally, customers remain vulnerable to fraud, undermining their trust in the financial institution’s ability to protect their interests.

This delay represents a lost opportunity not just for the organization, but for society as a whole. The benefits of improved fraud detection, such as reduced financial losses and increased customer trust, are postponed. In some cases, this delay can also allow competitors to bring similar innovations to market first, capturing market share and setting new industry standards.
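
As a back-of-the-envelope illustration of this opportunity cost (all figures below are hypothetical), the lost value of a delayed deployment can be sized directly:

```python
# Hypothetical figures, for illustration only.
annual_fraud_losses = 50_000_000   # current yearly losses to undetected fraud ($)
extra_detection_rate = 0.30        # additional fraud the new AI system would catch
review_delay_months = 9            # time spent in a protracted governance review

# Fraud losses the delayed system would have prevented during the review.
opportunity_cost = annual_fraud_losses * extra_detection_rate * review_delay_months / 12
print(f"Foregone fraud prevention: ${opportunity_cost:,.0f}")  # -> $11,250,000
```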

Thus, the opportunity cost associated with inadequate AI governance is a critical risk. It highlights the need for streamlined, efficient governance processes that ensure the responsible and timely adoption of AI technologies. Accelerating the review and deployment process without compromising on rigor can help mitigate this risk, enabling banks to leverage AI innovations promptly for maximum positive impact.

Inadequate AI governance not only jeopardizes compliance and effectiveness but also delays innovation, leading to significant opportunity costs. The true risk lies in the lost potential to harness AI’s benefits, leaving both the organization and society at a disadvantage.

New governance capabilities are required

The extensive list of AI governance challenges necessitates innovative solutions. In this section, I outline a high-level AI system framework and modular building blocks aimed at enhancing self-regulation and streamlining AI governance in financial services. The proposed framework operates at the intersection of AI and governance systems. It is designed as a flexible, high-level approach that can be tailored to specific AI models, applications, and the unique needs of individual firms.

Summary of capabilities required

Effective AI governance in financial services requires a comprehensive framework incorporating several key capabilities. Continuous regulatory monitoring and reporting during deployment ensure that AI systems comply with guidelines in real-time, mitigating risks of non-compliance and performance degradation.

Integrating self-regulatory building blocks within AI systems addresses common regulatory requirements without increasing complexity, reducing the risk of outdated criteria.

Reusable libraries of module templates and regulatory guidelines standardize the governance process, minimizing inconsistencies. Automated and semi-automated run-time mitigation with human-in-the-loop alerting helps manage model behavior, addressing biases and performance issues.
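
One way to picture such reusable building blocks, as a sketch under assumed names (the registry, decorator, and check classes below are all hypothetical):

```python
# A registry of governance module templates: each concern is defined once and
# instantiated per model from declarative parameters, so every review starts
# from a shared, versioned baseline rather than a bespoke checklist.
GOVERNANCE_TEMPLATES = {}

def template(name):
    def register(cls):
        GOVERNANCE_TEMPLATES[name] = cls
        return cls
    return register

@template("fairness")
class FairnessCheck:
    def __init__(self, protected_attribute, max_disparity):
        self.protected_attribute = protected_attribute
        self.max_disparity = max_disparity

@template("performance")
class PerformanceCheck:
    def __init__(self, metric, min_value):
        self.metric = metric
        self.min_value = min_value

def build_controls(guideline):
    """Instantiate the modules a regulatory guideline calls for."""
    return [GOVERNANCE_TEMPLATES[name](**params)
            for name, params in guideline.items()]

# A credit-scoring model's guideline, expressed as data rather than code.
credit_guideline = {
    "fairness": {"protected_attribute": "age_band", "max_disparity": 0.10},
    "performance": {"metric": "auc", "min_value": 0.75},
}
controls = build_controls(credit_guideline)
```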

A governance controller supports AI-based governance through increased automation, enhancing compliance and robustness.

Dynamic reconfiguration capabilities allow for updating regulatory modules without retraining models, ensuring ongoing compliance. A comprehensive regulatory framework throughout the model lifecycle ensures consistent governance from development to deployment.
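
A minimal sketch of what dynamic reconfiguration can look like, assuming a governance layer that wraps a frozen, already-trained model (names hypothetical):

```python
class RegulatoryModule:
    """Wraps an already-trained model with externally supplied regulatory
    parameters, so compliance rules can change without retraining the model."""

    def __init__(self, config):
        self.config = config

    def reconfigure(self, new_config):
        # Only the governance layer changes; model weights are untouched.
        self.config = {**self.config, **new_config}

    def approve(self, model_score):
        return model_score >= self.config["approval_threshold"]

gate = RegulatoryModule({"approval_threshold": 0.70})
print(gate.approve(0.75))                        # True under the current rule
gate.reconfigure({"approval_threshold": 0.80})   # regulator tightens the rule
print(gate.approve(0.75))                        # False, with no retraining
```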

Finally, custom robustness tests using continuous monitoring data help identify weaknesses, ensuring AI models are robust and compliant under various conditions. These capabilities collectively address critical risks, facilitating the responsible adoption and use of AI in financial services.
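
As one example of such a test, logs captured by continuous monitoring can be replayed with small perturbations to measure decision stability. This is a hedged sketch: the function, thresholds, and toy model below are all hypothetical.

```python
import random

def stress_test(model_fn, logged_inputs, noise=0.05, trials=100, max_flip_rate=0.02):
    """Replay inputs captured by continuous monitoring with small perturbations
    and fail if too many of the model's decisions flip."""
    flips = 0
    for _ in range(trials):
        x = random.choice(logged_inputs)
        perturbed = [v * (1 + random.uniform(-noise, noise)) for v in x]
        if model_fn(x) != model_fn(perturbed):
            flips += 1
    flip_rate = flips / trials
    assert flip_rate <= max_flip_rate, f"unstable: {flip_rate:.1%} of decisions flipped"
    return flip_rate

# A toy scoring rule stands in for a deployed model.
model = lambda x: sum(x) > 1.0
logged = [[0.4, 0.4], [0.9, 0.3], [0.2, 0.1]]
print(f"flip rate under perturbation: {stress_test(model, logged):.1%}")
```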

Conclusion

Implementing a more streamlined, automated and federated AI Governance Framework requires a step change in processes, ways of working and clear lines of accountability. Possible process improvement opportunities include:

  1. Reducing the number and complexity of reviews.
  2. Simplifying the roles and responsibilities of the numerous committees.
  3. Providing insights to committees via direct access to monitoring and compliance metrics, as opposed to generating endless reports and papers.

A more federated operating model will rely heavily on libraries and guidelines, templates and procedures. Ideally, these should be embedded into MLOps or equivalent processes. Naturally, this requires significant investment, but the long-term pay-off is substantial. Gradually working towards this goal should be high on the board's agenda if a financial institution is to realize the benefits of safe AI adoption, internally and also for its customers and society as a whole.

As AI systems become increasingly integral to the financial services industry, it is crucial to implement effective model governance practices to ensure robustness and compliance. Current legacy governance processes face significant challenges in terms of performance, cost, complexity, agility, and scalability when applied to AI models. This article highlights common challenges in AI model governance within the financial sector and questions the feasibility of existing practices given the rapid growth in AI complexity.

To address these issues, the article proposes a system-level framework with modular building blocks designed to enhance automation, integration, and configurability, moving towards self-regulation. This approach aims to provide key capabilities that mitigate existing challenges and enable more effective and compliant AI solutions.

Appendix

[Figure: Current AI Governance Challenges and Risks I]
[Figure: Current AI Governance Challenges and Risks II]
[Figure: Current AI Governance Challenges and Risks III]


--

Ratiomachina

AI Philosopher. My day job is advising clients on the safe, responsible adoption of emerging technologies such as AI.