Leveraging Generative AI in Operational Risk Management

Anis Pathan
6 min read · Apr 19, 2024


Generative AI in Risk Management

In the labyrinthine world of modern risk management, navigating uncertainties is akin to traversing a maze without a map. The landscape is riddled with complexities, where unforeseen events and interconnected variables constantly challenge even the most seasoned professionals. Amidst this intricate web of risks, traditional approaches often stumble, constrained by the limitations of human cognition and the inability to foresee beyond the obvious.

The core challenge lies in the multifaceted nature of risk, which transcends linear thinking and demands a holistic understanding of interconnected systems. Conventional risk management strategies, reliant on historical data and linear projections, falter in the face of emerging threats and dynamic environments. The human mind, despite its ingenuity, grapples with the sheer volume and complexity of data, often failing to discern subtle patterns or anticipate nonlinear outcomes.

Furthermore, the traditional risk management paradigm is hampered by the inherent limitations of human expertise. Even the most adept professionals are bound by cognitive biases, predispositions, and the constraints of their knowledge domains. This narrow perspective inhibits the ability to think beyond the obvious, leaving blind spots that can prove catastrophic in an ever-evolving risk landscape.

Enter generative artificial intelligence (AI), a paradigm that promises to reshape risk management in profound ways. Rooted in machine learning and neural networks, generative AI transcends the confines of traditional algorithms, capable of synthesizing complex patterns, generating novel insights, and simulating diverse scenarios at remarkable speed and scale.

However, amidst the promise of generative AI lies a conundrum of its own limitations. While adept at processing vast volumes of data and uncovering intricate correlations, generative AI grapples with the nuances of human judgment and contextual understanding. Its algorithms, though powerful, lack the intuitive reasoning and contextual awareness inherent to human cognition, leading to potential blind spots and oversights in risk assessment.

In this article, we delve into the application of generative AI in operational risk management, exploring its transformative potential, its inherent limitations, and the imperative of striking a balance between human expertise and machine intelligence. Through a working example, we illuminate the interplay between technology and human judgment, envisioning a future where generative AI serves as a force multiplier for risk professionals, augmenting their capabilities and reshaping the landscape of risk management.

One of the most basic yet most critical responsibilities of an operational risk management function is to map the risks of any new process or product before it is launched or goes live. This activity is called risk mapping.

Risk mapping requires multidimensional thinking about what could go wrong in a process in terms of the human element, the process element, the system element, and the external environment. Conducting a thorough risk assessment requires experience and subject matter expertise. Let us see whether generative AI can produce a basic assessment of risk that an experienced risk manager can then fine-tune.
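To make that concrete, the sketch below lists the kind of questions a risk manager might pose across those four dimensions. The dimension names and questions are illustrative examples only, not a prescribed taxonomy; they are the sort of prompts we will feed to the question-answering application built in the rest of this article.

# Illustrative risk-mapping questions, grouped by risk dimension (examples only)
risk_mapping_questions = {
    "Human": [
        "Which steps in the process are performed manually?",
        "Who approves exceptions or overrides?",
    ],
    "Process": [
        "What are the handoffs between teams?",
        "What is the escalation path when a step fails?",
    ],
    "System": [
        "Which applications support the process?",
        "What happens if the system is unavailable?",
    ],
    "External environment": [
        "Which third parties or vendors are involved?",
        "Which regulatory requirements apply?",
    ],
}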

We will use simple Python code and open-source libraries to conduct this experiment.

Step 1: Importing libraries

# Import libraries
import streamlit as st              # Streamlit builds the simple web interface
from transformers import pipeline   # Hugging Face pipeline for question answering
import fitz                         # PyMuPDF, used to extract text from the PDF
import time                         # used to simulate a short processing delay
import io                           # wraps the uploaded file's bytes for PyMuPDF
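If these libraries are not already available in your environment, they can usually be added with pip. The package names below (streamlit, transformers, torch, and pymupdf) are the standard PyPI names; torch is included because the Transformers pipeline needs a backend. Confirm them against your own setup.

# Typical installation, run in a terminal rather than in Python:
# pip install streamlit transformers torch pymupdf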

Step 2: Load the QA pipeline

The code below loads a question answering (QA) pipeline using the Hugging Face Transformers library.

qa_pipeline = pipeline("question-answering", model="deepset/roberta-base-squad2")
1. pipeline() Function:

The pipeline() function is a convenient way to load pretrained models and apply them to various natural language processing (NLP) tasks, such as question answering, text summarization, and sentiment analysis.

2. “question-answering” argument:

This argument specifies the type of pipeline to load. In this case, we are loading a QA pipeline, which is designed to answer questions based on a given context (such as a passage of text).

3. “model” Argument:

This argument specifies the specific pre-trained model to use for the QA task. In this case, we are using the “deepset/roberta-base-squad2” model, which is based on the RoBERTa architecture and trained on the Stanford Question Answering Dataset (SQuAD) 2.0.

4. qa_pipeline Variable:

The pipeline() function returns a pipeline object that can be used to perform question answering tasks. By assigning this object to the qa_pipeline variable, we can later use it to answer questions based on a given context.
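Before wiring the pipeline into the app, it can be sanity-checked on a short, made-up context. The text and question below are illustrative only; what matters is the shape of the result, a dictionary containing the extracted answer, a confidence score, and the answer's character positions within the context.

# Quick sanity check of the QA pipeline on a short, made-up context
sample_context = (
    "Customer complaints are logged in the service desk tool and must be "
    "acknowledged by the duty officer within 24 hours."
)
result = qa_pipeline(question="Who acknowledges customer complaints?",
                     context=sample_context)

print(result["answer"])  # the extracted answer span, e.g. "the duty officer"
print(result["score"])   # the model's confidence, between 0 and 1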

Step 3: PDF File Upload and Text Processing. A simple, publicly available customer service department SOP is used as the sample document.

# Streamlit app: upload a PDF and answer questions about its content
def main():
    st.title("PDF Question Answering")

    # Upload PDF file
    pdf_file = st.file_uploader("Upload a PDF file", type=["pdf"])

    if pdf_file is not None:
        # Open the uploaded PDF with PyMuPDF
        pdf_document = fitz.open(stream=io.BytesIO(pdf_file.read()), filetype="pdf")
        total_pages = pdf_document.page_count

        # Concatenate text from all pages
        full_text = ""
        for page_num in range(total_pages):
            page = pdf_document.load_page(page_num)
            full_text += page.get_text() + "\n"

        # Process chunks of text to avoid answer length limit
        chunk_size = 1000  # Number of characters in each chunk
        chunks = [full_text[i:i + chunk_size] for i in range(0, len(full_text), chunk_size)]

        # Allow user to ask a question based on the whole PDF content
        question = st.text_input("Ask a question about the content of the PDF:")

        if question:
            # Add a delay to simulate processing
            time.sleep(1)

            answers = []
            for chunk in chunks:
                answer = qa_pipeline(question=question, context=chunk)
                answers.append(answer['answer'])
            full_answer = " ".join(answers)

            # Split the answer into sentences
            sentences = full_answer.split(". ")
            st.write(f"Question: {question}")

            # Display each sentence as a separate point
            for i, sentence in enumerate(sentences):
                st.write(f"Point {i+1}: {sentence}")


if __name__ == "__main__":
    main()

1. title() Function: Sets the title of the web app to “PDF Question Answering,” which is displayed at the top of the app.

2. file_uploader() Function: Creates a file uploader widget where the user can upload a PDF file. The uploaded file is stored in the pdf_file variable.

3. fitz.open() Function: Opens the uploaded PDF file using the PyMuPDF library’s fitz module and reads its contents. It stores the PDF document in the pdf_document variable.

4. chunk_size Variable: Specifies the number of characters in each chunk of text. This is used to process the text in smaller chunks to avoid hitting the answer length limit of the question-answering model.

5. chunks Variable: Divides the full text of the PDF into smaller chunks of chunk_size characters each. This is done to process the text in manageable chunks.

6. text_input() Function: Displays a text input box where the user can enter a question about the content of the PDF file. The user’s question is stored in the question variable.

7. time.sleep() Function: Simulates a 1-second delay to mimic processing time. This is done to make the user experience more realistic.

8. qa_pipeline() Function: Calls a function (qa_pipeline) to get the answer to the user’s question for each chunk of text. The answer is stored in the answer variable.

9. join() Function: Joins all the answers from different chunks into a single string. This is done to combine the answers from all chunks into a single response.

10. split() Function: Splits the full answer into individual sentences. This is done to display each sentence as a separate point in the response.

11. st.write() Function: Writes each sentence of the answer as a separate point in the response. The point number (i+1) is displayed before each sentence to number the points sequentially.
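To try the app end to end, combine the code from Steps 1 through 3 into a single file and launch it with Streamlit from a terminal. The filename app.py below is just an illustrative choice.

# Combine Steps 1-3 into one file, e.g. app.py, then run from a terminal:
# streamlit run app.py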

Output:

Clearly, the application is not as accurate as we expected, but it is a good start. Since we have not used a paid model, accuracy could be greatly enhanced by switching to a paid, commercially hosted model such as ChatGPT, but that exposes the firm to risks related to data privacy and confidentiality.
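One way to squeeze more out of the free model, not shown in the code above, is to use the confidence score that the QA pipeline returns for each chunk and surface only the highest-scoring answers instead of concatenating everything. A minimal sketch of that change, assuming it replaces the answer-collection loop inside main():

# Sketch: keep each chunk's confidence score and show only the top answers
scored_answers = []
for chunk in chunks:
    result = qa_pipeline(question=question, context=chunk)
    scored_answers.append((result["score"], result["answer"]))

# Display the three most confident answers instead of joining all of them
for score, answer in sorted(scored_answers, reverse=True)[:3]:
    st.write(f"{answer} (confidence: {score:.2f})")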

Using a pretrained open-source model within the company's own environment provides a safer and more controlled way to apply generative AI to risk management.

Conclusion:

While the application of generative AI in operational risk management presents a promising avenue for innovation, it’s essential to tread carefully amidst the complexities and nuances of this transformative technology. The experiment showcased a glimpse of its potential, albeit with room for improvement, especially in accuracy and reliability.

Moving forward, organizations must strike a delicate balance between leveraging generative AI’s capabilities and mitigating associated risks, such as data privacy and confidentiality concerns. Investing in premium trained models like ChatGPT or similar solutions may enhance accuracy, but it necessitates a vigilant approach to safeguard sensitive information.

Ultimately, integrating generative AI into operational risk management workflows offers a compelling opportunity to augment human expertise, streamline processes, and uncover insights hidden within vast datasets.

By embracing innovation responsibly and continuously refining methodologies, organizations can navigate the labyrinth of operational risks with confidence, resilience, and foresight.
