Enhancing Customer Query Handling with Generative AI Chatbots for a Leading Financial Company

Usman Aslam
PREDICTif Ponders
May 30, 2024
Image by ImageFlow

PREDICTif Solutions, an AWS Advanced Consulting Partner, recently collaborated with a leading financial company to develop a GenAI solution for enhancing customer query handling through an AI-driven chatbot solution. By leveraging AWS services, PREDICTif provided a scalable and efficient solution that significantly reduced response times, increased customer satisfaction, and decreased operational costs through automation.

Problem Statement

The financial company faced the challenge of efficiently addressing customer queries and providing timely responses. The manual process of query handling resulted in prolonged response times, diminished customer satisfaction, and increased operational costs due to high manpower requirements. To remain competitive and improve customer service, the company needed an automated solution that could handle queries quickly and accurately.

Solution Implementation

PREDICTif Solutions addressed the financial company’s needs by developing a robust GenAI chatbot solution utilizing several AWS services:

  1. Amazon ECS with Fargate: Manages flexible container deployments for scalability and efficiency, ensuring the chatbot can handle varying demand levels.
  2. Amazon SageMaker JumpStart: Enables rapid model deployment and inference, reducing the time required to roll out updates and improvements.
  3. Amazon Bedrock: Converts queries and documents into embeddings, which are crucial to the chatbot’s understanding and response generation through Retrieval Augmented Generation (RAG).
  4. Amazon OpenSearch Serverless (vector database): Stores the embeddings, ensuring fast and reliable access for real-time query resolution.
  5. AWS Lambda: Indexes documents into embeddings and stores them in the vector database.
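As a rough sketch of the indexing path (item 5 above), a Lambda handler might chunk a document, embed each chunk through Bedrock, and write the vectors to OpenSearch. The model ID, index name, field names, and chunking strategy below are illustrative assumptions, not details from the engagement:

```python
import json

def chunk_text(text: str, max_chars: int = 1000) -> list[str]:
    """Split a document into roughly fixed-size chunks on word boundaries."""
    chunks, current = [], ""
    for word in text.split():
        if current and len(current) + len(word) + 1 > max_chars:
            chunks.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        chunks.append(current)
    return chunks

def embed_text(text: str) -> list[float]:
    """Embed one chunk with Amazon Bedrock (Titan Embeddings is an assumed model choice)."""
    import boto3  # deferred so the pure helper above works without AWS access
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def handler(event, context):
    """Lambda entry point: embed and index every chunk of the incoming document."""
    from opensearchpy import OpenSearch  # assumes a signed client is configured
    client = OpenSearch(hosts=[event["collection_endpoint"]])
    for i, chunk in enumerate(chunk_text(event["document"])):
        client.index(
            index="document-embeddings",  # illustrative index name
            id=f"{event['document_id']}-{i}",
            body={"text": chunk, "vector_field": embed_text(chunk)},
        )
```

Chunking before embedding keeps each vector tied to a passage small enough to be useful as retrieved context later.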

Application Architecture

To ease the integration of components such as Bedrock embeddings, SageMaker JumpStart model prompting, and similarity search against the OpenSearch Serverless vector database, PREDICTif Solutions employed the Streamlit and LangChain frameworks.

  1. Streamlit: This open-source framework was used to create an intuitive and interactive front-end for the chatbot. Streamlit facilitated the rapid development and deployment of the user interface, allowing for real-time updates and easy modifications based on user feedback. The simplicity and flexibility of Streamlit made it an ideal choice for developing the chatbot’s front-end.
  2. LangChain: LangChain was used to streamline the integration of the AI and data components. The framework enabled seamless communication between Bedrock for embeddings, SageMaker JumpStart for model inference, and OpenSearch Serverless for similarity searches. LangChain’s modular design allowed for efficient orchestration of these services, ensuring smooth data flow and processing within the chatbot application.
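The division of labor between the two frameworks can be sketched as follows. The `retrieve_context` and `generate_answer` callables are hypothetical placeholders for the LangChain-orchestrated Bedrock/OpenSearch/SageMaker calls; only the prompt assembly and the Streamlit page are concrete:

```python
def build_prompt(question: str, context_chunks: list[str]) -> str:
    """Assemble retrieved context and the user question into a single prompt."""
    context = "\n\n".join(context_chunks)
    return (
        "Answer the customer's question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def render_chat(retrieve_context, generate_answer):
    """Streamlit front-end for the chatbot.

    retrieve_context(question) -> list[str] and generate_answer(prompt) -> str
    stand in for the similarity-search and model-inference steps.
    """
    import streamlit as st  # deferred so build_prompt stays usable without Streamlit
    st.title("Customer Support Assistant")  # illustrative page title
    if question := st.chat_input("Ask a question"):
        with st.chat_message("user"):
            st.write(question)
        prompt = build_prompt(question, retrieve_context(question))
        with st.chat_message("assistant"):
            st.write(generate_answer(prompt))
```

Passing the retrieval and generation steps in as callables keeps the UI code independent of the AWS plumbing, which is what makes the rapid iteration described above practical.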

Leveraging Retrieval Augmented Generation (RAG)

A key aspect of the chatbot’s functionality was leveraging Retrieval Augmented Generation (RAG) to improve the quality and relevance of responses. This approach combined the power of retrieval-based methods with generative AI, enhancing the chatbot’s ability to provide accurate and contextually appropriate answers.

  1. Amazon Bedrock: The chatbot used Amazon Bedrock to convert user queries and relevant documents into embeddings. These embeddings captured the semantic meaning of the text, enabling more nuanced understanding and processing.
  2. Amazon OpenSearch Serverless: The embeddings generated by Bedrock were stored in an OpenSearch Serverless vector index. This allowed for efficient similarity searches, quickly retrieving the most relevant information based on the user’s query.

The integration of these services enabled the chatbot to perform advanced similarity searches, finding the best matches for user queries from a vast repository of documents. By leveraging RAG, the chatbot could generate responses that were not only relevant but also grounded in accurate and up-to-date information, significantly improving the user experience.
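At query time, the RAG loop described above amounts to three steps: embed the question, run a k-NN search over the stored vectors, and feed the retrieved passages to the generative model. A minimal sketch under the assumptions that vectors are stored under a `vector_field` key and that the JumpStart model sits behind a SageMaker endpoint (the endpoint name and collection URL are placeholders):

```python
import json

def build_knn_query(embedding: list[float], k: int = 3) -> dict:
    """OpenSearch k-NN request body returning the top-k most similar chunks."""
    return {
        "size": k,
        "query": {"knn": {"vector_field": {"vector": embedding, "k": k}}},
    }

def answer_question(question: str) -> str:
    """End-to-end RAG flow; runnable only with AWS access configured."""
    import boto3
    from opensearchpy import OpenSearch
    # 1. Embed the user query with Bedrock (assumed Titan embedding model).
    bedrock = boto3.client("bedrock-runtime")
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": question}),
    )
    embedding = json.loads(resp["body"].read())["embedding"]
    # 2. Retrieve the closest document chunks from OpenSearch Serverless.
    client = OpenSearch(hosts=["https://example-collection.aoss.amazonaws.com"])
    hits = client.search(index="document-embeddings", body=build_knn_query(embedding))
    context = "\n\n".join(h["_source"]["text"] for h in hits["hits"]["hits"])
    # 3. Generate a grounded answer with the SageMaker JumpStart model.
    runtime = boto3.client("sagemaker-runtime")
    out = runtime.invoke_endpoint(
        EndpointName="jumpstart-llm-endpoint",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps({"inputs": f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"}),
    )
    return json.loads(out["Body"].read())[0]["generated_text"]
```

Because the generative model only ever sees retrieved passages, its answers stay grounded in the document repository rather than in the model's training data alone.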

Outcomes of the Project

The implementation of the AI chatbot brought significant benefits to the financial company. The GenAI chatbot cut response times by 40%, markedly improving the efficiency of customer support. Faster and more accurate query responses drove a 25% increase in customer satisfaction scores, demonstrating the positive impact of the solution on user engagement and experience. Finally, automation of the query handling process reduced the need for manual intervention by 30%, resulting in substantial cost savings for the company.


Ex-Amazonian, Sr. Solutions Architect at AWS, 12x AWS Certified. ❤️ Tech, Cloud, Programming, Data Science, AI/ML, Software Development, and DevOps. Join me 🤝