Generative AI based Recommendation Engine on Azure Cloud

Balaram Panda
2 min read · Sep 27, 2023


A high-level architecture for a Generative AI based Recommendation Engine on Azure Cloud.

In 2023, every business has undergone digital transformation. However, being digital does not inherently guarantee success. The critical factors that can elevate your digital business to success include optimizing the digital customer experience, streamlining onboarding processes, and fostering meaningful engagement. The two most critical components that help achieve the above are 👇

  1. Search Engine
  2. Recommendation Engine.

In my previous search engine article I outlined a solution for creating an AI-based search engine that outperformed conventional keyword search at the time. In this article, I am providing a high-level solution architecture for a recommendation engine based on the Azure Cloud Stack and open-source technologies. This solution primarily utilizes Generative AI, specifically an LLM (Large Language Model), and a Vector Database. More on recommendation systems can be found here.

A Generative AI-based recommendation engine is a sophisticated system that leverages generative artificial intelligence techniques to provide personalized recommendations to users. Unlike traditional recommendation systems that rely on collaborative filtering, content-based filtering, or hybrid approaches, generative AI recommendation engines create recommendations by generating new content or items based on the user’s preferences and historical interactions.

Generative AI transforms user behavior data, item descriptions, and other features into a lower-dimensional space using LLM embeddings. These embeddings capture the semantic meaning of products purchased in the past, which lets the system recommend the next product to buy.
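The idea above can be sketched in a few lines. A minimal, illustrative example: the tiny 3-dimensional vectors below are toy stand-ins for real LLM embeddings (e.g. from an Azure OpenAI embedding deployment, which would be ~1536-dimensional), and the `recommend_next` function and item names are hypothetical, not part of any Azure API.

```python
import numpy as np

# Toy stand-ins for LLM embedding vectors; a real system would call an
# embedding model and store the results in a vector DB.
TOY_EMBEDDINGS = {
    "running shoes":  np.array([0.9, 0.1, 0.0]),
    "sports socks":   np.array([0.8, 0.2, 0.1]),
    "coffee maker":   np.array([0.1, 0.9, 0.2]),
    "trail backpack": np.array([0.7, 0.3, 0.2]),
}

def embed(text: str) -> np.ndarray:
    return TOY_EMBEDDINGS[text]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_next(purchase_history, catalog, top_k=1):
    # Represent the user as the mean of their purchase embeddings,
    # then rank unpurchased catalog items by cosine similarity.
    user_vec = np.mean([embed(p) for p in purchase_history], axis=0)
    candidates = [item for item in catalog if item not in purchase_history]
    ranked = sorted(candidates,
                    key=lambda item: cosine(user_vec, embed(item)),
                    reverse=True)
    return ranked[:top_k]

print(recommend_next(["running shoes"], list(TOY_EMBEDDINGS)))
# → ['sports socks']
```

Because the toy vector for "sports socks" points in nearly the same direction as "running shoes", it ranks first, which is exactly the semantic-similarity behavior the LLM embeddings provide at scale.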

Generative AI-based recommendation engines have the potential to provide highly personalized and engaging user experiences across a wide range of applications, including e-commerce, content streaming, news recommendation, and more. However, they also come with technical challenges and ethical considerations that need to be carefully addressed in their development and deployment.

More about Vector DBs: With the rise of LLMs, vector databases are gaining momentum again. A vector DB is efficient at storing and querying embeddings.
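To make the "store and query embeddings" part concrete, here is a deliberately tiny in-memory sketch of what a vector DB does under the hood. The `TinyVectorStore` class is illustrative only; a production system would use a managed service such as Azure AI Search or an open-source vector DB rather than this brute-force scan.

```python
import numpy as np

class TinyVectorStore:
    """Illustrative in-memory stand-in for a vector database."""

    def __init__(self, dim: int):
        self.dim = dim
        self.ids: list[str] = []
        self.vectors: list[np.ndarray] = []

    def upsert(self, item_id: str, vector) -> None:
        # Store an embedding under an item id.
        vec = np.asarray(vector, dtype=float)
        assert vec.shape == (self.dim,)
        self.ids.append(item_id)
        self.vectors.append(vec)

    def query(self, vector, top_k: int = 3):
        # Brute-force cosine-similarity search over all stored vectors;
        # real vector DBs use approximate nearest-neighbor indexes instead.
        q = np.asarray(vector, dtype=float)
        mat = np.stack(self.vectors)
        sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
        order = np.argsort(-sims)[:top_k]
        return [(self.ids[i], float(sims[i])) for i in order]

store = TinyVectorStore(dim=3)
store.upsert("item-a", [1.0, 0.0, 0.0])
store.upsert("item-b", [0.9, 0.1, 0.0])
store.upsert("item-c", [0.0, 1.0, 0.0])
print(store.query([1.0, 0.05, 0.0], top_k=2))  # item-a ranks first
```

The value a real vector DB adds over this sketch is indexing (HNSW, IVF, etc.) so that the nearest-neighbor query stays fast at millions of embeddings.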

What is an Embedding: Embedding is the process of converting text or images into numeric values that a computer program can understand. You cannot feed raw text into a machine learning algorithm. TF-IDF is a simple form of embedding used for text data.

Embedding by an LLM model: LLM embeddings are more accurate semantically, because they assign numbers to text based on contextual similarity.

References: AWS, Microsoft Azure, Google, and OpenAI docs

Show some support — follow, clap, or do whatever’s cool when you see this. Thanks! 🙌

Follow me on https://www.linkedin.com/in/balarampanda/
