The AI Forum

It's an AI forum where topics spanning Data Analytics, Data Science, Machine Learning, and Deep Learning are discussed.

RAG Context Relevancy Checker Agent using DeepSeek-R1 70B on Groq, ModernBERT & LangChain

RAG Context Relevancy Checker Agentic Workflow

Introduction

The RAG architecture combines generative capabilities of Large Language Models (LLMs) with the precision of information retrieval. This approach has the potential to redefine how we interact with and augment both structured and unstructured knowledge in generative models to enhance transparency, accuracy and contextuality of responses.

Steps Involved in RAG:

  1. ๐Ÿ“ˆ Data Collection: The process starts with gathering relevant, domain-specific textual data from external sources like PDFs, structured documents, or text files. These documents serve as raw data for creating a tailored knowledge base that the system will query during the retrieval process.
  2. ๐Ÿงน Data Preprocessing: The collected data is then cleaned and preprocessed to create manageable and meaningful chunks. This step involves removing noise and formatting, normalizing the text, and segmenting it into smaller units, such as tokens (e.g., words or groups of words), that can be easily indexed and retrieved later.
  3. ๐Ÿ“Š Creating Vector Embeddings: After preprocessing, the chunks of data are transformed into vector representations using embedding models, such as BERT or Sentence Transformers. These vector embeddings capture the semantic meaning of the text, allowing the system to perform similarity searches. The vector representations are stored in a Vector Store, an indexed database optimized for fast retrieval based on similarity measures.
  4. ๐Ÿ”Ž Retrieval of Relevant Content: When a query is input into the system, it is transformed into a vector embedding, similar to the documents in the vector store. The Retriever component then searches within the vector store to identify and retrieve the most relevant chunks of information related to the query.
  5. ๐Ÿ”„ Augmentation of Context: The system merges two knowledge streams โ€” the fixed, general knowledge embedded in the LLM and the flexible, domain-specific information augmented on-demand as an additional layer of context. This aligns the Large Language Model (LLM) with both established and emerging information.
  6. ๐Ÿ’ฌ Generation of Response by LLM: The context-infused prompt, consisting of the original user query combined with the retrieved relevant content, is provided to a Large Language Model (LLM) like GPT, T5, or Llama. The LLM then processes this augmented input to generate a coherent and factually grounded response.
  7. ๐Ÿ“ค Final Output: The final output of RAG systems offers several advantages, such as minimizing the risk of generating hallucinations or outdated information, enhancing interpretability by clearly linking outputs to real-world sources, and providing enriched and accurate responses.
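As a rough sketch of how steps 4-6 fit together in code (hypothetical `retriever` and `llm` objects; the full implementation in this article differs by adding agentic relevancy checks on top of this loop):

# Rough sketch of the retrieve -> augment -> generate loop (steps 4-6 above).
# `retriever` and `llm` are assumed to be pre-configured LangChain objects.
def answer_with_rag(query: str, retriever, llm) -> str:
    docs = retriever.invoke(query)                        # Step 4: similarity search
    context = "\n\n".join(d.page_content for d in docs)   # Step 5: build the augmented context
    prompt = (
        "Answer the QUERY using only the CONTEXT.\n\n"
        f"CONTEXT:\n{context}\n\nQUERY: {query}"
    )
    return llm.invoke(prompt).content                     # Step 6: grounded generation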

The real challenge lies in sifting through the context procured by the retrieval process (via the cosine-similarity approach) to find the chunks that are genuinely meaningful and relevant to the associated query.

Although LangChain already offers packaged solutions to eliminate noise from the retrieved context, here I have attempted a similar approach by implementing agents using LangChain and Groq.

Technology Stack Used for Implementation

  • LangChain: application framework
  • Deepseek-R1 70B: LLM
  • Groq: fast LLM inference
  • ModernBERT: embedding model

๐Ÿงฉ What is LangChain?

LangChain is a developer framework (๐Ÿ› ๏ธ) for building apps powered by language models (LLMs) like GPT ๐Ÿค–. It โ€œchainsโ€ (โ›“๏ธ) together modules to create smart, context-aware applications ๐ŸŒ.

๐Ÿ”‘ Core Components

  1. LLMs ๐Ÿค–: Connects to models (OpenAI, Anthropic, etc.) for text generation.
  2. Prompts ๐Ÿ“‹: Templates to guide model outputs (e.g., โ€œTranslate this to French: {text}โ€).
  3. Chains โ›“๏ธ: Combine multiple steps (e.g., fetch data โ†’ analyze โ†’ generate report).
  4. Agents ๐Ÿ•ต๏ธโ™‚๏ธ: AI that uses tools (web search ๐ŸŒ, calculators ๐Ÿงฎ, APIs ๐Ÿ”Œ) to solve tasks.
  5. Memory ๐Ÿ’พ: Stores chat history or context for conversations (๐Ÿ’ฌโ†’๐Ÿ’ฌโ†’๐Ÿ’ฌ).
  6. Data Loaders ๐Ÿ“‚: Ingest documents, websites, or databases (๐Ÿ—„๏ธโ†’๐Ÿค–).
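As a rough illustration of how these components snap together (not part of the pipeline built later in this article; it assumes a GROQ_API_KEY is already set):

# Minimal LangChain sketch: a Prompt chained to an LLM via LCEL pipe syntax.
from langchain.prompts import PromptTemplate
from langchain_groq import ChatGroq

llm = ChatGroq(model="deepseek-r1-distill-llama-70b")
prompt = PromptTemplate(input_variables=["text"], template="Translate this to French: {text}")
chain = prompt | llm
print(chain.invoke({"text": "Good morning"}).content)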

๐Ÿš€ How It Works

  1. Input ๐Ÿ“ฅ: User query (โ€œSummarize this PDFโ€).
  2. Retrieve ๐Ÿ”: Fetch data from files/APIs (๐Ÿ“„โ†’โ˜๏ธ).
  3. Process โš™๏ธ: Model analyzes data (๐Ÿค–โœจ).
  4. Generate ๐Ÿ“: Output answer, code, or action (๐ŸŽฏ).

๐ŸŒŸ Use Cases

  • Chatbots ๐Ÿ’ฌ with long-term memory (๐Ÿ˜๐Ÿ’พ).
  • Document QA ๐Ÿ“š: Ask questions about PDFs/websites.
  • Code Assistants ๐Ÿ‘ฉ๐Ÿ’ป: Generate + debug code (๐Ÿ๐Ÿ”ง).
  • Custom Workflows ๐Ÿ”„: Automate research, emails, etc.

โœ… Benefits

  • Modular ๐Ÿงฑ: Mix and match tools.
  • Scalable ๐Ÿ“ˆ: From simple scripts to enterprise apps.
  • Open-Source ๐Ÿ: Python/JS support.

In short: LangChain = LLMs ๐Ÿค– + Your Data ๐Ÿ—‚๏ธ + Logic โšก. Build the future, one chain at a time! ๐Ÿš€๐Ÿ”—

๐Ÿš€ What is Deepseek-R1 70B?

๐ŸŒŸ Overview

Deepseek-R1โ€“70B is a cutting-edge 70-billion-parameter AI model ๐Ÿง  developed by Deepseek (a Chinese AI research company ๐Ÿ‡จ๐Ÿ‡ณ). Itโ€™s designed for advanced reasoning, coding ๐Ÿ‘ฉ๐Ÿ’ป, and complex problem-solving ๐Ÿงฉ, optimized for both performance and efficiency โšก. The variant used later in this article, deepseek-r1-distill-llama-70b on Groq, is a Llama-based 70B model distilled from R1's reasoning outputs.

๐ŸŒŸ Base Architecture

  • Model Type: A decoder-only transformer model (like GPT-4/LLaMA) ๐Ÿง , optimized for autoregressive text generation.
  • Scale: 70 billion parameters (๐Ÿฆพ๐Ÿ’ฅ), making it a โ€œfrontier modelโ€ for complex reasoning and coding.
  • Layers: Likely 80+ transformer layers (๐Ÿ”„๐Ÿ”„) stacked for deep learning.
  • Hidden Dimension: ~8,192+ units per layer (๐Ÿ“) for rich representation of text/code.
  • Attention Heads: ~64+ heads (๐Ÿ‘€๐Ÿ‘€) to process multiple linguistic/code patterns in parallel.

๐Ÿ”ง Key Technical Components

  1. Transformer Blocks:
  • Self-Attention: Uses multi-head attention to weigh relationships between tokens (e.g., โ€œifโ€ โ†” โ€œelseโ€ in code).
  • Layer Normalization: Stabilizes training (โš–๏ธ) with techniques like RMSNorm or Pre-LN.
  • Feedforward Networks: Swish/GELU activation (๐Ÿ”Œ) for non-linear processing.
  2. Tokenization:
  • Trained on a code-friendly tokenizer (๐Ÿ๐Ÿ”ฃ) with a large vocabulary (~100k+ tokens) to handle programming syntax.
  • Supports multilingual text and math symbols (โˆซ, โˆ‘, etc.) โž•โž—.
  3. Context Window: Likely 8kโ€“32k tokens (๐Ÿ“œโ†’๐Ÿ“œโ†’๐Ÿ“œ), allowing analysis of long documents or codebases.

๐Ÿ”‘ Key Features

Architecture ๐Ÿ—๏ธ:

  • Built on a transformer-based framework (like GPT/LLaMA).
  • Trained on a massive, diverse dataset (text, code, math, etc.) ๐ŸŒ๐Ÿ“š.

Specialization ๐ŸŽฏ:

  • Excels at logical reasoning ๐Ÿค” โž” โœ… (e.g., math proofs, code debugging).
  • Strong coding capabilities ๐Ÿ๐Ÿ’ป (supports Python, Java, C++, etc.).

Efficiency โšก:

  • Uses optimizations to reduce computational costs ๐Ÿ’ฐ while maintaining high accuracy ๐ŸŽฏ.

๐Ÿš€ Performance

  • Benchmarks: Competes with top models like GPT-4 ๐Ÿค– and Claude 3 ๐Ÿฆธ in reasoning tasks.
  • Coding: Outperforms many open-source models (e.g., LLaMA 2, CodeLlama) on HumanEval ๐Ÿ†.

๐Ÿ› ๏ธ Use Cases

  • AI Assistants ๐Ÿค–: Advanced chatbots for tech support, tutoring, or coding help.
  • Research ๐Ÿ”ฌ: Solving complex scientific/math problems.
  • Software Development ๐Ÿ’ป: Auto-generate code, debug, or write documentation.

In short: Deepseek-R1โ€“70B = Massive Scale ๐Ÿฆพ + Code-Optimized Design ๐Ÿ‘ฉ๐Ÿ’ป + Cutting-Edge Efficiency โšก. Itโ€™s built to tackle the hardest logic puzzles and programming challenges! ๐Ÿ”ฅ๐Ÿ”—

๐Ÿš€ What is Groq?

Groq is a hardware company (๐Ÿ–ฅ๏ธ๐Ÿ’ก) that builds ultra-fast AI accelerator chips called LPUs (Language Processing Units). These chips are designed to run open large language models (LLMs) such as Llama and Mixtral at lightning speed โšก, with deterministic performance (no lag spikes!).

๐Ÿ”‘ Key Features

LPU Architecture ๐Ÿ—๏ธ:

  • Specialized for LLM inference (not training).
  • Focuses on low latency and high throughput (think: 500+ tokens per second ๐ŸŽ๏ธ).
  • Uses a single-core design with deterministic execution (๐Ÿ”„โœ…).

Speed Demon โšก:

  • Outperforms GPUs (like NVIDIA A100/H100) for real-time AI tasks (e.g., chatbots, code generation).
  • Example: Runs Metaโ€™s Llama 3 70B faster than most cloud GPUs.

Energy Efficiency ๐ŸŒฑ:

  • Consumes less power per token vs. traditional GPUs (๐Ÿ’ฐ๐Ÿ”‹).

๐Ÿ› ๏ธ How It Works

  1. Tensor Streaming Processor (TSP):
  • Processes data in a predictable sequence (no cache misses โŒ).
  2. Software Stack ๐Ÿ“ฆ:
  • Compiles ML models (PyTorch/TensorFlow) to run natively on Groq chips.

๐ŸŒ Availability

  • Accessible via GroqCloud โ˜๏ธ (API-based).
  • Sold as on-prem hardware for enterprises ($$$).

๐Ÿ’ก Why It Matters

Groq solves the โ€œAI latency bottleneckโ€ โ€” making LLMs feel instant and scalable for apps like customer service, gaming NPCs, or medical diagnostics. Itโ€™s like giving AI a Ferrari engine ๐ŸŽ๏ธ๐Ÿ’จ instead of a bicycle!

๐Ÿš€ What is ModernBERT?

ModernBERT is an improved successor to Googleโ€™s original BERT encoder, optimized for efficiency and performance on modern hardware. It was developed by Answer.AI and LightOn; the embedding model used here, nomic-ai/modernbert-embed-base (๐Ÿง‘๐Ÿ’ป), is built on ModernBERT by Nomic AI and is designed to deliver high-quality embeddings while being faster and lighter โšก.

๐Ÿ”‘ Key Features

  1. Architecture ๐Ÿ—๏ธ:
  • Retains BERTโ€™s transformer backbone but with optimized attention mechanisms.
  • Trained on modern tokenization strategies (e.g., Unigram tokenizers).
  2. Efficiency โšก:
  • Reduced memory usage compared to vanilla BERT ๐Ÿง โ†’๐Ÿ’พ.
  • Faster inference on CPUs/GPUs (ideal for edge devices ๐Ÿ“ฑ).
  3. Use Cases ๐ŸŽฏ:
  • Semantic search (๐Ÿ”โ†’๐Ÿ“‘)
  • Retrieval-Augmented Generation (RAG) pipelines ๐Ÿค–๐Ÿ“š
  • Text classification/clustering ๐Ÿ—‚๏ธ

๐ŸŒŸ Why Itโ€™s Used in the Code

In this RAG pipeline:

embedding_model = HuggingFaceEmbeddings(model_name="nomic-ai/modernbert-embed-base")
  • Semantic Chunking ๐Ÿงฉ: ModernBERT generates embeddings to split documents into meaningful chunks.
  • Accuracy ๐ŸŽฏ: Captures nuanced relationships between text snippets better than older BERT variants.
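As a quick illustration (not part of the main pipeline), two snippets can be embedded with ModernBERT and compared by cosine similarity, the same measure the vector store uses later:

# Illustrative only: embed two snippets with ModernBERT and compare them.
import numpy as np
from langchain_huggingface import HuggingFaceEmbeddings

embedding_model = HuggingFaceEmbeddings(model_name="nomic-ai/modernbert-embed-base")
a = np.array(embedding_model.embed_query("What is RAG?"))
b = np.array(embedding_model.embed_query("Retrieval Augmented Generation combines retrieval with an LLM."))
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cosine:.3f}")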

โš™๏ธ Code Workflow Highlights

  1. ๐Ÿ› ๏ธ Setup
  • Installs the dependent libraries (LangChain/Groq/HuggingFace)
  • Connects to Groqโ€™s lightning-fast API โšก
  2. ๐Ÿ“„ Document Processing
  • PDF โ†’ text chunks using semantic splitting (๐Ÿ“‘โ†’๐Ÿงฉ)
  • Stores chunks in a Chroma vector DB (๐Ÿ—„๏ธ๐Ÿ’ก)
  3. ๐Ÿค– AI Agents Team
  • Relevancy Judge ๐Ÿ‘จโš–๏ธ: Scores context usefulness (0/1)
  • Context Picker ๐Ÿ‘: Filters the best chunks (๐ŸŽฏโ†’๐Ÿ“Œ)
  • Response Chef ๐Ÿ‘ฉ๐Ÿณ: Cooks up the final answer (๐Ÿณโ†’๐Ÿฝ๏ธ)
  4. ๐Ÿ”„ Sequential Workflow: The agents run one after another, each consuming the previous agentโ€™s output
  5. ๐Ÿ’ก Key Innovations

  • Agentic Validation ๐Ÿ”Ž: Double-checks context quality before answering
  • Groq Speed ๐ŸŽ๏ธ: Uses LPU chips for instant LLM responses
  • ModernBERT ๐Ÿค–: State-of-the-art embeddings for accurate retrieval

Code Implementation

Install required dependencies

%pip install -qU langchain langchain_community langchain_groq langchain-huggingface 
%pip install -qU pyPDF2 pdfplumber
%pip install --quiet langchain_experimental
%pip install -qU sentence-transformers
%pip install -qU transformers
%pip install -qU langchain-chroma

Set up Groq API Keys

from google.colab import userdata
import os
#
os.environ["GROQ_API_KEY"] = userdata.get('GROQ_API_KEY')
os.environ["HF_TOKEN"] = userdata.get('HF_TOKEN')

Import required dependencies

from langchain.chains import SequentialChain, LLMChain
from langchain.prompts import PromptTemplate
from langchain_groq import ChatGroq

Setup the LLM

llm_judge = ChatGroq(model="deepseek-r1-distill-llama-70b")
rag_llm = ChatGroq(model="mixtral-8x7b-32768")
#
llm_judge.verbose = True
rag_llm.verbose = True
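An optional smoke test (illustrative only) confirms both Groq-hosted models are reachable before building the pipeline:

# Optional smoke test: a trivial call to each Groq-hosted model.
print(llm_judge.invoke("Reply with the single word: ready").content)
print(rag_llm.invoke("Reply with the single word: ready").content)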

Load the Data to be processed

!mkdir data
!wget "https://arxiv.org/pdf/2410.15944v1" -O data/RAG.pdf

Process the PDF documents

from langchain.document_loaders import PDFPlumberLoader
loader = PDFPlumberLoader("data/RAG.pdf")
docs = loader.load()
print(len(docs))
print(docs[0].metadata)


################Response########################
36
{'source': 'data/RAG.pdf',
'file_path': 'data/RAG.pdf',
'page': 0,
'total_pages': 36,
'Author': '',
'CreationDate': 'D:20241022015619Z',
'Creator': 'LaTeX with hyperref',
'Keywords': '',
'ModDate': 'D:20241022015619Z',
'PTEX.Fullbanner': 'This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5',
'Producer': 'pdfTeX-1.40.25',
'Subject': '',
'Title': '',
'Trapped': 'False'}

Chunk the documents into smaller, manageable chunks

from langchain_huggingface import HuggingFaceEmbeddings
from langchain_experimental.text_splitter import SemanticChunker
#
# Note: embedding_model is defined in the "Setup the Embedding Model" section below;
# run that cell first so SemanticChunker can use it here.
text_splitter = SemanticChunker(embedding_model)
documents = text_splitter.split_documents(docs)
print(len(documents))
print(documents[0].page_content)

############################Response###############################
73
Developing Retrieval Augmented Generation
(RAG) based LLM Systems from PDFs: An
Experience Report
Ayman Asad Khan Md Toufique Hasan
Tampere University Tampere University
ayman.khan@tuni.fi mdtoufique.hasan@tuni.fi
Kai Kristian Kemell Jussi Rasku Pekka Abrahamsson
Tampere University Tampere University Tampere University
kai-kristian.kemell@tuni.fi jussi.rasku@tuni.fi pekka.abrahamsson@tuni.fi
Abstract. This paper presents an experience report on the develop-
ment of Retrieval Augmented Generation (RAG) systems using PDF
documentsastheprimarydatasource.TheRAGarchitecturecombines
generativecapabilitiesofLargeLanguageModels(LLMs)withthepreci-
sionofinformationretrieval.Thisapproachhasthepotentialtoredefine
how we interact with and augment both structured and unstructured
knowledge in generative models to enhance transparency, accuracy and
contextuality of responses. The paper details the end-to-end pipeline,
from data collection, preprocessing, to retrieval indexing and response
generation,highlightingtechnicalchallengesandpracticalsolutions.We
aim to offer insights to researchers and practitioners developing similar
systems using two distinct approaches: OpenAIโ€™s Assistant API with
GPTSeries andLlamaโ€™sopen-sourcemodels.Thepracticalimplications
of this research lie in enhancing the reliability of generative AI systems
in various sectors where domain specific knowledge and real time infor-
mationretrievalisimportant.ThePythoncodeusedinthisworkisalso
available at: GitHub. Keywords: Retrieval Augmented Generation (RAG), Large Language Models
(LLMs), Generative AI in Software Development, Transparent AI. 1 Introduction
Large language models (LLMs) excel at generating human like responses, but
base AI models canโ€™t keep up with the constantly evolving information within
dynamicsectors.Theyrelyonstatictrainingdata,leadingtooutdatedorincom-
plete answers.



Setup the Embedding Model

model_name = "nomic-ai/modernbert-embed-base"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
embedding_model = HuggingFaceEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs
)
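As an optional sanity check, you can confirm the embedding dimensionality before indexing; this base model is expected to return 768-dimensional vectors:

# Optional: verify the embedding dimensionality of modernbert-embed-base.
vec = embedding_model.embed_query("sanity check")
print(len(vec))  # expected: 768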

Setup Vector Store

from langchain_chroma import Chroma

vector_store = Chroma(
    collection_name="deepseek_collection",
    collection_metadata={"hnsw:space": "cosine"},
    embedding_function=embedding_model,
    persist_directory="./chroma_langchain_db",  # where to persist data locally; remove if not needed
)

Add Embeddings to the Vector Store

vector_store.add_documents(documents)
len(vector_store.get()["documents"])
############Response############
73

Setup the Retriever

retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 5})
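A quick, illustrative retrieval call previews what the relevancy checker agent will receive as its CONTEXT LIST:

# Illustrative only: preview the top-5 chunks returned for a sample query.
sample_docs = retriever.invoke("What is RAG?")
for i, d in enumerate(sample_docs, start=1):
    print(i, d.page_content[:120].replace("\n", " "), "...")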

Context Relevancy Checker Agent


relevancy_prompt = """You are an expert judge tasked with evaluating whether EACH CONTEXT provided in the CONTEXT LIST is self-sufficient to answer the QUERY asked.
Analyze the provided QUERY AND CONTEXT to determine if each content in the CONTEXT LIST contains relevant information to answer the QUERY.

Guidelines:
1. The content must not introduce new information beyond what's provided in the QUERY.
2. Pay close attention to the subject of statements. Ensure that attributes, actions, or dates are correctly associated with the right entities (e.g., a person vs. a TV show they star in).
3. Be vigilant for subtle misattributions or conflations of information, even if the date or other details are correct.
4. Check that the content in the CONTEXT LIST doesn't oversimplify or generalize information in a way that changes the meaning of the QUERY.

Analyze the text thoroughly and assign a relevancy score 0 or 1 where:
- 0: The content has all the necessary information to answer the QUERY
- 1: The content does not have the necessary information to answer the QUERY

```
EXAMPLE:

INPUT (for context only, not to be used for faithfulness evaluation):
What is the capital of France?

CONTEXT:
['France is a country in Western Europe. Its capital is Paris, which is known for landmarks like the Eiffel Tower.',
'Mr. Naveen Patnaik has been the Chief Minister of Odisha for 5 consecutive terms']

OUTPUT:
The Context has sufficient information to answer the query.

RESPONSE:
{{"score":0}}
```

CONTEXT LIST:
{context}

QUERY:
{retriever_query}
Provide your verdict in JSON format, with no preamble or explanation, as a list of objects with keys 'content', 'score', and 'Reasoning':
[{{"content":1,"score": <your score either 0 or 1>,"Reasoning":<why you chose the score as 0 or 1>}},
{{"content":2,"score": <your score either 0 or 1>,"Reasoning":<why you chose the score as 0 or 1>}},
...]

"""
context_relevancy_checker_prompt = PromptTemplate(input_variables=["retriever_query","context"],template=relevancy_prompt)
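To see exactly what the judge LLM receives, the template can be rendered locally with dummy values (illustrative only):

# Illustrative only: render the relevancy prompt with placeholder values.
rendered = context_relevancy_checker_prompt.format(
    retriever_query="What is RAG?",
    context=["chunk 1 text ...", "chunk 2 text ..."],
)
print(rendered[:400])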

Relevant Context Picker Agent

# Relevant Context Picker Agent
relevant_prompt = PromptTemplate(
    input_variables=["relevancy_response"],
    template="""
Your main task is to analyze the JSON structure provided as part of the Relevancy Response.
Review the Relevancy Response and do the following:
(1) Look at the JSON structure content.
(2) Analyze the 'score' key in the JSON structure content.
(3) Pick the value of the 'content' key for those entries whose 'score' value is 0.

Relevancy Response:
{relevancy_response}

Provide your verdict in JSON format with a single key 'content' and no preamble or explanation:
[{{"content":<content number>}}]

"""
)

Response Synthesis Agent

# Meaningful Context for Response Synthesis Agent
context_prompt = PromptTemplate(
    input_variables=["context_number", "context"],
    template="""
Your main task is to analyze the JSON structure provided as part of the Context Number Response and the list of contexts provided in the 'Content List', and perform the following steps:
(1) Look at the output from the Relevant Context Picker Agent.
(2) Analyze the 'content' key in the JSON structure format ({{"content":<<content_number>>}}).
(3) Retrieve the value of the 'content' key and pick the context corresponding to that element from the Content List provided.
(4) Pass on the retrieved context for each corresponding element number referred to in the Context Number Response.

Context Number Response:
{context_number}

Content List:
{context}

Provide your verdict in JSON format with two keys, 'context_number' and 'relevant_content', and no preamble or explanation:
[{{"context_number":<content1>,"relevant_content":<content corresponding to element 1 in the Content List>}},
{{"context_number":<content4>,"relevant_content":<content corresponding to element 4 in the Content List>}},
...
]
"""
)

Create Chains: Define LLM Chains for each agent

from langchain.chains import SequentialChain, LLMChain
#
context_relevancy_evaluation_chain = LLMChain(llm=llm_judge, prompt=context_relevancy_checker_prompt, output_key="relevancy_response")
#
pick_relevant_context_chain = LLMChain(llm=llm_judge, prompt=relevant_prompt, output_key="context_number")
#
relevant_contexts_chain = LLMChain(llm=llm_judge, prompt=context_prompt, output_key="relevant_contexts")
#
# final_prompt (the RAG answer prompt) is defined in the "Query and Contexts - 1" section below;
# run that cell before creating response_chain.
response_chain = LLMChain(llm=rag_llm, prompt=final_prompt, output_key="final_response")

Orchestrate using LangChain SequentialChain

context_management_chain = SequentialChain(
    chains=[context_relevancy_evaluation_chain, pick_relevant_context_chain, relevant_contexts_chain, response_chain],
    input_variables=["context", "retriever_query", "query"],
    output_variables=["relevancy_response", "context_number", "relevant_contexts", "final_response"]
)
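A side note: LLMChain and SequentialChain are marked deprecated in newer LangChain releases. A rough LCEL equivalent of the same four-step orchestration (a sketch only, reusing the same prompts and LLMs, with final_prompt defined in the next section) could look like this:

# Sketch (not the article's code): the same four-step flow expressed with LCEL runnables.
from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()
relevancy = context_relevancy_checker_prompt | llm_judge | parser
picker = relevant_prompt | llm_judge | parser
selector = context_prompt | llm_judge | parser
responder = final_prompt | rag_llm | parser

def run_pipeline(context, query):
    relevancy_response = relevancy.invoke({"context": context, "retriever_query": query})
    context_number = picker.invoke({"relevancy_response": relevancy_response})
    relevant_contexts = selector.invoke({"context_number": context_number, "context": context})
    # As in the SequentialChain above, the final answer is generated from the full retrieved
    # context; the filtered contexts are returned alongside it for inspection.
    final_response = responder.invoke({"query": query, "context": context})
    return {
        "relevancy_response": relevancy_response,
        "context_number": context_number,
        "relevant_contexts": relevant_contexts,
        "final_response": final_response,
    }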

Query and Contexts โ€” 1

query = "What is RAG?"
retriever_query = query
contexts = retriever.invoke(query)
context = [d.page_content for d in contexts]

rag_prompt = """You are a helpful assistant, very proficient in formulating clear and meaningful answers from the context provided. Based on the CONTEXT provided, please formulate
a clear, concise and meaningful answer for the QUERY asked. Please refrain from making up your own answer in case the CONTEXT
provided is not sufficient to answer the QUERY. In such a situation please respond with 'I do not know'.
QUERY:
{query}
CONTEXT:
{context}
ANSWER:
"""
final_prompt = PromptTemplate(input_variables=["query", "context"], template=rag_prompt)

Invoke the Agent Orchestration

final_output = context_management_chain({"context":context,"retriever_query":query,"query":query})

Response

# Print first response (Context Relevancy Checker Agent)
print("\n-------- ๐ŸŸฅ context_relevancy_evaluation_chain Statement ๐ŸŸฅ --------\n")
print(final_output["relevancy_response"])

# Print the context numbers picked as relevant
print("\n-------- ๐ŸŸฆ pick_relevant_context_chain Statement ๐ŸŸฆ --------\n")
print(final_output["context_number"])

print("\n-------- ๐ŸŸฅ relevant_contexts_chain Statement ๐ŸŸฅ --------\n")
print(final_output["relevant_contexts"])

print("\n-------- ๐ŸŸฅ Rag Response Statement ๐ŸŸฅ --------\n")
print(final_output["final_response"])
-------- ๐ŸŸฅ context_relevancy_evaluation_chain Statement ๐ŸŸฅ --------

<think>
Okay, so I need to evaluate each content in the provided list to see if they sufficiently answer the query "What is RAG?" without introducing new information or misattributing details. Let me go through each content one by one.

Starting with content 1: It's a figure title about the most valuable aspects of a workshop on RAG systems. It doesn't explain what RAG is, just mentions implementation. So, it doesn't answer the question.

Content 2: This one talks about enhancing the model's ability to respond and mentions the architecture of RAG systems. It refers to a figure but doesn't define RAG itself. So, it's not sufficient.

Content 3: Discusses participants' familiarity with RAG and their understanding improvement. It doesn't explain what RAG is, just how participants interacted with it. Not sufficient.

Content 4: This content seems more detailed. It talks about the design incorporating feedback, real-time retrieval capabilities, and integration between retrieval and generation. It also mentions the foundation for a RAG system and its applications. While it explains some aspects, it doesn't directly define what RAG is. So, still not sufficient on its own.

Content 5: This is about setting up a development environment for RAG. It's more about the setup process and tools, not defining RAG itself. So, it doesn't answer the query.

Content 6: This part explains that RAG models provide solutions by pulling real-time data and mentions their ability to explain and trace answers. It also talks about the guide developed and tested in a workshop, integrating RAG into workflows. This content does explain what RAG is by describing its function and purpose. It mentions that RAG models help address real-world challenges with dynamic data. So, this one does provide a sufficient explanation of what RAG is.

Content 7: This is about future trends and tools like Haystack and Elasticsearch. It explains how these tools enhance RAG models but doesn't define RAG itself. So, it doesn't answer the query.

So, only content 6 provides enough information to define RAG, while the others don't. Therefore, I'll score them accordingly.
</think>

```json
[{"content":1,"score":1,"Reasoning":"Content 1 only mentions implementation of RAG systems without defining what RAG is."},
{"content":2,"score":1,"Reasoning":"Content 2 refers to the architecture but does not define RAG."},
{"content":3,"score":1,"Reasoning":"Content 3 discusses participants' understanding but doesn't explain RAG."},
{"content":4,"score":1,"Reasoning":"Content 4 details design aspects but doesn't define RAG."},
{"content":5,"score":1,"Reasoning":"Content 5 is about setup, not defining RAG."},
{"content":6,"score":0,"Reasoning":"Content 6 explains RAG's function and purpose, providing a sufficient definition."},
{"content":7,"score":1,"Reasoning":"Content 7 discusses future trends without defining RAG."}]
```

-------- ๐ŸŸฆ pick_relevant_context_chain Statement ๐ŸŸฆ --------

<think>
Alright, I need to figure out which content number has a score of 0 in the provided JSON structure. The JSON is an array of objects, each with "content", "score", and "Reasoning" keys.

First, I'll look through each object to find where the "score" is 0. I'll start with the first object: content 1 has a score of 1, so that's not it. Next, content 2 also has a score of 1. Moving on, content 3 and 4 both have a score of 1. Content 5 has a score of 1 as well. Then, I check content 6 and see that its score is 0. Finally, content 7 has a score of 1.

So, the only content with a score of 0 is content 6. Therefore, the answer is content number 6.
</think>

{"content":6}

-------- ๐ŸŸฅ relevant_contexts_chain Statement ๐ŸŸฅ --------

<think>
Okay, so I have this JSON structure and a list of contexts. I need to figure out which content number corresponds to each element in the JSON. Let me start by understanding the JSON. It looks like it's an array of objects, each with "content", "score", and "Reasoning".

First, I need to extract the "content" numbers from each object. The first object has "content":1, the second "content":2, and so on up to "content":7. So, I need to look at each of these content numbers and find the corresponding context in the Content List provided.

Looking at the Content List, it's an array with 7 elements. Each element corresponds to a content number from 1 to 7. So, content 1 is the first element in the list, content 2 is the second, etc.

Now, I need to map each "content" number from the JSON to the correct context in the Content List. For example, the first object in the JSON has "content":1, so I take the first element from the Content List. The second object has "content":2, so I take the second element, and so on.

I'll go through each object in the JSON and retrieve the corresponding context. It's important to make sure I'm matching the correct content number to the right context. I should double-check each one to avoid mistakes.

Once I've mapped all the content numbers, I'll structure the result as a JSON array with objects containing "context_number" and "relevant_content". Each object will have the content number and the corresponding context from the list.

I think that's all I need to do. Now, I'll put it all together into the required JSON format.
</think>

```json
[
{"context_number":1,"relevant_content":"Fig.6: Most Valuable Aspects of the Workshop. Implementation of RAG systems."},
{"context_number":2,"relevant_content":"Fig.1: Architecture of Retrieval Augmented Generation(RAG) system. 2."},
{"context_number":3,"relevant_content":"Fig.4: Participantsโ€™ Familiarity with RAG Systems. Priortoattendingtheworkshop,themajorityofparticipantsreportedarea-\nsonable level of familiarity with RAG systems.This indicated that the audience\nhadafoundationalunderstandingoftheconceptspresented,allowingformorein\ndepthdiscussionsduringtheworkshop.Aftertheworkshop,therewasanotable\nimprovement in participantsโ€™ understanding of RAG systems. Fig.5: Participantsโ€™ Improvement in Understanding RAG Systems. Themajorityofparticipantshighlightedthepracticalcodingexercisesasthe\nmostvaluableaspectoftheworkshop,whichhelpedthembetterunderstandthe\n31\n"},
{"context_number":4,"relevant_content":"The design also incorporates the feedback from a diverse group of partici-\npantsduringaworkshopsession,whichfocusedonthepracticalaspectsofimple-\nmenting RAG systems. Their input highlighted the effectiveness of the systemโ€™s\nreal-timeretrievalcapabilities,particularlyinknowledge-intensivedomains,and\nunderscored the importance of refining the integration between retrieval and\ngeneration to enhance the transparency and reliability of the systemโ€™s outputs. This design sets the foundation for a RAG system capable of addressing the\nneeds of domains requiring precise, up-to-date information. 4 Results: Step-by-Step Guide to RAG\n4.1 Setting Up the Environment\nThis section walks you through the steps required to set up a development\nenvironmentforRetrievalAugmentedGeneration(RAG)onyourlocalmachine. We will cover the installation of Python, setting up a virtual environment and\nconfiguring an IDE (VSCode)."},
{"context_number":5,"relevant_content":"RAG\nmodels provide practical solutions with pulling in real time data from provided\nsources. The ability to explain and trace how RAG models reach their answers\nalsobuildstrustwhereaccountabilityanddecisionmakingbasedonrealevidence\nis important. Inthispaper,wedevelopedaRAGguidethatwetestedinaworkshopsetting,\nwhere participants set up and deployed RAG systems following the approaches\nmentioned. This contribution is practical, as it helps practitioners implement\nRAG models to address real world challenges with dynamic data and improved\naccuracy.Theguideprovidesusersclear,actionablestepstointegrateRAGinto\ntheir workflows, contributing to the growing toolkit of AI driven solutions. With that, RAG also opens new research avenues that can shape the future\nofAIandNLPtechnologies.Asthesemodelsandtoolsimprove,therearemany\npotentialareasforgrowth,suchasfindingbetterwaystosearchforinformation,\nadapting to new data automatically, and handling more than just text (like\nimages or audio). Recent advancements in tools and technologies have further\naccelerated the development and deployment of RAG models. As RAG models\ncontinue to evolve, several emerging trends are shaping the future of this field. 1. Haystack: An open-source framework that integrates dense and sparse re-\ntrieval methods with large-scale language models. Haystack supports real-\ntime search applications and can be used to develop RAG models that per-\nform tasks such as document retrieval, question answering, and summariza-\ntion [4]. 2. Elasticsearch with Vector Search: Enhanced support for dense vector\nsearch capabilities, allowing RAG models to perform more sophisticated re-\ntrieval tasks. Elasticsearchโ€™s integration with frameworks like Faiss enables\n33\n"},
{"context_number":6,"relevant_content":"The design also incorporates the feedback from a diverse group of partici-\npantsduringaworkshopsession,whichfocusedonthepracticalaspectsofimple-\nmenting RAG systems. Their input highlighted the effectiveness of the systemโ€™s\nreal-timeretrievalcapabilities,particularlyinknowledge-intensivedomains,and\nunderscored the importance of refining the integration between retrieval and\ngeneration to enhance the transparency and reliability of the systemโ€™s outputs. This design sets the foundation for a RAG system capable of addressing the\nneeds of domains requiring precise, up-to-date information. 4 Results: Step-by-Step Guide to RAG\n4.1 Setting Up the Environment\nThis section walks you through the steps required to set up a development\nenvironmentforRetrievalAugmentedGeneration(RAG)onyourlocalmachine. We will cover the installation of Python, setting up a virtual environment and\nconfiguring an IDE (VSCode)."},
{"context_number":7,"relevant_content":"RAG\nmodels provide practical solutions with pulling in real time data from provided\nsources. The ability to explain and trace how RAG models reach their answers\nalsobuildstrustwhereaccountabilityanddecisionmakingbasedonrealevidence\nis important. Inthispaper,wedevelopedaRAGguidethatwetestedinaworkshopsetting,\nwhere participants set up and deployed RAG systems following the approaches\nmentioned. This contribution is practical, as it helps practitioners implement\nRAG models to address real world challenges with dynamic data and improved\naccuracy.Theguideprovidesusersclear,actionablestepstointegrateRAGinto\ntheir workflows, contributing to the growing toolkit of AI driven solutions. With that, RAG also opens new research avenues that can shape the future\nofAIandNLPtechnologies.Asthesemodelsandtoolsimprove,therearemany\npotentialareasforgrowth,suchasfindingbetterwaystosearchforinformation,\nadapting to new data automatically, and handling more than just text (like\nimages or audio). Recent advancements in tools and technologies have further\naccelerated the development and deployment of RAG models. As RAG models\ncontinue to evolve, several emerging trends are shaping the future of this field. 1. Haystack: An open-source framework that integrates dense and sparse re-\ntrieval methods with large-scale language models. Haystack supports real-\ntime search applications and can be used to develop RAG models that per-\nform tasks such as document retrieval, question answering, and summariza-\ntion [4]. 2. Elasticsearch with Vector Search: Enhanced support for dense vector\nsearch capabilities, allowing RAG models to perform more sophisticated re-\ntrieval tasks. Elasticsearchโ€™s integration with frameworks like Faiss enables\n33\n"}
]

-------- ๐ŸŸฅ Rag Response Statement ๐ŸŸฅ --------

RAG stands for Retrieval Augmented Generation. It is a system that combines real-time data retrieval with large-scale language models to provide practical solutions for addressing challenges with dynamic data and improved accuracy. RAG models are capable of explaining and tracing how they reach their answers, which builds trust in scenarios where accountability and decision-making based on real evidence is important. The design of RAG systems can be refined based on feedback to enhance their transparency, reliability, and real-time retrieval capabilities, particularly in knowledge-intensive domains. Recent advancements in tools and technologies have accelerated the development and deployment of RAG models, and emerging trends continue to shape the future of this field.

Query and Contexts โ€” 2

contexts = retriever.invoke("What are the key steps that a RAG Process is structured into?")
context = [d.page_content for d in contexts]
query = "What are the key steps that a RAG Process is structured into?"
retriever_query = query

Invoke the Agent Orchestration

final_output = context_management_chain({"context":context,"retriever_query":query,"query":query})

Response

# Print first response (Context Relevancy Checker Agent)
print("\n-------- ๐ŸŸฅ context_relevancy_evaluation_chain Statement ๐ŸŸฅ --------\n")
print(final_output["relevancy_response"])

# Print the context numbers picked as relevant
print("\n-------- ๐ŸŸฆ pick_relevant_context_chain Statement ๐ŸŸฆ --------\n")
print(final_output["context_number"])

print("\n-------- ๐ŸŸฅ relevant_contexts_chain Statement ๐ŸŸฅ --------\n")
print(final_output["relevant_contexts"])

print("\n-------- ๐ŸŸฅ Rag Response Statement ๐ŸŸฅ --------\n")
print(final_output["final_response"])
-------- ๐ŸŸฅ context_relevancy_evaluation_chain Statement ๐ŸŸฅ --------

<think>
Okay, I need to evaluate each content in the provided CONTEXT LIST to determine if it contains enough information to answer the query: "What are the key steps that a RAG Process is structured into?"

First, I'll look at each content piece one by one.

Content 1: This is about the most valuable aspects of a workshop on RAG systems. It mentions feedback from participants and the effectiveness of real-time retrieval but doesn't outline any steps of the RAG process itself. So, it's not relevant.

Content 2: Discusses the design incorporating feedback, effectiveness of real-time retrieval, and the integration between retrieval and generation. It also talks about setting up a development environment, like installing Python and VSCode, which are setup steps but not the key steps of the RAG process. So, partially relevant but not enough.

Content 3: Covers participants' familiarity and improvement in understanding RAG systems. It mentions practical coding exercises but doesn't detail the RAG process steps. Not relevant.

Content 4: Talks about enhancing the model's ability to respond and mentions real-time data pulling. It also briefly mentions the architecture of RAG but doesn't break it down into steps. Not sufficient.

Content 5: This seems more detailed. It explains that RAG models provide practical solutions by pulling real-time data and mentions integrating RAG into workflows. It also talks about tools like Haystack and Elasticsearch with vector search. However, it doesn't explicitly list the key steps of the RAG process. It's more about applications and tools rather than the structured steps.

After reviewing all, none of the contents provide a clear, step-by-step structure of the RAG process. They discuss aspects like setup, tools, and applications but don't outline the key steps. Therefore, each content doesn't have the necessary information to answer the query.
</think>

{"score":1}

-------- ๐ŸŸฆ pick_relevant_context_chain Statement ๐ŸŸฆ --------

<think>
Alright, I need to determine which content from the provided list has a 'score' of 0 and then extract the 'content' value from that entry.

Looking at the JSON response, I see an array of objects. Each object has a 'content' and a 'score'. I'll go through each one:

1. The first object has 'content': 'Content 1' and 'score': 0. This is the one I'm interested in.
2. The second object has 'score': 1, so I skip it.
3. There's only two objects in the array, so I stop here.

The content with a score of 0 is 'Content 1'.
</think>

{"content": "Content 1"}

-------- ๐ŸŸฅ relevant_contexts_chain Statement ๐ŸŸฅ --------

<think>
Okay, let's tackle this problem step by step. I need to analyze the JSON structure provided and the list of contexts to determine the relevant content for each context number specified in the 'Context Number Response'.

First, I'll look at the 'Context Number Response' which is a JSON array containing objects with 'context_number' and 'score'. My task is to find the entries where the 'score' is 0 and extract the corresponding 'content' from the 'Content List'.

Looking at the 'Context Number Response', I see two objects:
1. The first object has 'context_number': 1 and 'score': 0.
2. The second object has 'context_number': 4 and 'score': 1.

Since I'm only interested in entries with a 'score' of 0, I'll focus on the first object where 'context_number' is 1.

Next, I'll refer to the 'Content List' provided. The list has five elements, each corresponding to a context number from 1 to 5. I need to find the content for context number 1.

Looking at the 'Content List', the first element (index 0) is "Fig.6: Most Valuable Aspects of the Workshop. Implementation of RAG systems." This corresponds to context number 1.

Therefore, the relevant content for context number 1 is "Fig.6: Most Valuable Aspects of the Workshop. Implementation of RAG systems."

Now, I'll format the result as a JSON array with two keys: 'context_number' and 'relevant_content' for each relevant entry. Since only context number 1 has a score of 0, I'll include only that in the result.

So, the final JSON output should be:
[{"context_number":1,"relevant_content":"Fig.6: Most Valuable Aspects of the Workshop. Implementation of RAG systems."}]
</think>

```json
[
{
"context_number": 1,
"relevant_content": "Fig.6: Most Valuable Aspects of the Workshop. Implementation of RAG systems."
}
]
```

-------- ๐ŸŸฅ Rag Response Statement ๐ŸŸฅ --------

Based on the provided context, the RAG (Retrieval Augmented Generation) process seems to be structured into the following key steps:

1. Setting Up the Environment: This involves installing Python, setting up a virtual environment, and configuring an Integrated Development Environment (IDE), such as VSCode.

2. Implementing the RAG System: This involves refining the integration between retrieval and generation to enhance the transparency and reliability of the system's outputs. The system's real-time retrieval capabilities are particularly effective in knowledge-intensive domains.

3. Enhancing the Model's Ability to Respond: This can be achieved through practical coding exercises, which help users better understand the RAG system.

4. Integrating RAG into Workflows: The RAG guide provides clear and actionable steps to help users integrate RAG into their workflows, contributing to the growing toolkit of AI-driven solutions.

Additionally, there are emerging trends and tools shaping the future of RAG, such as Haystack, an open-source framework that integrates dense and sparse retrieval methods with large-scale language models, and Elasticsearch with Vector Search, which enhances support for dense vector search capabilities.

Query and Contexts โ€” 3

contexts = retriever.invoke("What are the drawbacks of RAG approach?")
context = [d.page_content for d in contexts]
query = "What are the drawbacks of RAG approach?"
retriever_query = query

Invoke the Agent Orchestration

final_output = context_management_chain({"context":context,"retriever_query":query,"query":query})

Response

# Print first response (Context Relevancy Checker Agent)
print("\n-------- ๐ŸŸฅ context_relevancy_evaluation_chain Statement ๐ŸŸฅ --------\n")
print(final_output["relevancy_response"])

# Print the context numbers picked as relevant
print("\n-------- ๐ŸŸฆ pick_relevant_context_chain Statement ๐ŸŸฆ --------\n")
print(final_output["context_number"])

print("\n-------- ๐ŸŸฅ relevant_contexts_chain Statement ๐ŸŸฅ --------\n")
print(final_output["relevant_contexts"])

print("\n-------- ๐ŸŸฅ Rag Response Statement ๐ŸŸฅ --------\n")
print(final_output["final_response"])
-------- ๐ŸŸฅ context_relevancy_evaluation_chain Statement ๐ŸŸฅ --------

<think>
Okay, I need to evaluate each context in the CONTENT LIST to see if it provides sufficient information to answer the query: "What are the drawbacks of RAG approach?"

The query is asking specifically about the drawbacks, so I'm looking for any mention of cons, limitations, or disadvantages related to RAG.

Looking at the first content: It talks about the most valuable aspects of a workshop and implementation of RAG systems. It doesn't mention any drawbacks, so this one doesn't help. I'll score it 1.

The second content discusses a decision framework comparing Fine-Tuning, RAG, and Base Models. It lists drawbacks of RAG, such as limited performance on domain-specific queries needing customization. This directly addresses the query, so it gets a 0.

The third content is about the role of PDFs in RAG, highlighting their importance. No drawbacks mentioned here, so score 1.

The fourth content describes RAG models, their benefits, and some future trends. It mentions that RAG requires complex infrastructure and is resource-intensive, which are drawbacks. So, this gets a 0.

The fifth content talks about customer support and RAG's ability to handle dynamic info. It doesn't mention any drawbacks, so score 1.
</think>

{"score":0}
{"score":0}
{"score":1}
{"score":0}
{"score":1}

Wait, but the user instructed to provide the response in JSON format with each content evaluated. Let me correct that.

Actually, the correct approach is to evaluate each content in the list and assign a score of 0 or 1 based on whether it contains the necessary information.

Here's the step-by-step evaluation:

1. Content 1: No mention of drawbacks. Score 1.
2. Content 2: Explicitly lists drawbacks. Score 0.
3. Content 3: Talks about PDFs, no drawbacks. Score 1.
4. Content 4: Discusses infrastructure needs and resource intensity. Score 0.
5. Content 5: Use case examples, no drawbacks. Score 1.

So the final JSON should reflect each content's score individually.

```json
[
{"content": "1", "score": 1, "Reasoning": "Does not mention drawbacks of RAG."},
{"content": "2", "score": 0, "Reasoning": "Explicitly lists drawbacks of RAG."},
{"content": "3", "score": 1, "Reasoning": "Focuses on PDFs, no drawbacks mentioned."},
{"content": "4", "score": 0, "Reasoning": "Discusses RAG's infrastructure and resource requirements as drawbacks."},
{"content": "5", "score": 1, "Reasoning": "Describes use cases without mentioning drawbacks."}
]
```

-------- ๐ŸŸฆ pick_relevant_context_chain Statement ๐ŸŸฆ --------

<think>
Alright, I need to help the user by analyzing the JSON structure they provided. Their main goal is to evaluate each content item to see if it addresses the drawbacks of the RAG approach.

First, I'll look at each content in the list:

1. Content 1 talks about valuable aspects and implementation of RAG systems. No drawbacks mentioned here, so I'll score it 1.
2. Content 2 discusses a decision framework comparing different models and explicitly lists drawbacks of RAG. This is relevant, so I'll score it 0.
3. Content 3 focuses on the role of PDFs in RAG. No drawbacks here, so score 1.
4. Content 4 mentions the need for complex infrastructure and resources as drawbacks. Score 0.
5. Content 5 is about customer support and handling dynamic info. No drawbacks mentioned, so score 1.

Now, I need to pick the 'content' key where the 'score' is 0. That means I'm looking for content numbers 2 and 4.

I should present this in the required JSON format without any extra explanation, just the content numbers with scores 0.
</think>

{"content": "2"}
{"content": "4"}

-------- ๐ŸŸฅ relevant_contexts_chain Statement ๐ŸŸฅ --------

<think>
Alright, I need to help the user by evaluating each content item to determine if it addresses the drawbacks of the RAG approach.

Looking at the Context Number Response, the user is asking for content numbers 1, 4, and 5.

1. **Content 1**: This talks about the valuable aspects and implementation of RAG systems. It doesn't mention any drawbacks, so it's not relevant.
2. **Content 4**: This explicitly lists the drawbacks of RAG, such as requiring complex infrastructure and being resource-intensive. This is relevant.
3. **Content 5**: This focuses on base models and when to use them, not discussing RAG's drawbacks. Not relevant.

So, only content 4 is relevant. I'll format the response as a JSON array with the relevant content number and its corresponding content from the list.
</think>

```json
[
{
"context_number": 4,
"relevant_content": "Drawbacks:RAGrequirescomplexinfrastructure,includingvectordatabases\nand effective retrieval pipelines, and can be resource-intensive during inference."
}
]
```

-------- ๐ŸŸฅ Rag Response Statement ๐ŸŸฅ --------

The drawbacks of the Retrieval-Augmented Generation (RAG) approach include:

1. Limited performance on domain-specific queries or tasks that need high levels of customization.
2. Requirement of complex infrastructure, such as vector databases and effective retrieval pipelines, which can be resource-intensive during inference.

These drawbacks make RAG less suitable for applications that require broad generalization, low-cost deployment, or rapid prototyping. In such cases, base models without fine-tuning or RAG are more appropriate.

๐ŸŽฏ Conclusion

The implemented RAG pipeline demonstrates a robust framework for building context-aware AI systems by integrating cutting-edge tools:

  1. Efficient Embeddings: Using ModernBERT (nomic-ai/modernbert-embed-base) for semantic chunking ๐Ÿงฉ enables precise document segmentation, balancing context retention and computational efficiency.
  2. Speed-Optimized Inference: Leveraging Groqโ€™s LPUs โšก with mixtral-8x7b and deepseek-r1-distill-llama-70b ensures low-latency response generation, critical for real-time applications like chatbots ๐Ÿ’ฌ or code assistants ๐Ÿ‘ฉ๐Ÿ’ป.
  3. Agentic Workflow: The SequentialChain orchestrates a multi-step validation process:
  • Relevancy Checker ๐Ÿค– filters out irrelevant context (reducing hallucinations).
  • Context Picker ๐ŸŽฏ dynamically selects optimal text chunks.
  • Response Synthesizer ๐Ÿง  generates answers grounded in verified data.

โœ… Key Strengths

  • Accuracy: Multi-agent validation ensures responses align strictly with provided context (๐Ÿ“šโ†’โœ…).
  • Scalability: Chroma vector store ๐Ÿ—„๏ธ allows seamless scaling to large document corpora.
  • Interpretability: Transparent JSON-based evaluation logs enable debugging and audit trails (๐Ÿ”โ†’๐Ÿ“Š).

In summary, this implementation bridges the gap between raw data and actionable insights, showcasing how modern AI frameworks (LangChain), efficient hardware (Groq), and state-of-the-art models (ModernBERT) can collaboratively solve complex information retrieval challenges.

๐Ÿš€ Practical Applications

This pipeline is ideal for:

  • Enterprise knowledge bases ๐Ÿข๐Ÿ“‘
  • Legal/medical document analysis โš–๏ธ๐Ÿฅ
  • Academic research tools ๐Ÿ”ฌ๐Ÿ“š

๐Ÿ”ฎ Future Enhancements

  • Expand context window (32k+ tokens) for handling lengthy documents ๐Ÿ“œ.
  • Integrate hybrid search (semantic + keyword) for improved recall ๐Ÿ” (see the sketch after this list).
  • Optimize cost-efficiency with quantized models ๐Ÿ’ฐ.
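For the hybrid-search idea above, one possible sketch combines a BM25 keyword retriever with the existing Chroma retriever via LangChain's EnsembleRetriever (assumes the rank_bm25 package is installed and reuses the documents and retriever objects from earlier; the weights are illustrative):

# Hybrid retrieval sketch: BM25 (keyword) + Chroma (semantic) with weighted fusion.
from langchain_community.retrievers import BM25Retriever
from langchain.retrievers import EnsembleRetriever

bm25_retriever = BM25Retriever.from_documents(documents)
bm25_retriever.k = 5
hybrid_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, retriever],  # keyword + semantic
    weights=[0.4, 0.6],                      # illustrative weights
)
hybrid_contexts = hybrid_retriever.invoke("What is RAG?")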

Note: The above article has been formulated by browsing through online resources.
