Haig Douzdjian
Feedback Intelligence
3 min read · May 31, 2024


Part 2 (Connectors): How to Cut Through the Feedback Noise

Welcome back intelligentsias! (Get it? Because we all love Feedback Intelligence 😎)

In Part 1, we explored the limitations of current evaluation methods, introduced the concepts of explicit and implicit feedback, and highlighted why no one is able to harness this feedback due to its noisy, unstructured nature.

Now, let’s go a level deeper to understand how to gather this feedback and cut through the feedback noise via our Connectors.

The 3 most common frameworks to utilize modern LLMs are:

We handle them all! No matter the framework, making sense of noisy feedback data remains the Achilles' heel for AI teams as they attempt to improve LLM product reliability.

Here’s how we cut through the noise to enable valuable insights, such as root-cause analysis, sentiment analysis, and issue clustering:

To gather Implicit Feedback — we hand over a function to capture LLM calls and responses. More specifically:

  • Queries, responses, user IDs, transaction IDs (for chains), underlying prompts, respective hyper-parameters (if available), call traces (if available), etc.
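A minimal sketch of what such a capture function might look like. The names here (`capture_llm_call`, `client.complete`, `log_fn`) are illustrative assumptions, not Feedback Intelligence's actual API: the idea is simply to wrap each LLM call so the query, response, hyper-parameters, and a transaction ID for chaining are logged as a side effect.

```python
import time
import uuid

def capture_llm_call(client, log_fn, model, prompt, **params):
    """Wrap an LLM call so the query, response, and metadata are
    logged alongside the product's normal flow.

    `client.complete` and `log_fn` are placeholders for whatever
    LLM client and logging sink the product already uses.
    """
    transaction_id = str(uuid.uuid4())  # lets chained calls be grouped later
    start = time.time()
    response = client.complete(model=model, prompt=prompt, **params)
    log_fn({
        "transaction_id": transaction_id,
        "model": model,
        "prompt": prompt,
        "response": response,
        "hyperparameters": params,        # temperature, max_tokens, etc.
        "latency_s": time.time() - start,
    })
    return response
```

Because the wrapper returns the response unchanged, it can be dropped in front of existing call sites without altering product behavior.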

To gather Explicit Feedback — there are three possibilities that can be combined as needed:

  • [In-App] If the data is already stored or processed appropriately (e.g., RDS or S3): we provide a connector to the DB that listens for newly processed information and pulls it directly into our Insights.
  • [In-App] If no processing exists: we provide a custom function to capture thumbs up, thumbs down, star ratings, and/or raw text input along with corresponding user and thread metadata.
  • [External Channel] If user correspondence takes place outside of the product (e.g., Slack, Gmail, Teams, or Outlook): we provide a custom bot or integration (depending on the application) to capture raw text input.
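The second possibility above (capturing thumbs, stars, and raw text in-app) can be sketched as a small event schema plus a capture function. Field names and the `sink` callable are assumptions for illustration, not Feedback Intelligence's actual schema:

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ExplicitFeedback:
    """One explicit-feedback event with its user and thread metadata."""
    user_id: str
    thread_id: str
    thumbs: Optional[bool] = None   # True = thumbs up, False = thumbs down
    stars: Optional[int] = None     # e.g., a 1-5 star rating
    text: Optional[str] = None      # raw free-text comment
    channel: str = "in-app"         # or "slack", "email", ...
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def capture_feedback(sink, **kwargs):
    """Validate one feedback event and forward it to a sink
    (a queue, DB writer, or API call in a real product)."""
    event = ExplicitFeedback(**kwargs)
    if event.stars is not None and not 1 <= event.stars <= 5:
        raise ValueError("stars must be between 1 and 5")
    sink(asdict(event))
    return event
```

The same schema works for the external-channel case: a Slack or email integration would just construct the event with a different `channel` value.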

To gather additional parameters for our Insights, there are many possibilities; the more provided, the deeper the insights:

  • Knowledge base or document store (e.g., S3).
  • Custom or foundation model retrieval hyper-parameters (relevant information retrieval, reranking, and context framing/prompt engineering).
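To make the two bullets above concrete, here is one possible shape for that additional context. Every field name below is a hypothetical example of what an AI team might share, not an actual Feedback Intelligence configuration format:

```python
# Illustrative context an AI team might provide alongside its feedback
# connectors; all keys and values here are assumptions for the sketch.
insights_context = {
    "knowledge_base": {
        "store": "s3",
        "bucket": "my-product-docs",          # hypothetical bucket name
        "embedding_model": "text-embedding-3-small",
    },
    "retrieval": {
        "top_k": 8,                # candidate chunks pulled per query
        "reranker": "cross-encoder",  # optional reranking stage
        "rerank_top_n": 3,         # chunks kept after reranking
    },
    "prompting": {
        "context_window_tokens": 8192,
        "template": "answer using only the provided context",
    },
}
```

Sharing retrieval settings like these lets downstream analysis distinguish, say, a retrieval failure (wrong chunks fetched) from a prompting failure (right chunks, wrong framing).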

🥇 With all of these connectors in place, we are able to capture all of the unstructured noise from LLM products -> structure the data -> feed it back to the AI team in a usable form via our Insights.

In Part 3, we will walk through how to harness feedback effectively via our Insights.

In the finale, Part 4, we will understand how to utilize feedback and insights to make LLM products better via our Resolutions.

🏄 But wait, there’s more! In the coming weeks we will ship product demos, case studies, and our Open-Core so any PM or builder can leverage this magic for free!

Reach out for questions, collaboration, or just to say hi! 👋

Co-author: movchinar
