Real-Time Corrections: Microsoft’s Revolutionary Tool to Tackle AI “Hallucinations”

Germano Costi

Artificial Intelligence (AI) has made significant strides in recent years, but with progress comes new challenges.

One of the most debated issues is the problem of AI “hallucinations” — inaccurate or entirely fabricated responses generated by language models. Microsoft has just introduced an innovative tool that could radically change how we address this growing concern.

The Hallucinating AI: A Problem of Unreliable Answers

When we talk about hallucinations in AI, we refer to situations where generative models, like popular chatbots, produce responses that have no basis in the data or documents they are supposed to consult. These responses can be inaccurate or even misleading, posing risks to both users and companies that rely on these tools in critical environments.

On September 24, 2024, Microsoft took on this challenge, announcing a new tool within its Azure AI Content Safety suite that can not only detect hallucinations but also correct them in real time.

How Does the New Hallucination Corrector Work?

Microsoft’s mechanism is simple to understand but extremely powerful. The system operates through the “groundedness” detection API, which assesses whether an AI-generated response is based on user-selected source materials. When the model detects an “ungrounded” sentence (i.e., one not supported by the reference data), the correction process begins.

Here’s how it works (a request sketch follows the list):

  1. The developer enables the correction capability.
  2. When an ungrounded sentence is detected, a new request is sent to the generative model for correction.
  3. The model checks the sentence against the user-provided reference document.
  4. Sentences that do not relate to the reference material can either be removed or rewritten to align with the correct content.
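
As a concrete illustration, here is roughly what such a call might look like. This is a minimal sketch, not Microsoft’s reference code: the endpoint path, api-version, and field names are assumptions taken from the public-preview documentation available at the time of writing, so verify them against the current Azure AI Content Safety API reference before use.

```python
# Sketch: calling the groundedness detection API with correction enabled.
# Endpoint path, api-version, and field names are assumptions based on the
# public preview docs; verify against the current API reference.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-content-safety-key>"                             # placeholder

url = f"{ENDPOINT}/contentsafety/text:detectGroundedness"
params = {"api-version": "2024-09-15-preview"}  # assumed preview version
headers = {
    "Ocp-Apim-Subscription-Key": API_KEY,
    "Content-Type": "application/json",
}
body = {
    "domain": "Generic",
    "task": "Summarization",
    # The AI-generated text to check:
    "text": "Our premium plan includes unlimited storage.",
    # The user-provided reference material the answer must be grounded in:
    "groundingSources": [
        "The premium plan includes 1 TB of storage and priority support."
    ],
    # Step 1 from the list above: the developer opts in to correction.
    "correction": True,
    # Note: the preview docs indicate correction/reasoning may also require
    # linking your own Azure OpenAI deployment (an "llmResource" block);
    # omitted here for brevity.
}

response = requests.post(url, params=params, headers=headers, json=body)
result = response.json()

# The response flags ungrounded sentences; with correction enabled it is
# expected to also return a rewritten, grounded version of the text.
print(result.get("ungroundedDetected"))
print(result.get("correctionText"))  # field name assumed from preview docs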

The result? AI that generates far more reliable and accurate content, minimizing the risk of errors.

AI Content Safety: Comprehensive Security

But Microsoft didn’t stop there. Alongside the hallucination correction tool, it announced the public preview of a new hybrid version of its content safety system, Azure AI Content Safety (AACS).

This allows developers to implement content moderation mechanisms both in the cloud and directly on devices.

The on-device feature enables real-time checks even without an internet connection, improving security and reliability in environments with strict latency, connectivity, or data-privacy constraints.
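
On the cloud side, the flow looks like classic AACS text moderation. Below is a minimal sketch using the azure-ai-contentsafety Python SDK with placeholder endpoint and key; the embedded, on-device variant ships as a separate preview runtime and isn’t shown here.

```python
# Minimal sketch of cloud-side text moderation with the
# azure-ai-contentsafety SDK (pip install azure-ai-contentsafety).
# Endpoint and key are placeholders for your own resource.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-content-safety-key>"),
)

result = client.analyze_text(AnalyzeTextOptions(text="Text to moderate."))

# Each harm category (Hate, SelfHarm, Sexual, Violence) comes back with a
# severity score; apps typically block or escalate above a chosen threshold.
for item in result.categories_analysis:
    print(item.category, item.severity)
```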

Code Generation Security: Another Major Step Forward

Lastly, a feature that will interest programmers: Microsoft has released a tool for detecting protected material in AI-generated code. It is now available for customers using generative AI models that produce code, and it checks whether the output reproduces portions of existing code protected by copyright or patents.
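
This check is exposed as another endpoint in the same suite. The sketch below assumes a REST route and field names based on the preview announcement (text:detectProtectedMaterialForCode with a code field); treat every name here as an assumption and confirm it in the Azure documentation before relying on it.

```python
# Sketch: screening AI-generated code for protected material.
# Route, api-version, and response fields are assumptions based on the
# public preview announcement; confirm against the Azure docs.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-content-safety-key>"                             # placeholder

url = f"{ENDPOINT}/contentsafety/text:detectProtectedMaterialForCode"
params = {"api-version": "2024-09-15-preview"}  # assumed preview version
headers = {
    "Ocp-Apim-Subscription-Key": API_KEY,
    "Content-Type": "application/json",
}
body = {
    # The model-generated code snippet to screen:
    "code": "def quicksort(arr): ...",
}

response = requests.post(url, params=params, headers=headers, json=body)
analysis = response.json().get("protectedMaterialAnalysis", {})

# If a match is found, the response is expected to indicate detection and
# cite the matching public sources and their licenses.
print(analysis.get("detected"))
```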

These updates mark a significant step forward in the moderation and management of AI-generated content, promoting safer and more reliable AI applications across industries, research, and everyday apps.

Conclusion

Microsoft’s new direction in AI content safety is a game-changer. It not only improves the accuracy and quality of AI-generated responses but also strengthens user trust in AI. Developers now have more advanced tools to ensure that AI consistently produces reliable, secure, and fact-based content.
