Understanding how the Azure OpenAI “use your data” feature works

Saverio Proto · Published in Microsoft Azure · 5 min read · Sep 27, 2023

As of September 2023, Azure OpenAI has a preview feature called “Use your data.” To understand what this feature is, let’s first talk about Grounding and Retrieval Augmented Generation.

Grounding

Grounding with Large Language Models (LLMs) like GPT-3.5 means giving specific information or context to the model and adding it to the prompt. This helps the model give better and more accurate answers. Instead of only relying on what the model learned during training, you include extra information in the prompt to reduce hallucinations and make the generated responses more specific.
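The idea can be sketched in a few lines of Python. This is a minimal, illustrative prompt template, not a prescribed format; the function name and the wording of the instructions are my own:

```python
def build_grounded_prompt(question: str, context_chunks: list[str]) -> str:
    """Prepend retrieved context so the model answers from it, not from memory."""
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# The augmented prompt is what actually gets sent to the LLM.
prompt = build_grounded_prompt(
    "What is the refund window?",
    ["Orders can be refunded within 30 days of purchase."],
)
print(prompt)
```

The extra context travels inside the prompt itself; the model is not retrained or fine-tuned.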

Retrieval Augmented Generation (RAG)

RAG, or Retrieval Augmented Generation, is the process of providing context to a prompt by using document retrieval through a search service. While it’s possible to perform RAG using a vector database, it’s not a requirement. The challenging part lies in formulating a search query that can retrieve relevant data for grounding the prompt. Vector databases are commonly used for RAG because they allow you to calculate the embedding of the user input and then perform a vector search in the database to find content that matches the user’s input.
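To make the vector-search step concrete, here is a toy sketch. The `embed` function below is a bag-of-words stand-in for a real embedding model (in practice you would call an embedding API such as text-embedding-ada-002); only the ranking-by-cosine-similarity mechanic is the point:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "the billing portal accepts credit cards and wire transfers",
    "kubernetes pods are scheduled onto nodes by the scheduler",
]
query = "which credit cards does the billing portal accept"

# Rank documents by similarity to the query, as a vector database would,
# and use the best match to ground the prompt.
best = max(documents, key=lambda d: cosine(embed(query), embed(d)))
print(best)
```

A real vector database does the same ranking at scale, with dense embeddings and approximate nearest-neighbor indexes instead of exact scans.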

There are more advanced methods for conducting RAG. One such method I’m familiar with is the Hypothetical Document Embedding (HyDE) technique, which improves accuracy when searching for similar documents; it is used within the Cheshire Cat AI framework. Another well-known open-source library that helps you compose your own RAG pipeline is LlamaIndex.
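The HyDE idea is simple enough to sketch: instead of embedding the raw user query, you first ask an LLM to write a hypothetical answer, then embed that richer text and use it for the similarity search. In this sketch `generate_hypothetical` is a stub standing in for a real chat-completion call, and `search_fn` stands in for a real vector search:

```python
def generate_hypothetical(query: str) -> str:
    # Stub: a real pipeline would ask an LLM something like
    # "Write a short passage that answers: {query}".
    return f"A short passage that plausibly answers: {query}"

def hyde_search(query: str, search_fn) -> list[str]:
    hypothetical_doc = generate_hypothetical(query)
    # The hypothetical document, not the raw query, drives the search.
    return search_fn(hypothetical_doc)

# Trivial search_fn for demonstration only.
results = hyde_search("What is grounding?", lambda text: [text.upper()])
print(results[0])
```

The intuition is that a hypothetical answer is usually closer, in embedding space, to the real answer documents than the short question is.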

How does Azure OpenAI “use your data” help me?

You should have all the information now to understand a typical RAG architecture:

Simple RAG Model — author Eleanor Berger https://everything.intellectronica.net/p/grounding-llms

Once the user input (trigger) is captured, your software implementation is responsible for the prompt augmentation and all the necessary steps before the API call to the LLM.

If you are using a vector database, there is also some pre-processing involved, because you need to calculate embeddings for your data. The data needs to be chunked in a way that each chunk has some semantic meaning.
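A minimal chunker looks like the sketch below. Fixed-size windows with overlap are the simplest approach; real pipelines usually prefer splitting on semantic boundaries such as headings or paragraphs, and the sizes here are arbitrary:

```python
def chunk_words(text: str, size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into overlapping fixed-size word windows."""
    words = text.split()
    chunks = []
    step = size - overlap  # each new chunk re-reads the last `overlap` words
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + size])
        if chunk:
            chunks.append(chunk)
        if start + size >= len(words):
            break
    return chunks

# 250 words with size=100/overlap=20 -> chunks starting at words 0, 80, 160.
chunks = chunk_words("word " * 250, size=100, overlap=20)
print(len(chunks))
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from at least one chunk; each chunk is then embedded and stored in the index.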

Simple Vector Indexing / Retrieval Model — author Eleanor Berger https://everything.intellectronica.net/p/grounding-llms

The primary advantage of Azure OpenAI’s “use your data” feature is that you only need to include a dataSources block in your Azure OpenAI ChatCompletion API call; because it is a managed service, the prompt is grounded automatically. Additionally, starting from unstructured data stored in an Azure Blob Storage account, the feature supplies the tools required for data preparation, data ingestion, and the creation of the search index. There’s no need to build your own RAG pipeline from the ground up.

This is what the dataSources configuration looks like, from the tutorial in the Azure documentation:

Screenshot from the Azure documentation page https://learn.microsoft.com/en-us/azure/ai-services/openai/use-your-data-quickstart?tabs=command-line&pivots=rest-api#example-curl-commands

When you add the dataSources block, the Azure OpenAI service automatically augments the prompt using the Azure Cognitive Search index.
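Sketched in Python, a request body along the lines of the 2023-08-01-preview API looks like this. All endpoints, keys, and names below are placeholders, and the exact field set is best confirmed against the preview API spec since it may change:

```python
import json

AOAI_ENDPOINT = "https://my-aoai.openai.azure.com"        # placeholder
DEPLOYMENT = "gpt-35-turbo"                               # placeholder
SEARCH_ENDPOINT = "https://my-search.search.windows.net"  # placeholder

payload = {
    # The dataSources block is the only addition compared with a plain
    # ChatCompletion call; the service does the retrieval and grounding.
    "dataSources": [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": SEARCH_ENDPOINT,
                "key": "<search-admin-key>",
                "indexName": "my-index",
            },
        }
    ],
    "messages": [
        {"role": "user", "content": "What does our refund policy say?"}
    ],
}

# The preview feature uses the /extensions/chat/completions route:
url = (
    f"{AOAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
    "/extensions/chat/completions?api-version=2023-08-01-preview"
)
print(url)
print(json.dumps(payload, indent=2))
```

Sending it is then an ordinary HTTPS POST with an `api-key` header and this JSON body, exactly as in the curl example from the quickstart.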

The product is still in preview; to see all the possible API fields, check the azure-rest-api-specs repository on GitHub.

GitHub snapshot of https://github.com/Azure/azure-rest-api-specs/blob/06479e897f635086bd40e6ee879e94f4a803ddd9/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json#L1238-L1261

The configuration wizard in Azure OpenAI Studio

To help you get started, Azure OpenAI Studio provides a wizard that assists you in making requests to Azure OpenAI while specifying a data source.

Let’s walk through the three data source options the wizard offers:

  • “Azure Cognitive Search”: The assumption here is that you already have an active Azure Cognitive Search service in place. The following screen will guide you through the process of selecting the index you wish to use and, if necessary, choosing the embedding model for vector search.
  • “Azure Blob Storage”: We assume that you already have data stored in Azure Blob Storage and have not previously created an Azure Search Service. The following screen will assist you in selecting the existing storage container and guide you through the process of creating a new Azure Cognitive Search service. You will be prompted to provide a new index name and set an index schedule, indicating how frequently the index should be reindexed.
  • “Upload files”: In this scenario, we assume that you wish to explore the “use your own data” feature but have not yet set up an Azure Blob Storage with data or an Azure Cognitive Search service. In this case, the following screen will guide you through the process of creating both the Azure Blob Storage and the Azure Cognitive Search service. Additionally, you will have the option to upload sample files for testing the product.
    When using the “Upload files” wizard, Azure Blob Storage will create a container named “fileupload-<indexname>” to store the files that are uploaded through the web interface.

The Azure Cognitive Search service will host the index. It’s worth noting that the document count may not necessarily reflect the exact number of files in the storage account.

This difference occurs because the product utilizes a data preparation script to add data to Cognitive Search, where the chunking takes place. You can find the script on GitHub if you want to manually add a large amount of data without using the web wizard.

The configuration in the “Add your data” tab is temporary. However, the next time you wish to use this feature within Azure OpenAI Studio, you can pick the “Azure Cognitive Search” option, since by that point you will already have your data in Azure Blob Storage and an Azure Cognitive Search service set up.


Conclusion

In summary, Azure OpenAI’s “use your data” feature simplifies prompt grounding without the need to build a RAG pipeline yourself. For customers aiming to create applications, some basic knowledge of Azure Cognitive Search is essential. Combining Azure OpenAI and Azure Cognitive Search allows you to develop AI applications that work with your enterprise data in a secure and scalable way.


Customer Experience Engineer @ Microsoft - Opinions and observations expressed in this blog post are my own.