Unleashing Structured Responses: Function Calling with LangChain, Ollama, and Phi-3 (Part 3)

Anoop Maurya
6 min read · May 16, 2024

In the previous articles in this series, we explored function calling with LangChain, Ollama, and Microsoft’s Phi-3 model, demonstrating how to interact with the LLM and retrieve a plain-text response. This article goes a step further: producing structured output, specifically in JSON format, for a more organized and machine-readable response.

The Power of Structured Output:

Structured output allows the Large Language Model (LLM) to return its response in a pre-defined format, such as JSON or XML. This makes the response easier to parse and integrate into applications. Structured output is particularly beneficial for tasks where the LLM’s response needs to be programmatically processed or visualized.

For example, imagine you’re building a news aggregator application. You could use an LLM to summarize news articles and return the summaries in JSON format. This format would allow you to easily extract the summarized text, source information, or other relevant details from the LLM’s response and display them in your application.
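To make the news-aggregator idea concrete, here is a minimal, stdlib-only sketch of how such a JSON response could be validated and consumed. The field names (`title`, `summary`, `source`) and the sample response are illustrative assumptions, not a schema defined anywhere in this series:

```python
import json

# Hypothetical raw output from the LLM, already in the JSON
# shape we asked for in the prompt (invented for illustration).
raw_response = """
{
  "title": "New Battery Tech Doubles EV Range",
  "summary": "Researchers announced a solid-state battery design that...",
  "source": "Example News"
}
"""

def parse_summary(text: str) -> dict:
    """Parse the LLM's JSON output and check the fields we rely on."""
    data = json.loads(text)
    for key in ("title", "summary", "source"):
        if key not in data:
            raise ValueError(f"missing field: {key}")
    return data

article = parse_summary(raw_response)
print(article["title"])   # the aggregator can now render each field directly
```

Because the response is structured, the application never has to scrape values out of free-form prose; a malformed response fails loudly at `json.loads` or the key check instead of silently displaying garbage.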

Leveraging LangChain’s Capabilities:
