Model Context Protocol (MCP) using Ollama

Mehul Gupta
Data Science in Your Pocket
3 min read · Mar 29, 2025



MCP servers are being called the next big game changer in AI, promising to make AI agents far more capable than they are today.

MCP, or Model Context Protocol, was released by Anthropic last year. It gives an LLM a standard way to connect to external software and data sources and act on them.

But there is a catch:

Most MCP servers are demonstrated with Claude, especially the Claude desktop application, which has its own limits.

Is there a way we can run MCP servers using local LLMs?

Yes. In this step-by-step tutorial, we will explore how to use MCP servers with local LLMs running on Ollama.

Let’s get started!

Setting up Ollama

Step 1: Install Ollama on your local system.

Step 2: Pull an LLM that supports tool calling. How do you find one? Filter the Ollama model library by the ‘Tools’ tag.

How to pull? Open cmd on your local system and run (for qwen2.5):

ollama run qwen2.5
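
If you just want to download the model without opening an interactive chat, ollama pull works as well, and ollama list shows which models are installed locally (both are standard Ollama CLI commands):

ollama pull qwen2.5
ollama list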

Prepare config.json

config.json stores the details of the different MCP servers we will be using. Below is the sample config.json I am using for now.

Note: As I am using Windows, I need to provide the full paths to uvx, the database, etc., with double backslashes. If you are using macOS or Linux, this is simpler: you can usually use just the command name (e.g., uvx).

{
  "globalShortcut": "Ctrl+Space",
  "mcpServers": {
    "sqlite": {
      "command": "C:\\Users\\datas\\anaconda3\\Scripts\\uvx.exe",
      "args": ["mcp-server-sqlite", "--db-path", "C:\\Users\\datas\\OneDrive\\Desktop\\Car_Database.db"]
    },
    "ddg-search": {
      "command": "C:\\Users\\datas\\anaconda3\\Scripts\\uvx.exe",
      "args": ["duckduckgo-mcp-server"]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\datas\\OneDrive\\Desktop\\ollama-mcp"
      ]
    }
  }
}

Save the above JSON to a file (e.g., `local.json`) and copy its full path.
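
Not sure which full path to put in the command fields? You can locate the executables first: on Windows, where prints the path (on macOS/Linux, use which instead). The Anaconda location in my config is just where uvx happens to live on my machine; yours may differ:

where uvx
where npx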

Setting up MCPHost

Our local LLM is now ready to use MCP servers. Next, let’s set up MCPHost, the CLI tool that connects the Ollama model to those servers.

Step 1: Install Go.

Step 2: Open CMD and run this command.

go install github.com/mark3labs/mcphost@latest
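
go install places the mcphost binary in Go’s bin directory (typically %USERPROFILE%\go\bin on Windows). If cmd cannot find mcphost afterwards, check that this directory is on your PATH; --help should print usage once it is reachable (exact flags may vary by version):

go env GOPATH
rem add <GOPATH>\bin to PATH if needed, then verify:
mcphost --help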

Step 3: Start MCPHost with the command below, providing the path to the local.json file you created above:

mcphost -m ollama:qwen2.5 --config "C:\Users\datas\OneDrive\Desktop\local.json"
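
If you pulled a different tool-capable model, only the -m flag changes. For example, assuming you have also pulled llama3.1 (which supports tool calling):

mcphost -m ollama:llama3.1 --config "C:\Users\datas\OneDrive\Desktop\local.json"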

You are now ready to use MCP servers with local LLMs in Ollama.

Ask your local LLM questions that involve the connected software; it should be able to answer them. Remember that if you are using a model with weak tool calling, you may need to be very specific about which tool it should use.
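
For example, with the config above, prompts like these exercise each server (the database and folder names are just from my setup):

“What tables are in the database?” (sqlite)
“Search the web for the latest Ollama release.” (ddg-search)
“List the files in the ollama-mcp folder on my Desktop.” (filesystem)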

Thank you so much! I hope you try it out.
