Getting Started: Model Context Protocol

Chris McKenzie
5 min read · Dec 19, 2024


Discover how MCP simplifies AI integrations, unlocks real-time data access, and empowers developers to build smarter, scalable solutions

AI systems today are stuck behind a wall of data silos and fragmented integrations, making it hard to connect them to the tools and systems where information actually lives. I can’t count the number of times I’ve had to copy content from one data source and paste it into Claude or ChatGPT. And if you want to avoid this, every new data source requires a custom integration. At best this is a minor annoyance; at worst, it slows progress and creates scalability headaches.

Model Context Protocol (MCP) solves this by introducing a universal standard for connecting AI to data and tools. Instead of patchwork solutions, it provides a streamlined, open protocol that simplifies integrations, breaks down silos, and unlocks the full potential of AI to deliver relevant, high-quality results.

“even the most sophisticated models are constrained by their isolation from data — trapped behind information silos and legacy systems.” ~ Anthropic

What is MCP and Why Does It Matter?

AI is stuck in a frustrating loop: powerful tools, limited by fragmented data connections. Every new data source demands a custom integration, creating silos and scalability headaches for developers. The result? AI systems can’t fully tap into the context they need to deliver truly relevant and useful outputs.

That’s where the Model Context Protocol (MCP) comes in. MCP is an open standard by Anthropic, designed to provide a universal way to connect data sources with AI-powered tools. Whether it’s your content repositories, business tools, or developer environments, MCP simplifies the process of building secure, two-way connections so AI can finally work the way it’s supposed to.

With MCP, developers no longer have to reinvent the wheel for every data source. Instead, they can focus on creating smarter, more connected AI systems that scale without the constant drag of maintaining fragmented integrations.

How MCP Works

MCP works by bridging two key components: MCP servers that expose data sources and MCP clients (AI apps) that connect to those servers. This architecture is simple but incredibly flexible, allowing AI systems to easily access the data and tools they need in real time, while providing a permissions framework to control access.

To make adoption easy, MCP includes three main pieces:

  • Hosts are LLM applications (like Claude Desktop or IDEs) that initiate connections
  • Clients maintain 1:1 connections with servers, inside the host application
  • Servers provide context, tools, and prompts to clients

You can review the full specification at modelcontextprotocol.io.
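Under the hood, clients and servers exchange JSON-RPC 2.0 messages over a transport such as stdio. As a rough sketch (the field values below are illustrative; the specification defines the authoritative schema), the `initialize` request a client sends when it connects looks something like this:

```python
import json

# A minimal sketch of the JSON-RPC 2.0 "initialize" request an MCP client
# sends to a server on connect. Values here are illustrative placeholders;
# see the MCP specification for the authoritative message schema.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # illustrative version string
        "capabilities": {},               # features this client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Messages are serialized as JSON and written to the transport.
wire_message = json.dumps(initialize_request)
print(wire_message)
```

The server replies with its own capabilities, after which the client can list and call the tools, resources, and prompts the server exposes.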

How to Use It

You can start leveraging the protocol today in minutes. While there are many permutations of servers and clients, for this demo I’m going to use the Claude Desktop client and Knowledge Graph Memory Server.

If you’re interested in building your own server, check out my tutorial on getting started with MCP Servers.

To start, download the Claude Desktop app if you don’t already have it.

Now let’s add the Knowledge Graph Memory Server to Claude for storing and retrieving information. I also encourage you to browse the repo README for the full list of available servers. It’s updated frequently, so I’d keep an eye on it.

Start by going to Settings and clicking “Edit Config.” If claude_desktop_config.json doesn't open automatically, open it in your editor of choice (on macOS it lives at ~/Library/Application Support/Claude/claude_desktop_config.json).

Add the following config to the file:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}

Save and close. That’s it!

Now you can begin adding nodes and edges to your knowledge graph. For this demo, I’ll use the famous paper “Attention Is All You Need.”

I start by dropping in the following prompt:

Read the following research paper, and add key concepts and their relationships to the knowledge base:

[PASTE CONTENTS OF RESEARCH PAPER HERE]

Claude will now ask for permission to write to your knowledge graph (or memory). Click “Allow for this Chat”. Working with the memory server, Claude will create nodes and edges in the graph with key concepts and their relationships.
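Conceptually, what the memory server persists is just a set of entities (nodes) and relations (edges). The sketch below mimics that shape in plain Python; the field names approximate the server's storage format and are illustrative, not its actual code:

```python
import json

# Hypothetical, simplified model of the knowledge graph the memory server
# builds from the paper: entities are nodes, relations are directed edges.
# Field names approximate the server's format and are illustrative only.
entities = [
    {"name": "Transformer", "entityType": "architecture",
     "observations": ["Introduced in 'Attention Is All You Need'"]},
    {"name": "Self-Attention", "entityType": "mechanism",
     "observations": ["Relates positions in a sequence to one another"]},
]

relations = [
    {"from": "Transformer", "to": "Self-Attention", "relationType": "uses"},
]

# The server stores records like these in a local file, which is why the
# graph survives across chats and app restarts.
graph = {"entities": entities, "relations": relations}
print(json.dumps(graph, indent=2))
```

Because the data lives on disk rather than in the conversation, any later chat that has the memory server attached can read it back.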

Once this is done, you can start a new chat thread (or even close Claude, as the data is stored locally and not tied to a session).

Enter a prompt similar to:

Based on your knowledge in the knowledge base on the subject of attention in transformers, create a mermaid graph that shows the key concepts and relationships

The end result should be a rendered Mermaid diagram of the key concepts and relationships from the paper.
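For instance, a simplified version of the kind of diagram Claude might generate (illustrative only, with node names paraphrased from the paper):

```mermaid
graph TD
    Transformer --> SelfAttention[Self-Attention]
    Transformer --> MultiHead[Multi-Head Attention]
    MultiHead --> ScaledDot[Scaled Dot-Product Attention]
    Transformer --> PosEnc[Positional Encoding]
```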

Note: your results may vary depending on the version of Claude as well as how it interpreted your prompt and parsed the article. I encourage you to play around with testing this on different models and prompts.

From here you can continue to build relationships between concepts, add more detailed information, or use the knowledge to help perform other tasks. This is just one of the many servers you have access to, and I encourage you to install others, including the SQLite server.

Next Steps

While not in the scope of this article, you can develop your own custom integrations using the open protocol, and Anthropic’s Python and TypeScript SDKs. If you want to build your first MCP server, check out my tutorial.

Final Thoughts

The Model Context Protocol represents a promising shift in how AI systems interact with data, breaking down silos and simplifying integrations. By adopting MCP, developers can focus on building smarter, scalable AI solutions without the burden of fragmented systems. Whether you’re leveraging pre-built servers or exploring custom integrations, MCP provides the foundation for unlocking AI’s full potential — one seamless connection at a time.

To stay connected and share your journey, feel free to reach out through the following channels:

  • 👨‍💼 LinkedIn: Join me for more insights into AI development and tech innovations.
  • 💻 GitHub: Explore my projects and contribute to ongoing work.
  • 📚 Medium: Follow my articles for more in-depth discussions on LangSmith, LangChain, and other AI technologies.
