MCP Explained — The Protocol That Lets AI Agents Actually Do Things
Your AI agent is smart. It can reason, plan, and respond — but the moment you ask it to check your database, read a file, or call an API, you're back to writing custom glue code for every single tool. MCP fixes that. One protocol, any tool, any model.
MCP (Model Context Protocol) is an open standard introduced by Anthropic that defines how AI models communicate with external tools and data sources. Think of it as USB-C for AI agents — instead of a different connector for every device, you get one universal plug that works everywhere.
The Problem MCP Solves
Before MCP, connecting an LLM to tools was a mess. Every integration was bespoke — different SDKs, different auth patterns, different formats. You'd end up with something like this:
- Custom integration #1 for your database tool
- Custom integration #2 for your file system tool
- Custom integration #3 for your web search tool
- Repeat this for every new tool, every new agent, every new model
- Result: unmaintainable spaghetti that breaks every time something changes
MCP collapses all of this into a single, standardized interface. Build an MCP server once — any MCP-compatible agent can use it.
The Architecture
MCP has three roles. Understanding them makes the whole thing click instantly.
The Host (AI Application)
The app running the LLM — Claude Desktop, your custom LangGraph agent, Cursor, whatever. It orchestrates everything and decides which MCP servers to connect to.
The Client (Protocol Bridge)
Lives inside the host. Each client maintains a 1:1 connection with a single MCP server and handles the wire protocol so your app doesn't have to.
The Server (Tool Provider)
A lightweight process that exposes capabilities — tools, resources, prompts. It could be a file system server, a Postgres server, a web search server, or anything you build yourself.
Host (LLM App) ↔ Client ↔ MCP Server (Tool) ↔ External Service
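Here's what that wiring looks like from the host side. This is a minimal sketch using the client API from the official Python SDK; `server.py` is a placeholder for any stdio MCP server script:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # The host spawns the server as a child process and talks to it over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        # The client: one 1:1 session per connected MCP server.
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover what the server offers
            print([t.name for t in tools.tools])

asyncio.run(main())
```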
What an MCP Server Exposes
An MCP server can expose three types of things to the model:
| Primitive | What it is | Example |
|---|---|---|
| Tools | Functions the model can call | `run_query()`, `send_email()` |
| Resources | Data the model can read | `file://readme.md`, `db://users` |
| Prompts | Reusable prompt templates | `summarize_doc`, `review_pr` |
Most real-world usage is Tools — the model decides which tool to call, passes in arguments, and gets a result back. Resources and Prompts are more advanced patterns.
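To make the table concrete, here's a sketch of all three primitives using FastMCP, the high-level server API in the official Python SDK. The bodies are stubs, and the names mirror the examples in the table:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def run_query(sql: str) -> str:
    """Run a read-only SQL query."""          # Tool: a function the model can call
    return f"(pretend result of: {sql})"

@mcp.resource("db://users")
def users_resource() -> str:
    """The users table as CSV."""             # Resource: data the model can read
    return "id,name\n1,Ada\n2,Grace"

@mcp.prompt()
def summarize_doc(text: str) -> str:
    """Reusable summarization template."""    # Prompt: a reusable template
    return f"Summarize the following document:\n\n{text}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```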
How the Flow Actually Works
Here's what happens when your agent uses an MCP tool end-to-end:
1. **Host connects to MCP server.** On startup, your agent discovers what tools are available by listing them from the server.
2. **User sends a message.** "What are the open issues in my repo?" The agent now knows it has a GitHub tool available.
3. **Model decides to call a tool.** The LLM outputs a structured tool call (name plus arguments) instead of a text response.
4. **Client routes the call to the right MCP server.** The client executes the call against the GitHub MCP server and gets the result back.
5. **Result injected back, model responds.** The tool result goes back into context, and the model generates its final response using real data.
User → LLM → Tool Call → MCP Server → Result → LLM → Final Response
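In code, that loop looks roughly like the sketch below. Everything touching `call_model` is hypothetical, a stand-in for whichever LLM client you use; the MCP calls (`list_tools`, `call_tool`) are the real SDK methods:

```python
from mcp import ClientSession

# Hypothetical stand-in for your LLM client (Anthropic, OpenAI, local model...).
# Assumed to return either {"text": ...} or {"tool_call": (name, arguments)}.
def call_model(messages: list[dict], tools: list) -> dict:
    raise NotImplementedError  # wire up your model here

async def answer(session: ClientSession, user_message: str) -> str:
    # 1. Discover what the connected MCP server offers.
    tools = (await session.list_tools()).tools
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = call_model(messages, tools)
        if "tool_call" not in reply:                  # model answered directly
            return reply["text"]
        name, args = reply["tool_call"]
        result = await session.call_tool(name, args)  # route to the MCP server
        # Inject the tool result back into context and let the model continue.
        messages.append({"role": "tool", "name": name,
                         "content": result.content[0].text})
```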
Transport: How They Talk
MCP supports two transport mechanisms depending on where your server lives:
Standard I/O (local)
Server runs as a child process on the same machine. Fast, no network overhead. Most Claude Desktop integrations use this.
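For the local case, the host only needs to know how to launch your server. Claude Desktop, for instance, reads this from `claude_desktop_config.json`; the path below is a placeholder:

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/weather_server.py"]
    }
  }
}
```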
Streamable HTTP (remote)
Server runs as an HTTP service, which enables remote MCP servers: your agent in one place, tools deployed elsewhere. (Earlier versions of the spec used HTTP with Server-Sent Events; the March 2025 spec revision replaced that with Streamable HTTP.)
Notes
MCP is not a framework. It's a protocol spec. You can implement an MCP server in Python, TypeScript, Go — anything. The official SDKs just make it easier.
LangGraph + MCP is a natural fit. If you're already building agentic pipelines with LangGraph, MCP slots in cleanly — your graph nodes can call MCP tools without touching your core graph logic.
The ecosystem is growing fast. There are already community MCP servers for GitHub, Postgres, Slack, Brave Search, filesystem, and more. You can drop these in without writing a single line of server code.
Security is on you. MCP servers can do real things — read files, run queries, call APIs. Scope your tools tightly, validate inputs, and never expose sensitive servers without auth.
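On that last point, here's a sketch of defensive argument handling for a hypothetical file-reading tool. The sandbox root and the resolve-then-check pattern are illustrative choices, not part of MCP itself:

```python
from pathlib import Path

ALLOWED_ROOT = Path("/srv/agent-sandbox").resolve()

async def read_file_tool(arguments: dict) -> str:
    requested = Path(str(arguments.get("path", "")))
    # Resolve symlinks and ".." segments before checking the boundary.
    target = (ALLOWED_ROOT / requested).resolve()
    # Reject path traversal: the resolved path must stay inside the sandbox.
    if not target.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"Path outside sandbox: {requested}")
    return target.read_text()
```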
Code
Here's a minimal MCP server in Python using the low-level `Server` API from the official SDK. It exposes one tool, `get_weather`, that any MCP-compatible agent can call.
```python
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import asyncio

app = Server("weather-server")

@app.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="get_weather",
            description="Get current weather for a city",
            inputSchema={
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name != "get_weather":
        raise ValueError(f"Unknown tool: {name}")
    city = arguments["city"]
    # Your actual weather API call goes here; this is a stubbed response.
    return [TextContent(type="text", text=f"Weather in {city}: 28°C, Sunny")]

async def main():
    # stdio transport: the host spawns this script and talks over stdin/stdout.
    async with stdio_server() as (read, write):
        await app.run(read, write, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```
Install the SDK with `pip install mcp`, run this server, and point any MCP host at it. That's it: your agent now has a weather tool with zero custom integration code on the agent side.
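To see it work end to end, you can drive it with a small test client. A sketch, assuming the server above is saved as `weather_server.py` in the current directory:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("get_weather", {"city": "Tokyo"})
            print(result.content[0].text)  # Weather in Tokyo: 28°C, Sunny

asyncio.run(main())
```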
Closing
MCP just hit a major milestone — OpenAI adopted it in March 2025, which means it's no longer just an Anthropic thing. It's quickly becoming the de facto standard for agent-tool communication across the entire industry.
If you're building agentic systems in 2026, MCP is worth learning now — not because it's hype, but because the alternative is writing custom integrations forever. The protocol is simple, the ecosystem is growing fast, and the tooling is already mature enough to use in production.
My next step is wiring MCP into a LangGraph pipeline. If you're exploring the same space, I'll be writing about that soon.