Model Context Protocol (MCP) is the standard for how AI clients talk to tools and data sources. It's not a product — it's a specification — and that's the most important architectural fact about it.
We read the actual spec repository (verified on 2026-04-29) and several reference servers to produce this walkthrough.
## What MCP is
In one sentence: MCP defines how an AI client (like Claude Code, Cursor, or Claude Desktop) discovers and invokes external tools, data sources, and prompt templates over a standard wire protocol.
In one paragraph: AI clients before MCP each shipped their own way of integrating tools — Claude Desktop had one mechanism, OpenAI's function-calling another, custom agents whatever the developer invented. The result was an N×M problem: every tool needed a separate integration for every client. MCP collapses that to N+M: every client speaks MCP, every tool speaks MCP, and any compatible client works with any compatible tool.
## Verified GitHub data (2026-04-29)
| Metric | Value |
|---|---|
| Stars | ~8,000 |
| Forks | 1,474 |
| Open issues | 224 |
| Created | 2024-09-24 |
| Last push | 2026-04-28 |
| Language | TypeScript |
| Top-level dirs | docs/, schema/, seps/ (specification enhancement proposals), tools/, blog/ |
| Homepage | modelcontextprotocol.io |
| License | NOASSERTION (open / vendor-neutral intent) |
## The architecture

### Three core concepts
The MCP spec defines three primitive concepts a server can expose:
- Resources: Read-only data the model can request. Examples: a file's contents, a database query result, a Git log. The client asks for the resource by URI; the server returns the data. The model treats resources as context.
- Tools: Invokable functions with structured arguments. Examples: `run_query(sql)`, `send_email(to, subject, body)`, `fetch_url(url)`. The model decides when to call a tool based on the user's request; the server executes it and returns a structured result.
- Prompts: Parameterized templates the server provides for the user (or model) to use. Example: a "code-review" prompt with parameters for repo and PR number.
These three concepts are deliberately minimal. Every MCP capability is one of these three, which keeps the protocol simple and predictable.
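On the wire, each primitive maps to a JSON-RPC request. A sketch of what those requests look like — the method names (`resources/read`, `tools/call`, `prompts/get`) follow the spec's naming, while the URI, tool name, and arguments are hypothetical examples:

```typescript
// Resource: read-only data, addressed by URI, returned to the model as context.
const readResource = {
  jsonrpc: "2.0",
  id: 1,
  method: "resources/read",
  params: { uri: "file:///var/log/app.log" },
};

// Tool: an invokable function with structured arguments.
const callTool = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "run_query", arguments: { sql: "SELECT count(*) FROM users" } },
};

// Prompt: a parameterized template the server fills in on request.
const getPrompt = {
  jsonrpc: "2.0",
  id: 3,
  method: "prompts/get",
  params: { name: "code-review", arguments: { repo: "acme/api", pr: "42" } },
};

for (const msg of [readResource, callTool, getPrompt]) {
  console.log(msg.method);
}
```

Note the symmetry: every interaction is a named method plus structured params, which is what lets a generic client drive any server.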
### Two transports
MCP messages are JSON-RPC 2.0, carried over one of two supported transports:
- stdio: For local servers. The client launches the server as a subprocess and communicates via stdin/stdout. Ideal for tools that need access to the local filesystem, the user's shell environment, or local databases.
- HTTP+SSE: For remote servers. The client makes HTTP requests for synchronous operations and receives streaming events via Server-Sent Events. Ideal for SaaS integrations, hosted databases, or tools that don't need local access.
The two transports cover the realistic deployment shapes. Local stdio servers run alongside the client (low latency, full local access). Remote SSE servers run on someone's infrastructure (multi-tenant, scalable).
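The stdio shape is simple enough to sketch end to end: one JSON-RPC message per line over stdin/stdout. In this self-contained demo the "server" is an inline Node script that answers any request with an empty tools list — a stand-in for a real MCP server binary — and `spawnSync` makes it one-shot; a real client would keep the subprocess alive.

```typescript
import { spawnSync } from "node:child_process";

// Toy stand-in server: reads one JSON-RPC request per line from stdin,
// writes one reply per line to stdout.
const toyServer = `
  const rl = require("node:readline").createInterface({ input: process.stdin });
  rl.on("line", (line) => {
    const req = JSON.parse(line);
    process.stdout.write(JSON.stringify({
      jsonrpc: "2.0", id: req.id, result: { tools: [] },
    }) + "\\n");
  });
`;

// Client side: launch the server, write one request line, read the reply line.
const request = JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" });
const run = spawnSync(process.execPath, ["-e", toyServer], {
  input: request + "\n",
  encoding: "utf8",
});

const reply = JSON.parse(run.stdout.trim());
console.log(reply.id, reply.result);
```

The framing is the whole transport: newline-delimited JSON-RPC, no sockets, no ports, full access to whatever the subprocess can see locally.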
### Capability negotiation
When a client connects to a server, the first exchange is capability negotiation. The server tells the client what resources, tools, and prompts it offers. The client adapts — it knows what the server can do without prior configuration.
This is what enables ecosystem composability. A new MCP server appears, the client connects, the client immediately knows what's available. No per-server code on the client side.
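A sketch of that first exchange, following the spec's `initialize` handshake. The field values here (the date-style `protocolVersion` string, client/server names) are illustrative, not canonical:

```typescript
// Client opens the session and states what it supports.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // spec revisions are date-versioned
    clientInfo: { name: "example-client", version: "0.1.0" },
    capabilities: {},              // capabilities the client offers back
  },
};

// Server answers with what it can do. An empty object marks a capability
// as present; optional sub-flags would go inside it.
const initializeResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    protocolVersion: "2025-03-26",
    serverInfo: { name: "example-server", version: "0.1.0" },
    capabilities: { resources: {}, tools: {}, prompts: {} },
  },
};

// Client side: no prior configuration needed, just read the declared capabilities.
const offered = Object.keys(initializeResponse.result.capabilities);
console.log(`server offers: ${offered.join(", ")}`);
```

After this handshake the client follows up with listing calls (e.g. `tools/list`) to enumerate the concrete tools behind each declared capability.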
## What's in the repo
The schema/ directory holds the TypeScript schema definition, exported as JSON Schema for cross-language consumption. This is the source of truth: every MCP implementation, in every language, validates against this schema.
The seps/ directory (Specification Enhancement Proposals, modeled after Python's PEPs) is where the protocol evolves. New capabilities, transport changes, and breaking-version proposals all live here as numbered SEPs with public discussion.
The docs/ directory is the user-facing documentation built with Mintlify and hosted at modelcontextprotocol.io. The clients reference it; new MCP server authors learn from it.
The tools/ directory holds automation and tooling — schema validators, test harnesses, code generators for spec-derived bindings.
## Where MCP wins
- Ecosystem network effects. Build an MCP server once for your database; every MCP client gets database access. The compounding value of "speak the protocol once" grows with every new client and server.
- Vendor neutrality. Anthropic designed MCP but made it open and not Claude-specific. Cursor and other clients consume it. Third-party MCP servers come from across the ecosystem.
- Transport flexibility. Local stdio for power-user tools, HTTP+SSE for hosted SaaS — the protocol covers both shapes without forking.
- Minimal surface area. Resources / Tools / Prompts is a deliberately small primitive set. Easy to implement, easy to reason about.
## Where MCP loses
- It's not a runtime. MCP describes how things should talk, not what they should do. You still need to build the client, the server, and whatever the server does. The protocol is the cheap part.
- Capability discoverability ≠ trustworthiness. A client knows what an MCP server says it can do, but the client has to decide whether to trust it. Auth, sandboxing, and consent are out-of-scope for the protocol — they're implementation concerns.
- Maturity is uneven. As of 2026, the protocol is ~18 months old. Edge cases (binary resources, long-lived connections, advanced auth flows) are still being specified in SEPs.
- Tool discoverability for users isn't solved. A user has to find MCP servers somehow — there's no central registry baked into the protocol. Third-party directories exist but aren't part of the spec.
## The practical effect
If you've used Claude Code or Cursor in 2026, MCP is the substrate enabling many of the integrations you use without knowing it. A "GitHub" tool, a "Postgres" tool, a "Slack" tool — these are typically MCP servers running locally or remotely, exposing capabilities to whichever client you're using.
The strategic implication: AI client teams (Anthropic, Cursor, others) don't need to ship every integration anyone might want. The MCP ecosystem fills the long tail. This is similar to how VS Code doesn't ship every language server — the LSP ecosystem fills it.
## When to build on MCP
- You're building an AI client and want to inherit a growing ecosystem of tools without per-tool integration work
- You're building a tool and want it to be usable across multiple AI clients (Claude Code, Cursor, custom agents) for free
- You're building a domain-specific agent and want a clean way to expose your tools to a model
## When NOT to build on MCP
- You're shipping an entirely model-vendor-specific product where ecosystem composability isn't a goal
- Your tool needs are exotic enough that the Resources/Tools/Prompts primitives don't fit (rare — they're flexible)
- You need synchronous request/response with no streaming, and SSE-shaped semantics feel like overkill (in this case, build a simple HTTP server and ignore MCP)
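If you land in that last bucket, the non-MCP path really is small. A sketch of a single-endpoint HTTP tool server using only the Node standard library — the `run_query` tool and its stub result are hypothetical:

```typescript
import { createServer } from "node:http";

// The tool itself, kept as a pure function so the HTTP wrapper stays trivial.
// Stub: a real implementation would talk to a database.
function runQuery(payload: { sql: string }): { rows: unknown[]; echo: string } {
  return { rows: [], echo: payload.sql };
}

const server = createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/run_query") {
    res.writeHead(404).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const result = runQuery(JSON.parse(body)); // expects { "sql": "..." }
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify(result));
  });
});

// server.listen(8080); // uncomment to serve; omitted so this file exits cleanly
console.log(runQuery({ sql: "SELECT 1" }).echo);
```

No discovery, no negotiation, no streaming: you give up composability with MCP clients, but for a single fixed integration that trade can be correct.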
## Where to drill in deeper
- How AI Coding Tools Actually Work — cluster pillar that situates MCP among other AI-coding architectures
- How Claude Code Actually Works — Claude Code is an MCP-compatible client; understanding the client side completes the picture
- What Is AI Code Research? — the agent that read the MCP spec and produced this analysis
## Want this on a different protocol or repo?
This article is itself a worked example. We pointed AI Code Research at modelcontextprotocol/modelcontextprotocol, it read the spec and reference implementations, and produced this code-level walkthrough.
→ Try the same on any GitHub repo — free to start, no credit card.