
How MCP Works: Reading the Spec and Reference Servers

Model Context Protocol (MCP) is the standard for how AI clients talk to tools and data sources. We read the actual specification (TypeScript schema, JSON-RPC over stdio/HTTP-SSE) and several reference servers to produce a code-level walkthrough — and an honest take on why MCP has 8K stars and is being adopted by Claude Code, Cursor, and others.

By AI Code Research

Key takeaways

  • MCP (Model Context Protocol) is a specification, not a product — it defines how AI clients (Claude Code, Cursor, Claude Desktop, custom agents) discover and invoke external tools and data sources.
  • Released by Anthropic in late 2024 (created 2024-09-24), the spec repo at modelcontextprotocol/modelcontextprotocol has roughly 8,000 stars, 1,474 forks, and active development as of 2026-04-29.
  • The protocol uses JSON-RPC over stdio (for local MCP servers) or HTTP+SSE (for remote ones), with three core concepts: resources (read-only data), tools (invokable functions), and prompts (parameterized templates).
  • Why MCP matters: it lets the ecosystem compose. Build an MCP server once, every MCP client benefits. Claude Code and Cursor both consume MCP, which means a database integration written for one works in the other.
  • Where MCP wins: composability and ecosystem network effects. Where it loses: it requires implementation effort — the protocol doesn't run anything, it just describes how things should talk.

Model Context Protocol (MCP) is the standard for how AI clients talk to tools and data sources. It's not a product — it's a specification — and that's the most important architectural fact about it.

We read the actual spec repository (verified on 2026-04-29) and several reference servers to produce this walkthrough.

What MCP is

In one sentence: MCP defines how an AI client (like Claude Code, Cursor, or Claude Desktop) discovers and invokes external tools, data sources, and prompt templates over a standard wire protocol.

In one paragraph: AI clients before MCP each shipped their own way of integrating tools — Claude Desktop had one mechanism, OpenAI's function-calling another, custom agents whatever the developer invented. The result was an N×M problem: every tool needed a separate integration for every client. MCP collapses that to N+M: every client speaks MCP, every tool speaks MCP, and any compatible client works with any compatible tool. With 10 clients and 50 tools, that is 500 bespoke integrations reduced to 60 protocol implementations.

Verified GitHub data (2026-04-29)

Metric            Value
Stars             ~8,000
Forks             1,474
Open issues       224
Created           2024-09-24
Last push         2026-04-28
Language          TypeScript
Top-level dirs    docs/, schema/, seps/ (specification enhancement proposals), tools/, blog/
Homepage          modelcontextprotocol.io
License           NOASSERTION (open / vendor-neutral intent)

The architecture

Three core concepts

The MCP spec defines three primitive concepts a server can expose:

  1. Resources: Read-only data the model can request. Examples: a file's contents, a database query result, a Git log. The client asks for the resource by URI; the server returns the data. The model treats resources as context.
  2. Tools: Invokable functions with structured arguments. Examples: run_query(sql), send_email(to, subject, body), fetch_url(url). The model decides when to call a tool based on the user's request; the server executes it and returns a structured result.
  3. Prompts: Parameterized templates the server provides for the user (or model) to use. Example: a "code-review" prompt with parameters for repo and PR number.

These three concepts are deliberately minimal. Every MCP capability is one of these three, which keeps the protocol simple and predictable.
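To make the three primitives concrete, here is a minimal server sketch using the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, the git:// URI, and the handler bodies are illustrative placeholders, not anything taken from the spec repo:

```typescript
// Minimal MCP server sketch (TypeScript SDK). All names and handler
// bodies are illustrative placeholders, not from the spec repo.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-server", version: "0.1.0" });

// Resource: read-only data the client fetches by URI.
server.resource("git-log", "git://log", async (uri) => ({
  contents: [{ uri: uri.href, text: "abc123 Initial commit" }],
}));

// Tool: an invokable function with structured, validated arguments.
server.tool("run_query", { sql: z.string() }, async ({ sql }) => ({
  content: [{ type: "text", text: `pretend result for: ${sql}` }],
}));

// Prompt: a parameterized template the client can surface to users.
server.prompt(
  "code-review",
  { repo: z.string(), pr: z.string() },
  ({ repo, pr }) => ({
    messages: [
      { role: "user", content: { type: "text", text: `Review PR #${pr} in ${repo}` } },
    ],
  })
);

// stdio transport: the client launches this process and speaks JSON-RPC
// over stdin/stdout (see the transports section below).
await server.connect(new StdioServerTransport());
```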

Two transports

MCP runs over JSON-RPC, with two supported transports:

  • stdio: For local servers. The client launches the server as a subprocess and communicates via stdin/stdout. Ideal for tools that need access to the local filesystem, the user's shell environment, or local databases.
  • HTTP+SSE: For remote servers. The client makes HTTP requests for synchronous operations and receives streaming events via Server-Sent Events. Ideal for SaaS integrations, hosted databases, or tools that don't need local access.

The two transports cover the realistic deployment shapes. Local stdio servers run alongside the client (low latency, full local access). Remote SSE servers run on someone's infrastructure (multi-tenant, scalable).
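On the client side, the stdio transport amounts to "spawn the server and speak JSON-RPC over its pipes." A sketch with the TypeScript SDK's client, where the launch command and tool name are assumptions carried over from the server sketch above:

```typescript
// Client-side stdio sketch: launch a local MCP server as a subprocess
// and talk JSON-RPC over stdin/stdout. Command and tool name are invented.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["./demo-server.js"], // hypothetical server from the sketch above
});

const client = new Client({ name: "demo-client", version: "0.1.0" });
await client.connect(transport); // performs the initialize handshake

const { tools } = await client.listTools(); // discover what the server offers
const result = await client.callTool({
  name: "run_query",
  arguments: { sql: "SELECT 1" },
});
```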

Capability negotiation

When a client connects to a server, the first exchange is capability negotiation. The server tells the client what resources, tools, and prompts it offers. The client adapts — it knows what the server can do without prior configuration.

This is what enables ecosystem composability. A new MCP server appears, the client connects, the client immediately knows what's available. No per-server code on the client side.
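At the wire level, negotiation is an initialize request/response pair. The field names below follow the published schema; the concrete values are illustrative:

```typescript
// Illustrative initialize exchange (JSON-RPC 2.0). Field names follow the
// published schema; the version string and server details are examples.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // a spec revision date
    capabilities: {},              // what the client supports
    clientInfo: { name: "demo-client", version: "0.1.0" },
  },
};

const initializeResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    protocolVersion: "2025-03-26",
    capabilities: { resources: {}, tools: {}, prompts: {} }, // server's offer
    serverInfo: { name: "demo-server", version: "0.1.0" },
  },
};
```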

What's in the repo

The schema/ directory holds the TypeScript schema definition, which is exported as JSON Schema for cross-language consumption. This is the source of truth: every MCP implementation, in every language, validates against this schema.
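For a flavor of what the schema defines, here is an abridged paraphrase of the Tool type; consult schema/ for the authoritative, complete version:

```typescript
// Abridged paraphrase of the Tool definition; see schema/ for the
// authoritative version.
interface Tool {
  name: string;          // unique identifier the client invokes by
  description?: string;  // human- and model-readable summary
  inputSchema: {         // JSON Schema for the tool's arguments
    type: "object";
    properties?: Record<string, unknown>;
    required?: string[];
  };
}
```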

The seps/ directory (Specification Enhancement Proposals, modeled after Python's PEPs) is where the protocol evolves. New capabilities, transport changes, and breaking-version proposals all live here as numbered SEPs with public discussion.

The docs/ directory is the user-facing documentation, built with Mintlify and hosted at modelcontextprotocol.io. Client developers reference it; new MCP server authors learn from it.

The tools/ directory holds automation and tooling — schema validators, test harnesses, code generators for spec-derived bindings.

Where MCP wins

  • Ecosystem network effects. Build an MCP server once for your database; every MCP client gets database access. The compounding value of "speak the protocol once" grows with every new client and server.
  • Vendor neutrality. Anthropic designed MCP but made it open and not Claude-specific. Cursor and other clients consume it. Third-party MCP servers come from across the ecosystem.
  • Transport flexibility. Local stdio for power-user tools, HTTP+SSE for hosted SaaS — the protocol covers both shapes without forking.
  • Minimal surface area. Resources / Tools / Prompts is a deliberately small primitive set. Easy to implement, easy to reason about.

Where MCP loses

  • It's not a runtime. MCP describes how things should talk, not what they should do. You still need to build the client, the server, and whatever the server does. The protocol is the cheap part.
  • Capability discoverability ≠ trustworthiness. A client knows what an MCP server says it can do, but the client has to decide whether to trust it. Auth, sandboxing, and consent are out-of-scope for the protocol — they're implementation concerns.
  • Maturity is uneven. As of 2026, the protocol is ~18 months old. Edge cases (binary resources, long-lived connections, advanced auth flows) are still being specified in SEPs.
  • Tool discoverability for users isn't solved. A user has to find MCP servers somehow — there's no central registry baked into the protocol. Third-party directories exist but aren't part of the spec.

The practical effect

If you've used Claude Code or Cursor in 2026, MCP is the substrate enabling many of the integrations you use without knowing it. A "GitHub" tool, a "Postgres" tool, a "Slack" tool — these are typically MCP servers running locally or remotely, exposing capabilities to whichever client you're using.
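Concretely, wiring a server into a client is usually a few lines of configuration. A sketch in the mcpServers shape used by Claude Desktop's claude_desktop_config.json, with placeholder paths and a placeholder connection string:

```typescript
// Sketch of a client-side server registration, in the mcpServers shape
// used by Claude Desktop. Paths and the connection string are placeholders.
const config = {
  mcpServers: {
    postgres: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"],
    },
    filesystem: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"],
    },
  },
};
```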

The strategic implication: AI client teams (Anthropic, Cursor, others) don't need to ship every integration anyone might want. The MCP ecosystem fills the long tail. This is similar to how VS Code doesn't ship every language server — the LSP ecosystem fills it.

When to build on MCP

  • You're building an AI client and want to inherit a growing ecosystem of tools without per-tool integration work
  • You're building a tool and want it to be usable across multiple AI clients (Claude Code, Cursor, custom agents) for free
  • You're building a domain-specific agent and want a clean way to expose your tools to a model

When NOT to build on MCP

  • You're shipping an entirely model-vendor-specific product where ecosystem composability isn't a goal
  • Your tool needs are exotic enough that the Resources/Tools/Prompts primitives don't fit (rare — they're flexible)
  • You need synchronous request/response with no streaming, and SSE-shaped semantics feel like overkill (in this case, build a simple HTTP server and ignore MCP)


Want this on a different protocol or repo?

This article is itself a worked example. We pointed AI Code Research at modelcontextprotocol/modelcontextprotocol; it read the spec and reference implementations and produced this code-level walkthrough.



FAQ

What is MCP (Model Context Protocol)?

MCP is a specification for how AI clients talk to external tools and data sources. It's not a product — it's a standard. The repository at github.com/modelcontextprotocol/modelcontextprotocol contains the protocol definition (TypeScript schema, JSON Schema export), official documentation, and reference servers. Released by Anthropic in late 2024, MCP is now consumed by Claude Code, Cursor, Claude Desktop, and many third-party agents.

What does an MCP server do?

An MCP server exposes capabilities — typically resources (read-only data the model can request), tools (functions the model can invoke), or prompts (parameterized templates) — over a JSON-RPC protocol. AI clients connect to MCP servers and discover what each server can do at runtime, without per-server integration code. Common MCP servers expose databases, file systems, APIs, browser automation, or domain-specific tools.

How is MCP different from OpenAI function calling?

Function calling is a model capability — the model can output structured calls, but each application has to implement how those calls actually run. MCP is an inter-process protocol — it standardizes how clients and tools exchange capability descriptions and invocations across process boundaries. You can use both: a client implementing function calling at the model level can also speak MCP at the tool-discovery level.
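To see how the two layers compose, here is a hedged sketch of the glue a client might implement: translate the server's tools/list result into model-facing function definitions, then route the model's function calls back through tools/call. The mapping function and output shape are hypothetical; only the MCP client calls come from the SDK.

```typescript
// Hypothetical glue between MCP discovery and model-level function calling.
// `client` is a connected MCP Client (TypeScript SDK); the output shape
// targets a generic function-calling API and is an assumption.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function mcpToolsAsFunctions(client: Client) {
  const { tools } = await client.listTools();
  return tools.map((t) => ({
    name: t.name,
    description: t.description ?? "",
    parameters: t.inputSchema, // MCP already ships JSON Schema for arguments
  }));
}

// When the model emits a function call, route it back through MCP:
async function dispatch(client: Client, name: string, args: Record<string, unknown>) {
  return client.callTool({ name, arguments: args });
}
```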

Is MCP only for Anthropic / Claude?

No. The protocol was designed by Anthropic but is open and vendor-neutral. Cursor (a closed-source IDE), custom agents, Claude Desktop, and many third-party tools have adopted it. The open license and the TypeScript+JSON Schema definition make MCP straightforward for anyone to implement.

What does MCP NOT do?

MCP is a protocol, not a runtime. It doesn't execute anything itself — it describes how AI clients and tools should communicate. You still need a client (Claude Code, Cursor, custom agent) that speaks MCP, and a server that implements specific capabilities. MCP also doesn't define the model itself, the agent loop, or anything model-vendor-specific.
