The phrase "AI coding tool" covers four radically different products. Cursor is an IDE. Claude Code is a terminal agent. MCP is a protocol layer. OpenClaw is a personal AI assistant. They share a vocabulary but not an architecture.
This piece reads the source of each — where it's open — and identifies the architectural decisions that actually distinguish them. The data is verified against GitHub APIs as of 2026-04-29.
## The four shapes of AI coding tool
| Shape | Example | Source | Key trade-off |
|---|---|---|---|
| Agentic terminal | Claude Code | Open (119K stars) | Portability over IDE integration |
| In-IDE assistant | Cursor, GitHub Copilot, Windsurf | Mostly closed | Zero-context-switch over autonomy |
| Protocol layer | MCP | Open (8K stars, spec) | Composability over product opinion |
| Personal AI agent | OpenClaw, Manus | Mixed | Cross-surface autonomy over simplicity |
That table is the executive summary. The rest of this article reads each architecture in detail and links to full deep dives where they exist.
## Shape 1: Agentic terminal (Claude Code)
Claude Code is Anthropic's flagship agentic coding tool. It lives as a process in your terminal, takes natural-language instructions, and executes multi-step coding tasks autonomously by calling tools like read_file, run_command, and edit_file.
Verified GitHub data (2026-04-29):
- 119,000 stars
- 605 commits on main
- 5,000+ open issues, 516 active PRs
- Languages: Shell 47.1%, Python 29.2%, TypeScript 17.7%
- Top-level structure:
  .claude-plugin/, .claude/commands/, examples/, plugins/, scripts/
The architectural commitments:
The shell-heavy language breakdown is telling. Claude Code optimizes for "wherever your terminal is" — the shell is a more universal interface than any IDE. The Python skills layer adds ergonomic task scripts. The TypeScript core handles the agentic loop and protocol integrations.
Where this wins: portability (works on any OS), autonomy (it can run multi-step tasks while you do something else), composability (every Unix tool is already integrated by virtue of being shell-callable).
Where this loses: the editor experience. There's no syntax highlighting, no hover-tooltip, no inline autocomplete. You're reading code and seeing AI output through terminal panes.
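The agentic loop described above can be sketched in miniature. The tool names (read_file, run_command, edit_file) come from this article; the registry-and-loop structure is a hypothetical sketch, not Claude Code's actual implementation, and a real agent would ask the model to choose each next step rather than follow a fixed plan:

```python
import os
import subprocess
import tempfile
from pathlib import Path

# Tool registry: every capability the agent can invoke is a plain function.
# The names mirror the tools mentioned above; the loop is a sketch only.
def read_file(path: str) -> str:
    return Path(path).read_text()

def run_command(cmd: str) -> str:
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def edit_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

TOOLS = {"read_file": read_file, "run_command": run_command, "edit_file": edit_file}

def agent_loop(steps):
    """Execute a list of (tool_name, args) steps and keep a transcript.
    A real agent would feed each tool result back to the model to plan
    the next step; here the plan is fixed for demonstration."""
    transcript = []
    for tool_name, args in steps:
        output = TOOLS[tool_name](*args)
        transcript.append((tool_name, output))
    return transcript

# Example: write a file, then read it back.
demo = os.path.join(tempfile.gettempdir(), "demo.txt")
log = agent_loop([
    ("edit_file", (demo, "hello")),
    ("read_file", (demo,)),
])
```

The point of the shape is visible even in the sketch: because every tool is just a callable over the OS, any shell-reachable program slots into the registry with no integration work.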
For the full deep dive, see How Claude Code Actually Works (W2 launch).
## Shape 2: In-IDE assistant (Cursor, Copilot, Windsurf)
This is the most populous shape. Cursor, GitHub Copilot, and Windsurf all live inside the editor, embedding AI into the writing flow with autocomplete, chat, and refactor features.
Public surface (Cursor is closed-source):
Cursor's architecture is deducible from its docs, the Cursor Forum, released SDKs, and engineering writeups. The notable architectural decisions:
- @Codebase indexing: Cursor maintains a vector index of your repo for semantic context retrieval at chat time
- MCP support: Cursor consumes the MCP protocol, which lets external tools plug in without Cursor needing to rebuild every integration
- Tab autocomplete model: A custom small model optimized for the autocomplete-while-typing latency budget (sub-100ms)
- Composer mode: Multi-file editing with constrained agent autonomy inside the editor sandbox
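The index-then-retrieve shape behind @Codebase can be approximated with a toy vector index. The bag-of-words "embedding" below is a stand-in for the learned embeddings a real indexer would use, so treat this as a sketch of the retrieval pattern, not Cursor's implementation:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a learned embedding: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def index_repo(chunks: dict) -> dict:
    # Index time: embed every chunk once and cache the vectors.
    return {path: embed(text) for path, text in chunks.items()}

def retrieve(index: dict, query: str, k: int = 2) -> list:
    # Chat time: embed the query, rank chunks by similarity, take top-k.
    qv = embed(query)
    ranked = sorted(index, key=lambda p: cosine(index[p], qv), reverse=True)
    return ranked[:k]

repo = {
    "auth.py": "verify the user password hash and create a login session",
    "db.py": "open a database connection pool and run queries",
    "ui.py": "render the settings page and draw buttons",
}
index = index_repo(repo)
top = retrieve(index, "how does password login work", k=1)
```

The design choice this illustrates: paying an indexing cost up front buys sub-second context retrieval at chat time, which is what the zero-context-switch experience depends on.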
GitHub Copilot, originally built on a fine-tuned OpenAI model, has expanded into a similar shape: codebase chat, agent mode, and integrations.
The architectural commitments:
In-IDE tools optimize for zero context-switching. The cost-benefit calculation: developers already live in editors, so building AI inside the editor minimizes cognitive overhead. The trade-off is that the AI is bounded by what makes sense inside an editor. Multi-hour autonomous tasks, broad codebase research, cross-machine workflows — those don't fit well.
Where this wins: ergonomic integration, fastest path from "I want X" to "X appears in my code."
Where this loses: tasks that don't fit "while I'm writing code in this file." The architecture is biased toward synchronous, file-local, micro-decisions.
For the full deep dive on Cursor, see How Cursor Works Inside (W3 launch).
## Shape 3: Protocol layer (MCP)
Model Context Protocol is not a product — it's a specification.
Verified GitHub data (2026-04-29):
- ~8,000 stars (specification repo)
- 1,474 forks
- 224 open issues
- Created 2024-09-24 by Anthropic, MIT-adjacent license
- TypeScript schema, exported as JSON Schema
- Top-level:
  docs/, schema/, seps/ (specification enhancement proposals), tools/
The architectural commitments:
MCP defines how AI clients (Claude Desktop, Cursor, custom agents) discover and invoke external tools and data sources. The protocol is JSON-RPC over either stdio or HTTP-SSE. The key concepts: resources (read-only data the model can request), tools (functions the model can invoke), and prompts (parameterized templates).
The trade-off MCP makes: it has no product opinion. It's not trying to be a great IDE or a great agent runtime. It's trying to be the wiring everyone agrees on, so the ecosystem can compose. Claude Code, Cursor, custom agents, third-party MCP servers (databases, APIs, file systems) — all interoperate because they speak the same protocol.
Where this wins: ecosystem network effects. Build an MCP server once, every MCP client benefits.
Where this loses: it requires implementation effort. The protocol doesn't run anything; it just describes how things should talk. You still need a client and a server.
For the full deep dive on MCP, see How MCP Works (W3 launch).
## Shape 4: Personal AI agent (OpenClaw, Manus)
The newest and most architecturally ambitious shape. OpenClaw is the canonical example.
Verified GitHub data (2026-04-29):
- 365,782 stars — making it one of the fastest-growing AI projects of 2026
- 74,969 forks
- Created 2025-11-24 (less than 6 months old)
- TypeScript primary, with Python skills, Swift macOS layer, Shell scripts
- Top-level:
  .agents/, apps/, packages/, skills/, extensions/, src/, vendor/
The architectural commitments (from the OpenClaw deep dive report we generated by reading the source):
OpenClaw is a "local-first AI operator platform." The defining decision: one gateway governs identity, sessions, tools, plugins, channels, and approvals across all surfaces. New surfaces (Discord, iMessage, Google Chat, voice, CLI) plug into the same gateway and inherit auth, session safety, and control rules.
Other architectural notables:
- Multi-channel ingress at the transport edge: each channel normalizes its native message format into validated assistant-readable events before reaching the agent runtime
- Browser automation via Playwright + Chrome DevTools Protocol
- Durable memory via SQLite + vector storage, hookable per session
- Cross-platform daemons: launchd / systemd / Windows scheduled tasks for "always-on" behavior
Where this wins: cross-surface autonomy. The same agent answers your iMessage, manages your browser, and runs scheduled tasks — without you switching tools.
Where this loses: complexity. Local-first means you bring the runtime; multi-surface means there's a lot to set up; autonomous means the failure modes are larger when something goes wrong.
For the full deep dive on Manus, see How Manus Actually Works (W3 launch).
## What they share, what they differ on
Three primitives are common to all four shapes:
- Tool / function calling. Whether it's MCP's protocol-defined tools, Cursor's @Codebase, Claude Code's shell-callable utilities, or OpenClaw's plugin gateway — every shape exposes invokable capabilities to the model. The architecture differs (protocol-defined vs. product-built-in), but the primitive is universal.
- Context and memory management. Every tool has to fit the relevant code into the model's context window. Cursor uses vector indexing. Claude Code reads files lazily on tool calls. MCP's resource model is read-on-demand. OpenClaw uses durable session-scoped memory. Different architectures, same problem.
- Streaming output. Every tool streams response tokens as they're generated. This is now table stakes; users expect output before a generation finishes.
Beyond those three, the architectures diverge sharply on:
- Where the agent runs: terminal (Claude Code) / editor (Cursor) / nowhere, just a wire (MCP) / everywhere (OpenClaw)
- How context is acquired: model has to ask via tool (Claude Code, MCP) / tool pre-indexes (Cursor) / channel ingress (OpenClaw)
- What the agent is optimized to ship: completed tasks (Claude Code), in-flight code (Cursor), composable interactions (MCP), cross-surface workflows (OpenClaw)
- Source visibility: open (Claude Code, MCP, OpenClaw) vs. closed (Cursor, Copilot)
## Why the architecture matters when you're picking
The marketing pages of every tool above promise roughly the same things: "AI that understands your code," "agentic capabilities," "production-ready." The architectures are where those promises become concrete or fall apart.
If you pick Cursor expecting Claude-Code-style autonomy, you'll be frustrated by the editor sandbox. If you pick Claude Code expecting Cursor's autocomplete UX, you'll be frustrated by the terminal. If you adopt MCP expecting a product, you'll be frustrated by having to build the product around it. If you deploy OpenClaw expecting a single-purpose tool, you'll be frustrated by the configuration surface area.
Reading the architecture clarifies what trade-off you're absorbing. That's the point of this exercise.
## Where to drill in
The deep dives below read the actual source of each tool and produce structured architectural writeups:
- How Claude Code Actually Works — agentic terminal teardown
- How Cursor Works Inside — closed-source, public-surface research
- How MCP Works — protocol-level walkthrough
- How Manus Actually Works — cross-surface personal AI
- How v0.dev Works — Vercel's UI generator
- How Lovable Works — app generator architecture
- How ComfyUI Works — custom-node architecture
- How AutoGPT Actually Works — original autonomous agent
For the broader brand context — what AI Code Research is and how it generates analyses like these — see What Is AI Code Research?.
## How to use this map
Start with your job, not the tools.
- I'm writing code, I want help while I write → in-IDE assistant (Cursor, Copilot, Windsurf)
- I'm running multi-step coding tasks, I want autonomy → agentic terminal (Claude Code)
- I'm building tooling that should compose with multiple AI clients → protocol layer (MCP)
- I'm running automation across multiple surfaces (chat, browser, daemons) → personal AI agent (OpenClaw, Manus)
- I'm doing pre-build research on tools, projects, or migrations → AI Code Research (us)
The shapes don't compete; they complement. Engineering teams in 2026 typically use 2–3 of them in combination — Cursor + Claude Code + MCP servers is a common stack.
Try AI Code Research → — read any AI coding tool's source and get an engineer's answer in roughly 60 seconds.