How to Build an App Like Linear
Building a Linear-like product means solving one problem first: how do you make issue management feel instant and reliable at any team size? Linear reportedly reached $35M ARR with a three-person engineering team by making responsiveness the non-negotiable product contract: not a feature, but a requirement that every architectural decision serves. The tech stack is standard (React, TypeScript, PostgreSQL). The differentiator is sync discipline and a culture that treats latency as a product bug.
"Build me something like Linear" sounds specific. It usually hides four different requests:
- "I want the product to feel fast."
- "I want teams to trust it for daily execution, not just occasional tracking."
- "I want high signal and low visual clutter."
- "I do not want enterprise complexity in the first version."
If you treat this as a feature-clone project, you will fail. If you treat it as a product-feel and architecture-tradeoff project, you have a realistic path.
What people misread about Linear
The visible layer of Linear is beautiful, minimal UI. The invisible layer — the thing that makes it actually work — is sync discipline, data model consistency, and a performance culture that is reflected in every engineering decision.
Linear's own engineering writeup on their sync engine is illuminating: they describe the technical debt that accumulated as they scaled, the API constraints that emerged from early decisions, and the specific work required to maintain responsiveness at scale. The conclusion is not that responsiveness is nice to have — it is the core product contract. Users who work in Linear every day are implicitly trusting that the product will not slow down as the issue count grows.
This means two things for someone building in this category:
- You should not copy features; you should copy the principle. Every feature in Linear exists to serve fast, reliable daily execution. If a feature would slow the core loop, Linear cut it or deferred it.
- The architecture decisions are not optional decorations. The sync model, the state management, the API design — these are chosen to serve the responsiveness contract. If you adopt Linear's design language without the architecture, you get the appearance of the product without the substance.
Start from one loop, not one backlog
The most common mistake in this category is trying to build the whole system at once. Linear's current product has years of layering. Your v1 should have one layer: a single workflow loop.
Pick one workflow loop and make it excellent:
- capture → prioritize: the user creates issues, the team organizes them into priority order
- prioritize → execute: the team takes prioritized work and moves it through in-progress states
- execute → close: the team ships work and records completion with relevant context
Remove everything not required to make that loop feel immediate and reliable. The features that make Linear great (cycles, projects, roadmaps, integrations) are all layered on top of a fast, reliable core loop. Build the core loop first. Everything else is worth building only once the core loop works.
Two viable v1 architecture paths
The fundamental architectural choice in this category is how you handle state synchronization. There are two honest paths.
Path A: server-first, optimistically-rendered client
Use when:
- Offline usage is not a real requirement for your users
- Your team is small and delivery speed matters more than optimal UX
- You want to validate product-market fit before investing in complex sync
How it works: the server is the source of truth. The client sends mutations to the API and re-renders on confirmation. Optimistic updates (showing the change immediately before server confirmation) can make this feel fast for most operations.
Tradeoff: simpler to build and reason about, but creates visible latency on poor connections and requires careful handling of failed mutations. The user experience degrades predictably when the network degrades.
When this path works well: most B2B SaaS products with reasonable network assumptions are fine with this model. If your users are in offices with reliable internet and your mutations are infrequent (adding tasks, updating status), optimistic rendering with a clean server-first model works.
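The optimistic-update pattern at the heart of Path A can be sketched in a few lines. This is a minimal illustration, not Linear's implementation: the cache, the `apiUpdateIssue` stand-in, and all names are assumptions for the example.

```typescript
type IssueStatus = "todo" | "in_progress" | "done";

interface Issue {
  id: string;
  title: string;
  status: IssueStatus;
}

// In-memory client cache standing in for your real state store.
const cache = new Map<string, Issue>();

// Stand-in for a network round trip; a real client would PATCH here.
async function apiUpdateIssue(
  id: string,
  patch: Partial<Issue>
): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 10));
}

// Apply the change locally first, then confirm with the server.
// On failure, restore the snapshot so the UI never shows a lie.
async function updateIssueOptimistically(
  id: string,
  patch: Partial<Issue>
): Promise<boolean> {
  const previous = cache.get(id);
  if (!previous) return false;

  cache.set(id, { ...previous, ...patch }); // render immediately

  try {
    await apiUpdateIssue(id, patch); // confirm with server
    return true;
  } catch {
    cache.set(id, previous); // roll back on failure
    return false;
  }
}
```

The snapshot-and-rollback step is the part teams most often skip, and it is exactly the "careful handling of failed mutations" the tradeoff above refers to.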
Path B: local-first, eventually-consistent sync
Use when:
- Perceived speed under real-world network conditions is a core differentiator
- Users work in environments where connectivity is unreliable
- Collaboration is frequent (multiple users edit the same objects simultaneously)
How it works: the client holds authoritative local state. Operations are applied locally and synced to the server asynchronously. Conflict resolution is required when concurrent edits create divergent states.
Tradeoff: significantly stronger perceived speed and offline tolerance, at the cost of substantially higher sync complexity. Conflict resolution, merge strategies, and sync debugging are non-trivial problems that consume significant engineering time.
The honest warning: most teams that choose Path B underestimate the sync engineering investment. The complexity is not in writing the initial sync logic — it is in handling edge cases: users who come back online with a week of offline edits, concurrent edits to the same field, objects that were deleted on one client and modified on another. These cases are rare but visible, and they damage trust when they occur.
The practical v1 stack (for most teams)
For a team starting fresh with real users as the goal:
- Frontend: a modern web client with disciplined local state management. Use optimistic updates for the interactions that matter most (status transitions, assignments, quick edits).
- API layer: a clean REST or GraphQL API with real-time subscriptions for the specific operations that need live updates (issue status, presence indicators).
- Database: a relational database with a schema that matches your core issue model closely. Over-normalizing too early creates join complexity; under-normalizing creates migration pain later.
- Background jobs: for non-blocking operations — sending notifications, updating search indexes, computing rollup counts.
- Event logging: instrument every meaningful user action from day one, even if you do not analyze the data immediately. This is the foundation of your future cycle metrics and team analytics features.
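The event-logging item above can start as something very small: an append-only, typed log. The event names and fields below are illustrative choices, not a prescribed schema.

```typescript
interface UserEvent {
  name: "issue_created" | "status_changed" | "issue_assigned";
  actorId: string;
  issueId: string;
  at: number; // epoch milliseconds
  props?: Record<string, unknown>;
}

// In production this would batch and ship to your analytics pipeline;
// an in-memory array is enough to establish the discipline on day one.
const eventLog: UserEvent[] = [];

function track(event: UserEvent): void {
  eventLog.push(event);
}

// Even the raw log answers basic questions before any analytics exist,
// e.g. how often a given action happens.
function countEvents(name: UserEvent["name"]): number {
  return eventLog.filter((e) => e.name === name).length;
}
```

The point is not the storage mechanism but the habit: every meaningful action gets a named, structured event, so the cycle metrics in v2 are a query rather than a retrofit.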
Do not add a separate sync layer, CRDT library, or local-first database until user behavior and scale provide specific evidence that the server-first model is insufficient.
The "not now" list (keep out of v1 unless your specific case demands it)
These features appear in mature products in this category. They are not appropriate for v1 unless your specific user research shows otherwise:
- Custom workflow automation builder: the trigger/action engine that lets users define their own status flows. This is a platform feature, not a core product feature.
- Deep permission matrix: granular role definitions, view restrictions, and permission inheritance. Start with a simple model (admin, member, guest) and add complexity when real users ask for it.
- Plugin or integration ecosystem: third-party developers building on your platform is a growth strategy, not a product strategy. It requires API stability you do not have yet.
- Advanced multi-tenant analytics: reporting across teams, velocity charts, cycle time tracking. These are valuable, but they require having teams actually using the product first.
The pattern is consistent: every item on this list is a feature that requires a thriving core product as its prerequisite. Build the prerequisite.
What to measure in week one
Before adding any features after launch, verify these metrics:
- Time-to-first-action: from landing on the product to completing the first meaningful task. Linear's reputation is built on this being fast.
- Action-to-confirmation latency: how long between user action and visible state update? Users tolerate up to ~200ms without noticing; above 500ms starts to register as "slow."
- Session task completion rate: what percentage of sessions end with at least one task completed? Low completion rate often indicates the core loop has friction.
- Error rate on critical paths: status transitions, issue creation, assignment. Any error here is visible and trust-damaging.
If these numbers are poor, adding features makes the problem worse, not better. A slow, unreliable product with more features is still a slow, unreliable product.
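The latency thresholds above can be turned into a simple client-side instrument. The bucket names and the wrapper are illustrative, assuming any async mutation can be wrapped this way.

```typescript
type LatencyBucket = "instant" | "noticeable" | "slow";

// Thresholds from the text: <=200ms goes unnoticed, >500ms reads as slow.
function bucketLatency(ms: number): LatencyBucket {
  if (ms <= 200) return "instant";
  if (ms <= 500) return "noticeable";
  return "slow";
}

// Wrap any async mutation to record how long confirmation took.
async function timed<T>(
  action: () => Promise<T>,
  report: (bucket: LatencyBucket, ms: number) => void
): Promise<T> {
  const start = Date.now();
  const result = await action();
  const ms = Date.now() - start;
  report(bucketLatency(ms), ms);
  return result;
}
```

Wiring `report` to your event log means action-to-confirmation latency is a dashboard from week one instead of an anecdote.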
Scope boundary example
v1 (build this):
- Create, edit, archive issues
- Assign to team members
- Status transitions (todo, in progress, done)
- Keyboard shortcuts for common actions
- Fast list view and detail view
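The status transitions and keyboard shortcuts in the v1 list can share one small module. The specific transition rules and keys below are illustrative choices; your own model may allow different moves.

```typescript
type Status = "todo" | "in_progress" | "done";

// Explicit legal moves keep the state machine auditable. In this sketch,
// todo -> done must pass through in_progress; relax as your users demand.
const transitions: Record<Status, Status[]> = {
  todo: ["in_progress"],
  in_progress: ["todo", "done"],
  done: ["in_progress"],
};

function canTransition(from: Status, to: Status): boolean {
  return transitions[from].includes(to);
}

// One keystroke per common action keeps the core loop fast.
const shortcuts: Record<string, Status> = {
  "1": "todo",
  "2": "in_progress",
  "3": "done",
};
```

Keeping the transition table as data (rather than scattered `if` checks) also makes the v2 feature "custom workflow states" a data migration instead of a rewrite.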
v2 (build after v1 is working):
- Cycles / sprints
- Projects and roadmaps
- Custom workflow states
- Automation rules
- Detailed team analytics
The principle behind this boundary: v1 is about proving that your core loop works and that users will come back. Everything in v2 is about improving outcomes for users who are already committed.
Related Reading on HowWorks
- How Notion Was Built: Block Model, Architecture, and Sync Pipeline — Deep dive into another product that bet on a specific data architecture
- How Top Tech Products Are Built: A Guide for Non-Developers — Framework for extracting architectural decisions from any product's primary sources
- How to Build an App Like Perplexity — Architecture breakdown of a different AI product category
- How to Validate a Startup Idea Before You Build — Validation framework before committing to an architecture
Sources
- Linear engineering on sync scaling, API constraints, and responsiveness tradeoffs: Scaling the Linear Sync Engine
- Figma on realtime collaboration complexity and client/server sync design: How Figma's multiplayer technology works