
How to Build an App Like Linear: Architecture, Stack, and Tradeoffs (2026)

Linear reached $35M ARR with 3 engineers by making one architectural bet: responsiveness above all else. Here's a founder-focused breakdown of Linear's sync architecture, the two honest v1 paths, and what to copy vs what to leave for v2.

By HowWorks Team

Key takeaways

  • Linear-like products win on responsiveness and information density, not feature count — copy the feel before copying the features.
  • Start with one workflow loop and make it fast before expanding to adjacent modules.
  • Local-first patterns improve perceived speed but substantially increase sync complexity; choose based on your actual offline requirements.
  • A scoped architecture plan beats feature parity — most 'Linear clone' projects fail by copying the breadth, not the depth.

Decision checklist

  1. Define one core workflow loop (capture → prioritize, or prioritize → execute, or execute → close) and build only that in v1.
  2. Set strict v1 boundaries: no custom workflow automation engine, no plugin system, no advanced permission matrix.
  3. Choose your sync model based on your actual offline and collaboration frequency requirements, not best-case scenarios.
  4. Measure perceived speed and task completion rate before adding any features — these are your product health metrics.

How to Build an App Like Linear

Building a Linear-like product means solving one problem first: how do you make issue management feel instant and reliable at any team size? Linear reached $35M ARR with 3 engineers by making responsiveness the non-negotiable product contract — not a feature, but a requirement that every architectural decision serves. The tech stack is standard (React, TypeScript, PostgreSQL). The differentiator is the sync discipline and the culture of treating latency as a product bug.

"Build me something like Linear" sounds specific. It usually hides four different requests:

  • "I want the product to feel fast."
  • "I want teams to trust it for daily execution, not just occasional tracking."
  • "I want high signal and low visual clutter."
  • "I do not want enterprise complexity in the first version."

If you treat this as a feature-clone project, you will fail. If you treat it as a product-feel and architecture-tradeoff project, you have a realistic path.

What people misread about Linear

The visible layer of Linear is beautiful, minimal UI. The invisible layer — the thing that makes it actually work — is sync discipline, data model consistency, and a performance culture that is reflected in every engineering decision.

Linear's own engineering writeup on their sync engine is illuminating: they describe the technical debt that accumulated as they scaled, the API constraints that emerged from early decisions, and the specific work required to maintain responsiveness at scale. The conclusion is not that responsiveness is nice to have — it is the core product contract. Users who work in Linear every day are implicitly trusting that the product will not slow down as the issue count grows.

This means two things for someone building in this category:

  1. You should not copy features; you should copy the principle. Every feature in Linear exists to serve fast, reliable daily execution. If a feature would slow the core loop, Linear cut it or deferred it.
  2. The architecture decisions are not optional decorations. The sync model, the state management, the API design — these are chosen to serve the responsiveness contract. If you adopt Linear's design language without the architecture, you get the appearance of the product without the substance.

Start from one loop, not one backlog

The most common mistake in this category is trying to build the whole system at once. Linear's current product has years of layering. Your v1 should have one.

Pick one workflow loop and make it excellent:

  • capture → prioritize: the user creates issues, the team organizes them into priority order
  • prioritize → execute: the team takes prioritized work and moves it through in-progress states
  • execute → close: the team ships work and records completion with relevant context

Remove everything not required to make that loop feel immediate and reliable. The features that make Linear great (cycles, projects, roadmaps, integrations) are all layered on top of a fast, reliable core loop. Build the core loop first; everything else earns its place only once that loop works.

Two viable v1 architecture paths

The fundamental architectural choice in this category is how you handle state synchronization. There are two honest paths.

Path A: server-first, optimistically-rendered client

Use when:

  • Offline usage is not a real requirement for your users
  • Your team is small and delivery speed matters more than optimal UX
  • You want to validate product-market fit before investing in complex sync

How it works: the server is the source of truth. The client sends mutations to the API and re-renders on confirmation. Optimistic updates (showing the change immediately before server confirmation) can make this feel fast for most operations.

Tradeoff: simpler to build and reason about, but creates visible latency on poor connections and requires careful handling of failed mutations. The user experience degrades predictably when the network degrades.

When this path works well: most B2B SaaS products with reasonable network assumptions are fine with this model. If your users are in offices with reliable internet and your mutations are infrequent (adding tasks, updating status), optimistic rendering with a clean server-first model works.
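The optimistic-update flow in Path A can be sketched as follows. This is a minimal illustration, not a prescribed implementation: `sendToServer` and the in-memory `Map` stand in for your real API client and client-side store.

```typescript
// Sketch of a server-first mutation with optimistic rendering.
// The server remains the source of truth; the client shows the change
// immediately and rolls back if the server rejects it.
type Issue = { id: string; status: string };
type MutationResult = { ok: boolean };

async function updateStatus(
  issues: Map<string, Issue>,
  id: string,
  nextStatus: string,
  sendToServer: (id: string, status: string) => Promise<MutationResult>,
): Promise<void> {
  const issue = issues.get(id);
  if (!issue) return;
  const previous = issue.status;

  // 1. Apply optimistically: the UI re-renders from local state at once.
  issue.status = nextStatus;

  try {
    const result = await sendToServer(id, nextStatus);
    // 2. On server rejection, roll back to the last confirmed state.
    if (!result.ok) issue.status = previous;
  } catch {
    // 3. On network failure, roll back and surface a retry affordance.
    issue.status = previous;
  }
}
```

The rollback path is the part teams most often skip; it is exactly the "careful handling of failed mutations" mentioned above.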

Path B: local-first, eventually-consistent sync

Use when:

  • Perceived speed under real-world network conditions is a core differentiator
  • Users work in environments where connectivity is unreliable
  • High collaboration frequency (multiple users editing the same objects simultaneously)

How it works: the client holds authoritative local state. Operations are applied locally and synced to the server asynchronously. Conflict resolution is required when concurrent edits create divergent states.

Tradeoff: significantly stronger perceived speed and offline tolerance, at the cost of substantially higher sync complexity. Conflict resolution, merge strategies, and sync debugging are non-trivial problems that consume significant engineering time.

The honest warning: most teams that choose Path B underestimate the sync engineering investment. The complexity is not in writing the initial sync logic — it is in handling edge cases: users who come back online with a week of offline edits, concurrent edits to the same field, objects that were deleted on one client and modified on another. These cases are rare but visible, and they damage trust when they occur.
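One way to sketch the conflict-resolution piece is a last-write-wins (LWW) merge on per-field timestamps: a deliberately simple strategy, and only one of several options (Linear's actual merge logic is not documented here). All names below are illustrative.

```typescript
// Minimal sketch of last-write-wins field merging for local-first sync.
// Each field carries the timestamp of its last write; on sync, the
// newer write wins, field by field.
type FieldValue = { value: string; updatedAt: number };
type IssueState = Record<string, FieldValue>;

function mergeLww(local: IssueState, remote: IssueState): IssueState {
  const merged: IssueState = { ...local };
  for (const [field, remoteField] of Object.entries(remote)) {
    const localField = merged[field];
    // Take the remote write only if it is strictly newer than ours.
    if (!localField || remoteField.updatedAt > localField.updatedAt) {
      merged[field] = remoteField;
    }
  }
  return merged;
}
```

Even this toy version hints at the edge cases above: LWW silently discards the older write, which is acceptable for a status field but can lose data on free-text fields, and it depends on clocks being roughly trustworthy.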

The practical v1 stack (for most teams)

For a team starting fresh with real users as the goal:

  • Frontend: a modern web client with disciplined local state management. Use optimistic updates for the interactions that matter most (status transitions, assignments, quick edits).
  • API layer: a clean REST or GraphQL API with real-time subscriptions for the specific operations that need live updates (issue status, presence indicators).
  • Database: a relational database with a schema that matches your core issue model closely. Over-normalizing too early creates join complexity; under-normalizing creates migration pain later.
  • Background jobs: for non-blocking operations — sending notifications, updating search indexes, computing rollup counts.
  • Event logging: instrument every meaningful user action from day one, even if you do not analyze the data immediately. This is the foundation of your future cycle metrics and team analytics features.

Do not add a separate sync layer, CRDT library, or local-first database until user behavior and scale provide specific evidence that the server-first model is insufficient.
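As a rough sketch of what "a schema that matches your core issue model" plus day-one event logging can look like, assuming an in-memory event sink as a placeholder for a real append-only store or analytics pipeline:

```typescript
// Sketch of a minimal core issue model with every meaningful action
// instrumented from day one. The event log here is an in-memory array;
// in production it would be an append-only table (an assumption, not a spec).
type Status = "todo" | "in_progress" | "done";

interface Issue {
  id: string;
  title: string;
  status: Status;
  assigneeId: string | null;
  createdAt: number;
}

interface AppEvent {
  name: string;
  issueId: string;
  at: number;
  meta?: Record<string, string>;
}

const eventLog: AppEvent[] = [];

function track(name: string, issueId: string, meta?: Record<string, string>): void {
  // Instrument every meaningful action, even before you analyze the data.
  eventLog.push({ name, issueId, at: Date.now(), meta });
}

function createIssue(id: string, title: string): Issue {
  const issue: Issue = { id, title, status: "todo", assigneeId: null, createdAt: Date.now() };
  track("issue_created", id);
  return issue;
}

function assign(issue: Issue, assigneeId: string): void {
  issue.assigneeId = assigneeId;
  track("issue_assigned", issue.id, { assigneeId });
}
```

The point is the discipline, not the shape: every mutation path emits an event, so cycle metrics and team analytics in v2 become a query rather than a retrofit.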

The "not now" list (keep out of v1 unless your specific case demands it)

These features appear in mature products in this category. They are not appropriate for v1 unless your specific user research shows otherwise:

  • Custom workflow automation builder: the trigger/action engine that lets users define their own status flows. This is a platform feature, not a core product feature.
  • Deep permission matrix: granular role definitions, view restrictions, and permission inheritance. Start with a simple model (admin, member, guest) and add complexity when real users ask for it.
  • Plugin or integration ecosystem: third-party developers building on your platform is a growth strategy, not a product strategy. It requires API stability you do not have yet.
  • Advanced multi-tenant analytics: reporting across teams, velocity charts, cycle time tracking. These are valuable, but they require having teams actually using the product first.

The pattern is consistent: every item on this list is a feature that requires a thriving core product as its prerequisite. Build the prerequisite.

What to measure in week one

Before adding any features after launch, verify these metrics:

  • Time-to-first-action: from landing on the product to completing the first meaningful task. Linear's reputation is built on this being fast.
  • Action-to-confirmation latency: how long between user action and visible state update? Users tolerate up to ~200ms without noticing; above 500ms starts to register as "slow."
  • Session task completion rate: what percentage of sessions end with at least one task completed? Low completion rate often indicates the core loop has friction.
  • Error rate on critical paths: status transitions, issue creation, assignment. Any error here is visible and trust-damaging.

If these numbers are poor, adding features makes the problem worse, not better. A slow, unreliable product with more features is still a slow, unreliable product.
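A minimal sketch of instrumenting action-to-confirmation latency against those thresholds (the helper names and exact bucket boundaries are illustrative):

```typescript
// Classify an action's latency against the thresholds above:
// up to ~200ms is effectively invisible, above 500ms registers as slow.
function classifyLatency(ms: number): "invisible" | "noticeable" | "slow" {
  if (ms <= 200) return "invisible";
  if (ms <= 500) return "noticeable";
  return "slow";
}

// Wrap any user-facing mutation to record how long confirmation took.
async function timeAction<T>(action: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await action();
  return { result, ms: Date.now() - start };
}
```

Wiring `timeAction` around status transitions and issue creation gives you the action-to-confirmation metric from week one, which is exactly when the baseline is cheapest to establish.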

Scope boundary example

v1 (build this):
- Create, edit, archive issues
- Assign to team members
- Status transitions (todo, in progress, done)
- Keyboard shortcuts for common actions
- Fast list view and detail view

v2 (build after v1 is working):
- Cycles / sprints
- Projects and roadmaps
- Custom workflow states
- Automation rules
- Detailed team analytics

The principle behind this boundary: v1 is about proving that your core loop works and that users will come back. Everything in v2 is about improving outcomes for users who are already committed.
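The v1 status model above can be enforced with a hardcoded transition map, deliberately not a user-configurable automation engine. A sketch, with illustrative transition rules:

```typescript
// v1 keeps workflow states fixed: a plain transition table, no custom
// automation builder. The specific allowed edges here are an example.
type Status = "todo" | "in_progress" | "done" | "archived";

const allowed: Record<Status, Status[]> = {
  todo: ["in_progress", "archived"],
  in_progress: ["todo", "done"],
  done: ["in_progress", "archived"],
  archived: [],
};

function canTransition(from: Status, to: Status): boolean {
  return allowed[from].includes(to);
}
```

When custom workflow states land in v2, this table becomes per-team data instead of a constant; the guard function does not change.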


FAQ

How was Linear built? What is Linear's architecture?

Linear is built around a local-first sync engine — client state is authoritative, with operations synced asynchronously to the server. The engineering team published their sync engine scaling post describing the specific technical debt that accumulated, the API constraints that emerged from early decisions, and the work required to maintain responsiveness at scale. The core product contract is that Linear stays fast even as issue count grows — every architectural decision serves that contract.

What is Linear's tech stack?

Linear uses a React frontend with TypeScript, a custom local-first sync architecture, PostgreSQL for server-side persistence, and a real-time sync layer. Their engineering blog describes the sync engine as one of the hardest technical investments — the local-first approach gives users perceived speed but required significant sync complexity to implement reliably.

Should I build a local-first or server-first issue tracker?

Start server-first unless offline usage or perceived speed under unreliable networks is a core, validated requirement. Server-first with optimistic updates handles 80% of issue tracker use cases. Local-first adds significant sync complexity — conflict resolution, reconnect handling, and edge case management — that consumes engineering time better spent validating product-market fit. Move to local-first when user behavior provides specific evidence that the server-first model is insufficient.

Can a small team build a Linear-like product?

Yes, if scope is narrowed to one core workflow loop. Linear reached $35M ARR with 3 engineers by focusing on a single interaction: fast issue management. The trap is trying to match Linear's full feature surface before finding product-market fit on the core experience. Pick one loop (capture → prioritize, or prioritize → execute, or execute → close) and make it excellent before adding cycles, projects, or roadmaps.

What is the hardest technical problem in building a Linear-like app?

Reliable state consistency under frequent concurrent updates — what Linear's engineering posts call the sync engine problem. When multiple users edit the same issue simultaneously, or when a user comes back online after working offline, the system needs to resolve conflicting states without visible errors. This is the problem that required Linear to eventually overhaul their sync architecture, and it's the problem that kills most issue tracker projects at scale.

What should I absolutely not build in a Linear clone v1?

Four features consistently kill v1 scope in this category: (1) Custom workflow automation engine — build the fixed status transitions first, automation is a platform feature. (2) Deep permission matrix — start with admin/member/guest, add complexity when users ask. (3) Plugin or integration ecosystem — requires API stability you don't have yet. (4) Advanced team analytics — requires a thriving core product as prerequisite. Build the core loop first; every one of these features requires the core to already be working.

How do I measure if my Linear-like product is working?

Four metrics before adding any features: time-to-first-action (from landing to first meaningful task), action-to-confirmation latency (under 200ms is invisible, over 500ms registers as slow), session task completion rate (what percentage of sessions end with at least one task completed), and error rate on status transitions and issue creation. If these numbers are poor, adding features makes the problem worse.
