A PRD (Product Requirements Document) is the single document that aligns your team on what to build, who it's for, and why it matters — before anyone writes code. This guide gives you a ready-to-use template, section-by-section writing instructions, and real examples. Plus: how to use AI to auto-generate technical sections from existing codebases.
The PRD Template
Copy this template and fill in each section. Guidance for each section follows below.
[Product/Feature Name] — PRD
Author: [Your name]
Date: [Date]
Status: Draft / In Review / Approved
Last Updated: [Date]
1. Problem Statement
What pain exists? Who feels it? How do we know?
[Describe the problem in 2-3 sentences. Include evidence: user quotes, support ticket volume, data from analytics, or competitive pressure. If you can't point to evidence, the problem may not be real.]
2. Target Users
Who specifically will use this? Who will NOT?
| Segment | Description | Priority |
|---|---|---|
| Primary | [Who benefits most] | Must serve |
| Secondary | [Who benefits partially] | Should serve |
| Out of scope | [Who this is NOT for] | Explicitly excluded |
3. Proposed Solution
What are we building? What does it do?
[Describe the solution in 3-5 sentences. Focus on what the user can do, not implementation details. If a wireframe exists, link it here.]
4. Scope
| In Scope | Out of Scope |
|---|---|
| [Feature/capability included] | [Feature/capability explicitly excluded] |
| [Feature/capability included] | [Feature/capability explicitly excluded] |
| [Feature/capability included] | [Feature/capability explicitly excluded] |
The out-of-scope column is the most important part of this section. Be explicit.
5. User Stories (optional)
- As a [user type], I want to [action] so that [outcome].
- As a [user type], I want to [action] so that [outcome].
6. Technical Approach
How will this be built? What are the key architecture decisions?
- Architecture: [High-level approach — new service, extension of existing, third-party integration]
- Dependencies: [What existing systems does this touch?]
- Data model changes: [New tables, schema changes, migrations]
- API changes: [New endpoints, breaking changes]
- Key constraints: [Performance requirements, security, compliance]
Tip: If you're building something similar to an existing open-source project, analyze that project's architecture first. Tools like HowWorks can translate any codebase into plain-language technical documentation — useful for PMs who need to understand implementation patterns without reading code.
7. Success Metrics
| Metric | Target | Measurement Method |
|---|---|---|
| [Primary metric] | [Specific number] | [How you'll measure] |
| [Secondary metric] | [Specific number] | [How you'll measure] |
| [Guardrail metric] | [Should not decrease] | [How you'll measure] |
8. Milestones & Timeline (optional)
| Milestone | Target Date | Owner |
|---|---|---|
| PRD approved | [Date] | PM |
| Design complete | [Date] | Design |
| Engineering complete | [Date] | Eng |
| Launch | [Date] | PM |
9. Open Questions
- [Unresolved question that affects scope or approach]
- [Unresolved question that affects scope or approach]
How to Write a PRD: Section-by-Section Guide
Problem Statement: Lead with Evidence
The most common PRD mistake: describing a feature instead of a problem.
Bad: "We need to add a dashboard for users to view their analytics."
Good: "Power users (12% of MAU) are exporting raw data to spreadsheets weekly to build their own analytics views. Support tickets about data access increased 40% last quarter. Three churned enterprise accounts cited 'lack of visibility' in exit interviews."
The first version tells engineering what to build. The second tells them why — which leads to better solutions.
Scope: The Out-of-Scope Column Matters Most
In the AI era, building is cheap. The expensive mistake is building the wrong thing. Your out-of-scope list protects the team from scope creep.
For every item in your scope, ask: "If we cut this, would the feature still solve the core problem?" If yes, move it to out-of-scope for v1.
Technical Approach: Research Before You Specify
This is where most PMs either (a) skip the section entirely, leaving engineering to guess, or (b) over-specify implementation details they don't fully understand.
The better approach: research how similar products solve this problem, then summarize what you learned.
Concrete workflow:
- Search for open-source projects that solve a similar problem
- Analyze their architecture — what tech stack do they use? What are the key design decisions?
- Summarize the patterns you found in the Technical Approach section
- Let engineering decide the specific implementation, informed by your research
Example: Instead of writing "Use WebSockets for real-time updates," write: "Linear and Figma both use real-time sync — Linear with a custom sync engine, Figma with CRDTs. Real-time updates are a requirement; the specific approach should be an engineering decision. See Linear architecture analysis for reference."
This gives engineering useful context without micromanaging the implementation.
Success Metrics: Be Specific or Don't Bother
Bad: "Increase user engagement."
Good: "Increase weekly active usage of the analytics dashboard from 0% (doesn't exist yet) to 15% of MAU within 8 weeks of launch. Guardrail: overall app load time should not increase by more than 200ms."
Every metric needs three things: a number, a timeframe, and a measurement method.
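One way to keep yourself honest is to treat each row of the Success Metrics table as structured data and check that all three pieces are present. A minimal sketch; the `Metric` dataclass and its field names are illustrative, not from any real analytics tool:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """One row of the Success Metrics table."""
    name: str
    target: float      # the specific number, e.g. 15.0
    unit: str          # e.g. "% of MAU", "ms"
    timeframe: str     # e.g. "within 8 weeks of launch"
    measurement: str   # e.g. "Analytics", "APM"

    def is_specific(self) -> bool:
        # Actionable only if number, timeframe, and method are all present.
        return bool(self.target and self.timeframe and self.measurement)

dashboard_usage = Metric(
    name="Weekly active usage of analytics dashboard",
    target=15.0, unit="% of MAU",
    timeframe="within 8 weeks of launch",
    measurement="Analytics",
)
assert dashboard_usage.is_specific()
```

A metric that fails `is_specific()` is the "increase user engagement" kind: it reads fine in a slide but can't be evaluated after launch.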
PRD Example: AI-Powered Search Feature
Here's a filled example using the template above.
AI-Powered Project Search — PRD
Author: Sarah Chen
Date: 2026-03-01
Status: In Review
Last Updated: 2026-03-10
1. Problem Statement
Users currently rely on keyword search to find projects, which returns poor results for intent-based queries. 65% of searches result in zero clicks (analytics, Feb 2026). User interviews (n=12) consistently mention "I know what I want to build but can't describe it in keywords." Competitor X launched semantic search in January and has seen 3x growth in their discovery metrics.
2. Target Users
| Segment | Description | Priority |
|---|---|---|
| Primary | Builders searching for reference implementations ("I want to build something like X") | Must serve |
| Secondary | PMs researching competitive landscape | Should serve |
| Out of scope | Users browsing casually with no specific intent | Explicitly excluded |
3. Proposed Solution
Replace keyword search with AI-powered semantic search. Users describe what they want to build in natural language, and the system returns relevant projects ranked by architectural similarity — not just keyword match. Results include a one-sentence explanation of why each project is relevant.
4. Scope
| In Scope | Out of Scope |
|---|---|
| Natural language query input | Voice input |
| Semantic ranking of results | Personalized recommendations based on history |
| "Why this result" explanation per result | Full architecture comparison between results |
| English queries | Non-English query support (v2) |
5. Technical Approach
- Architecture: New search service wrapping existing project index + embedding model for semantic matching
- Dependencies: Project metadata index (existing), embedding API (new — evaluate OpenAI ada-002 vs Cohere)
- Data model changes: Add vector column to project index for embeddings
- Key constraint: P95 search latency must stay under 500ms
Reference: Perplexity's search architecture uses a five-stage RAG pipeline with hybrid retrieval (keyword + semantic). Our implementation will be simpler — single-stage semantic ranking on top of existing keyword index. See Perplexity architecture analysis for patterns.
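The single-stage semantic ranking described above can be sketched in a few lines, assuming embeddings are already computed (by whichever embedding API wins the evaluation). The in-memory index and function names here are illustrative; the real service would score against the new vector column on the project index:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_rank(query_vec, index, top_k=10):
    """index: list of (project_id, embedding) pairs, e.g. loaded
    from the vector column on the project index."""
    scored = [(pid, cosine(query_vec, vec)) for pid, vec in index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dim "embeddings" (real ones have hundreds of dimensions).
index = [("proj-a", [1.0, 0.0, 0.0]),
         ("proj-b", [0.9, 0.1, 0.0]),
         ("proj-c", [0.0, 1.0, 0.0])]
results = semantic_rank([1.0, 0.05, 0.0], index, top_k=2)
# proj-a and proj-b outrank proj-c for this query vector.
```

In production, brute-force scoring like this is what an approximate-nearest-neighbor index replaces once the project count makes the P95 latency budget tight.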
6. Success Metrics
| Metric | Target | Measurement |
|---|---|---|
| Search click-through rate | From 35% to 55% | Analytics |
| Zero-result searches | From 65% to under 20% | Analytics |
| Search-to-signup conversion | 5% improvement | Funnel tracking |
| Search latency P95 | Under 500ms | APM |
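The P95 latency guardrail in the table above can be verified the same way an APM computes it. A minimal nearest-rank sketch, assuming you have raw per-request latencies in milliseconds (sample values below are made up):

```python
import math

def p95(latencies_ms):
    """Nearest-rank 95th percentile, as most APM tools report it."""
    if not latencies_ms:
        raise ValueError("no samples")
    ordered = sorted(latencies_ms)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[rank]

samples = [120, 180, 210, 260, 300, 340, 410, 450, 480, 620]
over_budget = p95(samples) > 500  # True here: the 620 ms tail breaches the guardrail
```

Note that P95 is driven entirely by the slow tail: nine fast requests don't offset one 620 ms outlier, which is exactly why the constraint is stated as a percentile rather than an average.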
7. Open Questions
- Should we keep keyword search as a fallback or replace it entirely?
- What embedding model gives the best relevance-to-cost ratio at our scale?
- Do we need to re-index all existing projects or can we do it incrementally?
Using AI to Speed Up PRD Writing
Auto-Generate Technical Sections from Code
If you're building something similar to an existing open-source project, you don't need to reverse-engineer the technical approach from scratch.
Workflow:
- Find reference projects: Search for open-source projects that solve a similar problem. HowWorks lets you search by describing what you want to build and returns relevant projects.
- Analyze architecture: Use HowWorks DeepDive to get a plain-language breakdown of any project's architecture, tech stack, and design decisions — no code reading required.
- Generate documentation: Code-to-Docs translates any codebase into structured technical documentation that you can directly reference in your PRD's Technical Approach section.
This saves hours of back-and-forth with engineering and produces more accurate technical context than guessing.
Use AI for Drafting and Review
Once you have your research:
- Draft: Paste your bullet points into Claude or ChatGPT and ask it to structure them into a PRD format
- Review: Ask AI to check for missing sections, ambiguous requirements, and scope gaps
- Refine: Ask engineering to review the technical approach section — the best PRDs are co-authored
Common PRD Mistakes
Writing features, not problems. If your PRD starts with "We need to build X," you've already made the biggest mistake. Start with the problem and evidence.
No out-of-scope list. Without explicit exclusions, scope creep is guaranteed. Every stakeholder will assume their pet feature is included.
Over-specifying implementation. The PRD owns the what and why. Engineering owns the how. Write enough to communicate constraints and context, then let engineers design the solution.
No success metrics. A PRD without measurable outcomes is a wishlist, not a requirements document. If you can't define success, you can't evaluate whether the feature was worth building.
Skipping research. Don't write a technical approach from assumptions; study how similar products actually work first. Thirty minutes of architecture research produces better specs than hours of guessing.