Product Research · 9 min read

Product Research Checklist for Founders: 8 Sections Before You Build (2026)

90% of startup failures trace back to poor market research and a failure to understand customer needs (CB Insights, 2025). This 8-section checklist surfaces the gaps founders avoid: who the user is (with evidence), where alternatives break, what technical patterns exist, and what explicitly stays out of v1.

By HowWorks Team

Key takeaways

  • 90% of startup failures are traceable to poor market research and failure to understand customer needs (CB Insights, 2025).
  • A checklist prevents the selective amnesia that affects founders who are excited about their idea.
  • Reference products and reference implementations are both required — one tells you what users expect, the other tells you what the build actually involves.
  • The most valuable output of product research is the explicit out-of-scope list for v1.
  • More than 30% unknowns in this checklist means you are in discovery mode, not build mode.

Decision checklist

  1. Define target user, context, and top 3 job-to-be-done scenarios with evidence from real user conversations.
  2. Map 3 direct and 3 indirect alternatives users already rely on, and document where each breaks.
  3. Collect 3–5 open-source references and note the specific technical pattern each one represents.
  4. Publish a one-page build thesis and formally lock v1 scope before beginning implementation.

Product Research Checklist for Founders

This is an 8-section checklist that surfaces the research gaps founders avoid: user evidence, problem validation, competitor failure points, reference implementations, architecture constraints, and explicit v1 scope. 90% of startup failures trace back to poor market research and failure to understand customer needs (CB Insights, 2025). The checklist takes 4–8 hours to complete — less time than most teams spend on their first sprint planning session, and far less than the cost of building the wrong thing.

This checklist is not a "nice to have." It is a filter against self-deception.

Use it when your team feels momentum but cannot explain, in plain language, why this specific product should exist now for this specific user. The checklist does not generate insight — it surfaces the gaps you have been avoiding.

How to use this checklist honestly

Complete the core checklist in one sitting (90–120 minutes); the full first pass, including finding users to talk to, takes 4–8 hours. Use evidence only — no "we think users probably..." entries. If you are unsure about something, mark it as unknown rather than forcing a confident-sounding answer.

Count your unknowns when you finish. If more than 30% of the items are unknown or guessed, you are in discovery mode. Build mode requires evidence.

This distinction matters because build mode and discovery mode require different resource allocations. Running build-mode practices (sprints, velocity tracking, feature planning) in discovery mode burns time and produces false confidence. Running discovery-mode practices (user interviews, assumption tests, reference analysis) in build mode creates delays without improving direction quality. Know which mode you are in.
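The 30% threshold can be made mechanical. A minimal sketch, assuming you record each checklist item as evidence-backed, guessed, or unknown (the item names below are placeholders, and guesses count as unknowns because they carry no evidence):

```python
def research_mode(answers):
    """Classify checklist results as 'build' or 'discovery' mode.

    `answers` maps each checklist item to one of:
      'evidence' - backed by real user data
      'guess'    - a confident-sounding assumption
      'unknown'  - explicitly marked unknown
    Only evidence-backed answers count; guesses are treated as unknowns.
    """
    if not answers:
        return "discovery"
    unknowns = sum(1 for v in answers.values() if v != "evidence")
    return "discovery" if unknowns / len(answers) > 0.30 else "build"

# Hypothetical checklist results for illustration.
answers = {
    "target user": "evidence",
    "painful workflow": "evidence",
    "switch trigger": "unknown",
    "willingness to pay": "guess",
    "substitute behavior": "evidence",
    "reference repos": "evidence",
    "v1 metric": "evidence",
    "technical risk": "evidence",
    "distribution channel": "evidence",
    "build-vs-buy": "evidence",
}
print(research_mode(answers))  # 2/10 non-evidence -> "build"
```

The point of coding the rule is not precision — it is that "guess" and "evidence" must be written down for every item, which is exactly the self-deception the checklist is designed to prevent.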


1) Problem definition (no storytelling, no generalizations)

  • Exact user profile: job title, context, constraints, and what they are responsible for
  • Repeated painful workflow: documented across multiple users, not one memorable anecdote
  • Frequency and urgency: how often does this happen? How costly is each occurrence?
  • Consequence of inaction: what happens if the problem remains unsolved for six more months?

Common failure mode here: describing the problem in terms of your solution ("users need a better way to do X") rather than in terms of their current experience. A well-defined problem is stated entirely from the user's perspective, using their language.


2) Existing behavior map

  • Current tools or processes users already rely on to handle this problem
  • Why those substitutes remain acceptable today (cost, inertia, familiarity)
  • Where the current substitutes break in real usage — specific scenarios, not general complaints

Important: if users have no current behavior around this problem, urgency is usually low. No substitute behavior typically means users have adapted to living without a solution. That is a harder market to activate than one where users are already paying with time or money for an imperfect workaround.

The substitute map also tells you what you are really competing against. For most startup ideas, the primary competitor is not another startup — it is a spreadsheet, a Notion database, a manual workflow, or a combination of existing tools stitched together with Zapier.


3) Solution hypothesis

  • One-sentence value proposition (who, what outcome, compared to what)
  • One measurable v1 outcome that proves the product is working
  • Clear, specific reason why users would switch now rather than six months from now

The "switch now" question is often skipped because the answer is uncomfortable. If you cannot identify a specific trigger — a new regulation, a recent painful incident, a workflow change — the adoption timeline is harder to predict. This does not mean you should not build, but it changes how you plan go-to-market.


4) Reference intelligence

  • 3–5 reference products: what does each one optimize for? What user segment?
  • 3–5 reference implementations: open-source GitHub repos, with stars, maintenance status, and language
  • For each reference: what technical pattern did they choose, and what did they explicitly not build?
  • What constraints appear in their public docs, issue trackers, and changelogs?

How to read a reference repo fast: do not start with the code. Start with the README (claimed scope), the dependency file (build-vs-buy choices), and the open issues (real problems). You learn more from the issue tracker of a mature open-source project than from reading its implementation.


5) Build vs buy boundary

For each category below, write one of: build, use an existing library/service, or defer.

  • Auth
  • Payments
  • Search
  • Real-time / sync
  • Storage / file handling
  • Email / notifications

Rule: if everything in this list is "build," your v1 scope is probably unrealistic. The teams that ship fastest and most reliably are the ones who are most opinionated about what they will outsource. Building auth from scratch when Auth0 or Supabase Auth exist is not a competitive advantage — it is six weeks of implementation and security review that could be spent on your actual differentiation.
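The boundary can be written down as a simple table and checked against the "not everything is build" rule. A sketch with a hypothetical v1 boundary (the category names follow the list above; the decisions are examples, not recommendations):

```python
from collections import Counter

# Hypothetical v1 build-vs-buy boundary for illustration.
boundary = {
    "auth": "buy",             # e.g. a hosted auth service
    "payments": "buy",
    "search": "defer",
    "realtime_sync": "build",  # the actual differentiator
    "storage": "buy",
    "notifications": "defer",
}

counts = Counter(boundary.values())
if counts["build"] == len(boundary):
    print("warning: everything is 'build' — v1 scope is probably unrealistic")
else:
    print(f"build: {counts['build']}, buy: {counts['buy']}, defer: {counts['defer']}")
```

A healthy boundary usually has exactly one or two "build" entries: the things that are the product, with everything else bought or deferred.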


6) Risk tests

For each of these risks, define one concrete test that can be run this week — not "we'll look into it," but a specific action with a clear pass/fail outcome:

  • Highest technical risk (the assumption that, if wrong, changes the architecture)
  • Highest distribution risk (the assumption about how you will reach users)
  • Highest monetization risk (the assumption about willingness to pay)

Example: if your technical risk is "we can process documents in under 2 seconds," the test is not to design the full pipeline — it is to run three documents through a prototype and measure latency. You want a number, not a design.
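A risk test like the latency example above can be a few lines of harness code around a throwaway prototype. A sketch, where `process_document` is a stand-in for whatever pipeline you are testing (here it is simulated with a short sleep):

```python
import time

def process_document(doc: bytes) -> str:
    """Stand-in for the real pipeline prototype; replace with your own."""
    time.sleep(0.05)  # simulate processing work
    return doc.decode(errors="ignore")

def measure_latency(docs):
    """Run each document once and return per-document latency in seconds."""
    latencies = []
    for doc in docs:
        start = time.perf_counter()
        process_document(doc)
        latencies.append(time.perf_counter() - start)
    return latencies

docs = [b"sample one", b"sample two", b"sample three"]
latencies = measure_latency(docs)
print(f"worst case: {max(latencies):.2f}s, pass: {max(latencies) < 2.0}")
```

The output is a number with a pass/fail verdict — exactly the shape a risk test should have. If the prototype cannot be built in a day, that is itself a finding.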


7) Execution boundary

  • Explicit list of features out of scope for v1
  • Primary success metric for v1 (one number, not five)
  • List of decisions explicitly deferred until after v1 launch

The "deferred decisions" list is underrated. It is your contract with yourself about what you are not deciding until you have data. Without it, decisions keep getting made implicitly — by whoever is loudest, by the developer who built something "while they were in there," or by scope creep that nobody officially approved.


Copy/paste decision brief (fill in before starting implementation)

Target user:
Painful workflow (with evidence source):
Current substitute behavior:
v1 outcome (one sentence, measurable):

Reference products:
-
-
-

Reference GitHub repos (with stars + last updated):
-
-
-

Core technical bet (the assumption v1 depends on):
Highest technical risk:
Hardest build-vs-buy decision:

Out-of-scope for v1:
-
-
-

Next 14-day tests:
1.
2.
3.

GitHub search: high-signal query patterns

Use qualifiers to avoid low-quality repositories that inflate search results.

"collaborative editor" in:name,description stars:>500 pushed:>2025-01-01
"issue tracker" in:readme language:TypeScript stars:>300
"notion clone" in:name,description fork:true stars:>200

When evaluating results, check three things in order:

  1. Recency: when was the last commit? An unmaintained repo shows you how far you can get, not where the current state of the art is.
  2. Issue quality: are open issues detailed bug reports and thoughtful feature requests, or is it a graveyard of unanswered questions? High-quality issues indicate a project that has been seriously used.
  3. Dependency choices: what did the author use instead of building from scratch? Their build-vs-buy decisions are a free research output.
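The same qualifier strings work against the GitHub REST search API, so the queries above can be scripted for repeatable research. A minimal sketch that only builds the request URL (fetching it is left out; unauthenticated requests to this endpoint are heavily rate-limited):

```python
from urllib.parse import urlencode

def repo_search_url(query: str, sort: str = "stars", per_page: int = 10) -> str:
    """Build a GitHub REST search URL for the qualifier patterns above.

    The /search/repositories endpoint accepts the same qualifiers as the
    web search box (stars:, pushed:, in:, language:, fork:).
    """
    params = urlencode({"q": query, "sort": sort, "per_page": per_page})
    return f"https://api.github.com/search/repositories?{params}"

url = repo_search_url('"issue tracker" in:readme language:TypeScript stars:>300')
print(url)
```

Scripting the search means you can re-run it monthly and watch which references gain or lose maintenance momentum — a cheap ongoing signal.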


FAQ

What should a product research checklist for founders include?

Eight sections: (1) problem definition with an exact user profile and evidence from real conversations, (2) an existing behavior map covering current substitutes and where they break, (3) a solution hypothesis with one measurable v1 outcome, (4) reference intelligence from comparable products and open-source implementations, (5) a build-vs-buy boundary for each infrastructure category, (6) risk tests for technical, distribution, and monetization assumptions, (7) an execution boundary with an explicit out-of-scope list, and (8) a decision brief that locks the build thesis. Completing this before writing code is the difference between building evidence-backed and intuition-backed products.

How long does founder product research take?

A complete first pass takes 4–8 hours: 2 hours on user and problem validation (finding 5–10 real users to speak to), 1–2 hours on competitive landscape mapping, 1–2 hours on technical reference research, and 1 hour writing the build thesis. 90% of startup failures trace to decisions made in the first week. The 4–8 hours is the cheapest insurance available.

What is the difference between market research and product research?

Market research validates demand size and segment behavior. Product research defines the implementation path, technical constraints, and tradeoffs. Both are necessary, but product research is the one founders more often skip — particularly the section on reference implementations and architecture constraints that determines whether the idea is buildable with your team and timeline.

What is a build thesis and why does it matter?

A build thesis is a one-paragraph statement that locks in: who you're building for, what problem you're solving, how your approach differs from existing alternatives, the core technical bet the product depends on, and what explicitly stays out of v1. Writing it forces the team to agree on tradeoffs before any code is written. Decisions made in the build thesis take hours to change. Decisions made after engineering starts take weeks.

What does 30% unknown mean on the checklist?

If more than 30% of the checklist items have answers like 'not sure,' 'need to confirm,' or 'we'll figure this out later' — you're in discovery mode, not build mode. Discovery mode means your next investment should be research activities (user interviews, technical prototyping, market validation) rather than full product development. Starting to build before reducing unknowns below 30% is where most budget goes before a product finds traction.

How is this checklist different from a lean canvas?

A lean canvas captures business model hypotheses at a high level. This checklist goes deeper on implementation specifics: the exact technical patterns in reference implementations, the specific failure points of each competitor, the architecture constraints that apply to your use case, and the explicit out-of-scope list for v1. It's designed for teams moving from idea to first sprint, not for investor communication.
