Product Research Checklist for Founders
This seven-section checklist surfaces the research gaps founders avoid: user evidence, problem validation, competitor failure points, reference implementations, architecture constraints, and explicit v1 scope. Roughly nine in ten startups fail, and CB Insights' post-mortem analyses consistently rank misreading market need among the top causes. The checklist takes a few focused hours to complete: less time than most teams spend on their first sprint planning session, and far less than the cost of building the wrong thing.
This checklist is not a "nice to have." It is a filter against self-deception.
Use it when your team feels momentum but cannot explain, in plain language, why this specific product should exist now for this specific user. The checklist does not generate insight — it surfaces the gaps you have been avoiding.
How to use this checklist honestly
Complete it in one or two focused sittings. Use evidence only — no "we think users probably..." entries. If you are unsure about an item, mark it as unknown rather than forcing a confident-sounding answer.
Count your unknowns when you finish. If more than 30% of the items are unknown or guessed, you are in discovery mode. Build mode requires evidence.
This distinction matters because build mode and discovery mode require different resource allocations. Running build-mode practices (sprints, velocity tracking, feature planning) in discovery mode burns time and produces false confidence. Running discovery-mode practices (user interviews, assumption tests, reference analysis) in build mode creates delays without improving direction quality. Know which mode you are in.
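The 30% threshold is easy to compute mechanically once each item has a status. A minimal sketch; the item labels and statuses below are illustrative, not part of the checklist:

```python
# Tally checklist answers by status and decide which mode you are in.
# Statuses: "evidence" (backed by data), "guessed", or "unknown".
answers = {
    "exact user profile": "evidence",
    "painful workflow documented": "evidence",
    "frequency and urgency": "guessed",
    "consequence of inaction": "unknown",
    "current substitutes": "evidence",
    "why users would switch now": "unknown",
}

weak = sum(1 for s in answers.values() if s in ("guessed", "unknown"))
ratio = weak / len(answers)

# More than 30% guessed or unknown means discovery mode, not build mode.
mode = "discovery" if ratio > 0.30 else "build"
print(f"{weak}/{len(answers)} weak answers ({ratio:.0%}) -> {mode} mode")
```

The point of scoring it at all is that a single number is harder to argue with than a vague sense of "we mostly know this."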
1) Problem definition (no storytelling, no generalizations)
- Exact user profile: job title, context, constraints, and what they are responsible for
- Repeated painful workflow: documented across multiple users, not one memorable anecdote
- Frequency and urgency: how often does this happen? How costly is each occurrence?
- Consequence of inaction: what happens if the problem remains unsolved for six more months?
Common failure mode here: describing the problem in terms of your solution ("users need a better way to do X") rather than in terms of their current experience. A well-defined problem is stated entirely from the user's perspective, using their language.
2) Existing behavior map
- Current tools or processes users already rely on to handle this problem
- Why those substitutes remain acceptable today (cost, inertia, familiarity)
- Where the current substitutes break in real usage — specific scenarios, not general complaints
Important: if users have no current behavior around this problem, urgency is usually low. An absence of substitute behavior typically means users have adapted to living without a solution. That is a harder market to activate than one where users are already paying, with time or money, for an imperfect workaround.
The substitute map also tells you what you are really competing against. For most startup ideas, the primary competitor is not another startup — it is a spreadsheet, a Notion database, a manual workflow, or a combination of existing tools stitched together with Zapier.
3) Solution hypothesis
- One-sentence value proposition (who, what outcome, compared to what)
- One measurable v1 outcome that proves the product is working
- Clear, specific reason why users would switch now rather than six months from now
The "switch now" question is often skipped because the answer is uncomfortable. If you cannot identify a specific trigger — a new regulation, a recent painful incident, a workflow change — the adoption timeline is harder to predict. This does not mean you should not build, but it changes how you plan go-to-market.
4) Reference intelligence
- 3–5 reference products: what does each one optimize for? What user segment?
- 3–5 reference implementations: open-source GitHub repos, with stars, maintenance status, and language
- For each reference: what technical pattern did they choose, and what did they explicitly not build?
- What constraints appear in their public docs, issue trackers, and changelogs?
How to read a reference repo fast: do not start with the code. Start with the README (claimed scope), the dependency file (build-vs-buy choices), and the open issues (real problems). You learn more from the issue tracker of a mature open-source project than from reading its implementation.
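The dependency-file step lends itself to a quick script. A sketch that pulls the build-vs-buy choices out of a hypothetical, trimmed-down package.json (the package names are invented for illustration):

```python
import json

# A trimmed, hypothetical package.json from a reference repo.
package_json = """
{
  "name": "example-editor",
  "dependencies": {
    "yjs": "^13.6.0",
    "prosemirror-view": "^1.33.0",
    "express": "^4.19.0"
  },
  "devDependencies": {
    "typescript": "^5.4.0"
  }
}
"""

pkg = json.loads(package_json)

# Runtime dependencies are the author's build-vs-buy decisions:
# each entry is a problem they chose not to solve from scratch.
for name in sorted(pkg.get("dependencies", {})):
    print(name)
```

Ten minutes of this across your 3–5 reference repos gives you a comparison table of what experienced teams outsourced.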
5) Build vs buy boundary
For each category below, record one of: build, use an existing library/service, or defer:
- Auth
- Payments
- Search
- Real-time / sync
- Storage / file handling
- Email / notifications
Rule: if everything in this list is "build," your v1 scope is probably unrealistic. The teams that ship fastest and most reliably are the ones who are most opinionated about what they will outsource. Building auth from scratch when Auth0 or Supabase Auth exist is not a competitive advantage — it is six weeks of implementation and security review that could be spent on your actual differentiation.
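The rule above can even be a lint on your own plan. A sketch with made-up decisions; the category names and choices are illustrative:

```python
# Record one decision per category: "build", "buy" (library/service), or "defer".
decisions = {
    "auth": "buy",             # e.g. a hosted provider
    "payments": "buy",
    "search": "defer",
    "realtime_sync": "build",  # the actual differentiator
    "storage": "buy",
    "email": "defer",
}

builds = [k for k, v in decisions.items() if v == "build"]

# If most of the list is "build", the v1 scope is probably unrealistic.
if len(builds) > len(decisions) / 2:
    print("warning: scope check failed ->", builds)
else:
    print("building only:", builds)
```

A healthy v1 plan usually has exactly one "build" entry: the thing that is supposed to be your differentiation.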
6) Risk tests
For each of these risks, define one concrete test that can be run this week — not "we'll look into it," but a specific action with a clear pass/fail outcome:
- Highest technical risk (the assumption that, if wrong, changes the architecture)
- Highest distribution risk (the assumption about how you will reach users)
- Highest monetization risk (the assumption about willingness to pay)
Example: if your technical risk is "we can process documents in under 2 seconds," the test is not to design the full pipeline — it is to run three documents through a prototype and measure latency. You want a number, not a design.
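That kind of number is cheap to get. A sketch of the measurement loop, with a stub standing in for the real pipeline (replace `process_document` and the sample inputs with your own):

```python
import time

def process_document(doc: bytes) -> int:
    # Stub for the real prototype pipeline; swap in your actual code.
    return len(doc)

documents = [b"a" * 10_000, b"b" * 50_000, b"c" * 200_000]  # sample inputs

latencies = []
for doc in documents:
    start = time.perf_counter()
    process_document(doc)
    latencies.append(time.perf_counter() - start)

worst = max(latencies)
print(f"worst latency: {worst * 1000:.1f} ms -> {'pass' if worst < 2.0 else 'fail'}")
```

Measuring the worst case rather than the average matters here: a 2-second budget blown on one document in three is still a failed test.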
7) Execution boundary
- Explicit list of features out of scope for v1
- Primary success metric for v1 (one number, not five)
- List of decisions explicitly deferred until after v1 launch
The "deferred decisions" list is underrated. It is your contract with yourself about what you are not deciding until you have data. Without it, decisions keep getting made implicitly — by whoever is loudest, by the developer who built something "while they were in there," or by scope creep that nobody officially approved.
Copy/paste decision brief (fill in before starting implementation)
Target user:
Painful workflow (with evidence source):
Current substitute behavior:
v1 outcome (one sentence, measurable):
Reference products:
-
-
-
Reference GitHub repos (with stars + last updated):
-
-
-
Core technical bet (the assumption v1 depends on):
Highest technical risk:
Hardest build-vs-buy decision:
Out-of-scope for v1:
-
-
-
Next 14-day tests:
1.
2.
3.
GitHub search: high-signal query patterns
Use search qualifiers to filter out the low-quality repositories that would otherwise clutter your results.
"collaborative editor" in:name,description stars:>500 pushed:>2025-01-01
"issue tracker" in:readme language:TypeScript stars:>300
"notion clone" in:name,description fork:true stars:>200
When evaluating results, check three things in order:
- Recency: when was the last commit? An unmaintained repo shows you how far you can get, not where the current state of the art is.
- Issue quality: are open issues detailed bug reports and thoughtful feature requests, or is it a graveyard of unanswered questions? High-quality issues indicate a project that has been seriously used.
- Dependency choices: what did the author use instead of building from scratch? Their build-vs-buy decisions are a free research output.
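The recency and popularity checks can be applied programmatically to the JSON the GitHub search API returns. A sketch over hypothetical sample results; the field names `stargazers_count` and `pushed_at` match the real API (which emits timestamps with a trailing "Z"), but the repos and dates here are invented:

```python
from datetime import datetime, timezone

# Hypothetical search results; field names mirror the GitHub REST API.
repos = [
    {"full_name": "a/editor", "stargazers_count": 900,
     "pushed_at": "2025-06-01T12:00:00+00:00"},
    {"full_name": "b/editor", "stargazers_count": 4000,
     "pushed_at": "2022-03-10T08:00:00+00:00"},
]

# Pin "now" explicitly so the recency cut is reproducible.
now = datetime(2025, 7, 1, 12, 0, tzinfo=timezone.utc)

def days_since_push(repo):
    return (now - datetime.fromisoformat(repo["pushed_at"])).days

# Keep repos pushed in the last 180 days, most-starred first.
active = sorted(
    (r for r in repos if days_since_push(r) <= 180),
    key=lambda r: r["stargazers_count"], reverse=True,
)
for r in active:
    print(r["full_name"], r["stargazers_count"], days_since_push(r))
```

Note that the 4,000-star repo is dropped: star count tells you what was once popular, while push date tells you what is alive now, which is why recency comes first in the checklist above.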
Related Reading on HowWorks
- How to Validate a Startup Idea Before You Build — Validation methodology for testing demand before committing to a build
- The Non-Technical Founder's Product Research Guide — Comprehensive guide to market, user, and competitor research
- Perplexity vs NotebookLM for Product Research — Which AI research tool to use at each checklist stage
- Before You Vibe Code: Why Research Changes Everything — Applying checklist outputs to improve vibe coding prompts
Sources
- GitHub official repository search qualifiers (in, stars, pushed, language, fork): Searching for repositories
- YC on launch, user conversations, and rapid iteration: YC's Essential Startup Advice