
Why 8,000 Vibe Coding Projects Failed (And What the Survivors Did First)

An estimated 8,000+ vibe-coded startups require rebuild or rescue work, with cleanup costs of $50K–$500K per project. The builders who shipped production apps did one thing differently before opening any AI tool.

By HowWorks Team

Key takeaways

  • As of December 2025, 8,000+ vibe-coded startups require rebuild or rescue work, with per-project costs of $50K–$500K (Vexlint, 2025).
  • The failure mode is not bad AI output — it's starting to build before understanding the architecture. AI generates demo-ready code, not production-ready systems.
  • AI-generated technical debt compounds at 23% monthly, turning a $1,000 problem into a $30,000 crisis in six months (byteiota, 2025).
  • Andrej Karpathy — who coined "vibe coding" — hand-coded his most ambitious project because AI tools were "net unhelpful." The problem: his repo was too novel for LLMs to assist effectively.
  • The builders who ship production apps do the same thing first: they understand how similar products are built before writing a single prompt.

Most vibe coding projects fail not because AI writes bad code, but because builders start prompting before understanding the architecture. As of December 2025, 8,000+ vibe-coded startups require rebuild or rescue work costing $50K–$500K per project (Vexlint, 2025). The builders who ship production apps do one thing first: they research how similar products are architecturally built before writing a single prompt. If you do not yet have a strong set of reference products, start with Where to Find AI Projects in 2026.


The Failure Pattern Nobody Talks About

As of December 2025, an estimated 8,000+ vibe-coded startups require rebuild or rescue operations, with per-project cleanup costs ranging from $50,000 to $500,000 (Vexlint, 2025). The total industry cleanup cost is estimated between $400 million and $4 billion.

This is not a story about bad AI tools. Lovable, Cursor, and Bolt.new work exactly as advertised. The failure pattern is something else entirely, and it happens before the first prompt is written.

One Y Combinator Winter 2025 founder took the stage at Demo Day: "We built a $5M ARR SaaS platform in 6 months with 3 developers. 95% of our codebase is AI-generated." The room buzzed. Six months later: a $200,000 "rescue engineering" budget and a complete codebase rewrite (Vexlint, 2025).

This is not one company. It's the pattern behind most of those 8,000 failures.


Why AI Generates Demo-Ready Code, Not Production-Ready Systems

The core misunderstanding: AI tools are designed to generate working demonstrations. Working demonstrations and production systems are fundamentally different things.

| What a Demo Needs | What Production Needs |
| --- | --- |
| "Happy path" functionality | Hundreds of edge cases |
| Simple authentication | Multi-tenancy, session management |
| Basic database reads | Optimized queries, indexing, caching |
| Works for 10 users | Scales to 10,000 users |
| No error handling | Graceful failures everywhere |
| No compliance | GDPR, SOC 2, HIPAA (if applicable) |
| No security audit | OWASP Top 10 minimum |
| No backup strategy | Disaster recovery |

AI doesn't know which category you're building for. It generates code that compiles and runs. The demo looks identical to a production app. The problems only surface under real load, with real users, and real edge cases — typically after you've built 40,000 lines on a foundation that can't support them.

The numbers are specific: features built with over 60% AI assistance take 3.4x longer to modify later (byteiota, 2025). AI-generated technical debt compounds at 23% monthly — a $1,000 problem in month one becomes a $30,000 crisis in month six.

The Reddit community named this "vibe slopping": the stage where flow spills into chaos, bloated unrefactored code, duct-tape fixes, shortcuts that harden into debt. One founder documented it directly:

"I just want to say that I am giving up on creating anything anymore. I was trying to create my little project, but every time there are more and more errors and I am sick of it. I am working on it for about 3 months, I do not have any experience with coding and was doing everything through AI. But every time I want to change a little thing, I kill 4 days debugging other things that go south."

This is the reality nobody screenshots for X.


What the Builders Who Shipped Actually Did

The counter-evidence matters as much as the failures. One team rebuilt a 100,000-user product using Lovable and Claude Code in 7 days (r/vibecoding, 2025). Jason Lemkin — whose Replit experience became the most-documented vibe coding disaster in the industry — later built seven production applications serving 30,000+ monthly users, with one tool used 334,835 times in 30 days. He replaced an agency relationship that cost $200,000/year.

The difference between the failures and the successes is not the tool. It's what happened before the first prompt.

The builders who ship consistently do the same thing: they understand the architecture of what they're building before they ask AI to build it.

This sounds obvious. It almost never happens.


The Research Workflow That Prevents Most Failures

Step 1: Understand how similar products handle your hardest problem

Every product has one genuinely difficult engineering decision. For a multi-tenant SaaS: how is data isolation handled? For a real-time collaboration tool: how does sync work without conflicts? For a marketplace: how are payments split and disputed?

Before your first prompt, find out how existing solutions answered this question.

Search HowWorks for products similar to what you're building. HowWorks breaks down the tech stack and architectural decisions of real AI products — not marketing copy, but the actual implementation choices. Spend 30 minutes here before opening any AI coding tool. You'll understand what database schema works for your use case, what authentication pattern your category uses, and what third-party libraries already solve 80% of your core problem. For a broader tool-by-tool comparison of discovery workflows, use Best Tools for Discovering AI Projects.

This research costs 30 minutes. The lack of it costs weeks.

Step 2: Review 3–5 existing implementations on GitHub

Search GitHub for open-source projects solving the same problem. Don't read every line — read:

  • The README.md (what problem does it solve and how)
  • The data model (database schema files, or whatever defines data structure)
  • The main configuration file (what technology choices were made)
  • Open issues and closed PRs labeled "architecture" or "refactor" (what problems were discovered later)

That last part is the most valuable. Issue trackers are where the limitations of each architectural choice are documented in real time by people who hit them. You're reading field reports from builders who already walked the path you're about to walk.

Three repos, 20 minutes each, gives you more architectural context than hours of prompting into the unknown.
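The issue-tracker step can be partially automated. Here is a minimal Python sketch that builds a query URL for GitHub's public issue-search API; the repo name below is hypothetical, and pagination and authentication (needed past unauthenticated rate limits) are deliberately omitted:

```python
from urllib.parse import urlencode

def architecture_issue_search_url(repo: str, label: str = "architecture") -> str:
    """Build a GitHub search-API URL for issues carrying a given label.

    Fetching the returned URL with a plain GET yields JSON field reports
    on the problems other builders hit with this architecture.
    """
    query = f"repo:{repo} label:{label} is:issue"
    return "https://api.github.com/search/issues?" + urlencode({"q": query})

# Hypothetical repo name, for illustration only:
url = architecture_issue_search_url("someorg/saas-starter")
```

Swap in each of the 3-5 repos you shortlisted, and try the "refactor" label as well as "architecture".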

Step 3: Write a project rules document before your first prompt

The most common cause of "spaghetti code after day 3" is inconsistent conventions compounding across sessions. Each AI response builds on the previous one, and if the first response chose Tailwind while the second chose CSS modules, the codebase fragments. By day 3, you're fighting the tool instead of building.

Before your first prompt, write a one-page document that locks in:

  • Stack: exactly which framework, database, and auth library (don't let AI choose — it will choose differently each session)
  • Folder structure: where components, pages, utilities, and API routes live
  • Naming conventions: camelCase vs kebab-case, file naming patterns
  • Styling system: Tailwind OR CSS modules, never both
  • Data model draft: the core entities and their relationships, even rough

Paste this document at the start of every new AI session. This single practice eliminates the most common form of vibe coding drift.
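A minimal sketch of such a rules document follows. Every stack choice and name below is an illustrative assumption for a hypothetical B2B SaaS MVP, not a recommendation:

```markdown
# Project Rules (hypothetical B2B SaaS MVP)

## Stack (do not substitute alternatives)
- Framework: Next.js (App Router) with TypeScript
- Database: Postgres on Supabase
- Auth: Clerk (never hand-rolled auth)

## Folder structure
- app/        routes and pages
- components/ shared UI
- lib/        utilities and API clients

## Naming conventions
- Files: kebab-case. React components: PascalCase. Variables: camelCase.

## Styling
- Tailwind only. No CSS modules, no inline styles.

## Data model (draft)
- Organization has many Users and many Projects; Project has many Tasks.
```

One page is enough; the value comes from pasting it verbatim at the top of every session, not from its completeness.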

Step 4: Write a one-paragraph technical thesis

Before your first real build prompt, write this:

"I'm building [X]. It works by [mechanism]. The core technical challenge is [Z]. I want to use [technology A] because it handles [Z] well. I'm explicitly not building [Y], because [reason]."

The explicit exclusion at the end matters as much as the inclusion. AI systems fill unspecified space with defaults, and defaults are often wrong for your specific context. By defining what you're not building, you prevent AI from assuming you want features you don't need.

This paragraph — shared at the start of the first substantive prompt — produces dramatically different (and better) architectural output than starting with "build me an app that does X."

Step 5: Export to GitHub within the first 24 hours

This step is not about backup. It's about understanding.

Getting code out of a platform's hosted environment and into GitHub forces a specific kind of reckoning: you see the file structure, the dependencies, the environment variables, and what's real versus what's placeholder. Things that looked complete in the live preview often reveal missing implementations when you look at the actual code.

A 30-minute export and audit after your first day of building reveals:

  • Which parts of the architecture the AI chose without asking you
  • What external services are being called (and whether you've authorized them)
  • What's genuinely implemented versus what returns mock data
  • What the actual database schema looks like (often different from what you described)

This audit costs 30 minutes and consistently reveals things you didn't know about what you built.
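Part of this audit can be scripted. The following Python sketch walks an exported repo and flags placeholder markers and referenced environment variables; the patterns are illustrative heuristics, not a complete audit:

```python
import os
import re

# Illustrative heuristics: markers that often signal placeholder code,
# and JS-style environment-variable references.
PLACEHOLDER = re.compile(r"TODO|FIXME|mock|placeholder|lorem ipsum", re.IGNORECASE)
ENV_REF = re.compile(r"process\.env\.([A-Z0-9_]+)")

SKIP_DIRS = {".git", "node_modules", "dist", "build"}

def audit_repo(root: str) -> dict:
    """Walk an exported repo; flag placeholder lines and env vars in use."""
    findings = {"placeholders": [], "env_vars": set()}
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                if PLACEHOLDER.search(line):
                    findings["placeholders"].append(f"{path}:{lineno}")
                findings["env_vars"].update(ENV_REF.findall(line))
    return findings
```

Every hit in `placeholders` is a candidate for "looked complete in the preview, isn't implemented"; every name in `env_vars` is an external dependency you should be able to explain.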


The Mistakes That Are Always Expensive

Mixing styling systems

The most common early sign of a project in trouble: "styles are a mix of CSS and Tailwind." Once a codebase has inconsistent conventions, each new prompt makes the inconsistency worse. Fix it immediately or it compounds.

Skipping security on auth and payments

The Veracode 2025 GenAI Code Security Report tested 100+ LLMs across 80 coding tasks. The finding: 45% of AI-generated code introduced OWASP Top 10 security vulnerabilities, and cross-site scripting defenses failed 86% of the time. This is not hypothetical — Tea, a dating safety app, had 72,000 images exposed because an AI-generated Firebase instance was left with default open settings.

For auth and payment-handling code specifically: review every line. Use established libraries (Clerk, Auth.js, Stripe's official SDK). Never ask AI to build auth from scratch.

Building features before validating the data model

Database schema changes are expensive in vibe coding — AI often writes code that works with the current schema but doesn't abstract it properly. Changing the schema after building on top of it forces cascading fixes across the codebase. Sketch the data model before building features.
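A data-model sketch does not need to be a full schema. Here is a minimal sketch using Python dataclasses for a hypothetical marketplace; every entity and field name is an assumption for illustration:

```python
from dataclasses import dataclass

# Hypothetical marketplace entities, drafted before any feature prompt.
# Fixing these relationships now is cheap; changing them after features
# are built on top forces cascading rewrites.

@dataclass
class Seller:
    id: int
    display_name: str

@dataclass
class Listing:
    id: int
    seller_id: int      # each Listing belongs to exactly one Seller
    price_cents: int    # money as integer cents, never floats

@dataclass
class Order:
    id: int
    listing_id: int
    buyer_email: str
    platform_fee_cents: int  # the payment-split logic hangs off the Order
```

Even a rough draft like this, pasted into the rules document, gives the AI fixed entities to build against instead of letting it invent a new shape each session.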

Ignoring the infrastructure cost cliff

Lovable, Bolt.new, and Replit all have generous free tiers. They all have sharp cliffs. Replit's free plan becomes pay-as-you-go once credits are exhausted — this is how a $25/month plan became $607.70 in 3.5 days for one documented founder. When you add real users, Vercel's free tier disallows commercial use, Supabase pauses inactive databases, and Clerk's free tier has auth limits. Budget $150–$350/month for production infrastructure before you launch.


Even the Person Who Coined "Vibe Coding" Stopped Vibe Coding

Andrej Karpathy — former Director of AI at Tesla, OpenAI co-founder, the person who wrote the original "vibe coding" post in February 2025 — built his most ambitious project, nanochat (a from-scratch LLM training pipeline), almost entirely by hand.

When asked why, he said AI tools were "net unhelpful, possibly the repo is too far off the data distribution."

His reason is the most instructive part of this story: AI assists effectively when building in well-documented territory — patterns with extensive training data (React apps, CRUD backends, standard auth flows). It struggles with novel architecture that doesn't exist in its training distribution.

This doesn't invalidate vibe coding for standard product patterns. It identifies exactly where the limits are: the more novel and specialized your architecture, the less AI can help, and the more important it becomes to understand the territory yourself before prompting.

The research step is how you move your project from "too far off the data distribution" back into territory where AI can be genuinely useful.


The 2-Hour Investment That Pays for Itself

The pattern across every documented vibe coding success is the same:

  1. 30 minutes: Research how similar products are architecturally built on HowWorks and GitHub
  2. 30 minutes: Write the project rules document and technical thesis
  3. 60 minutes: Review 3 GitHub repos, read issue trackers for the architectural problems that surfaced

Total: 2 hours before the first prompt.

A Forrester study (August 2025) found that organizations investing in upfront technical discovery achieved a 415% ROI over three years and reduced development iterations by 25%. Building the wrong feature costs 5–10x more than discovering the mistake before development (Standish Group CHAOS Report).

The 2-hour research investment doesn't slow you down. It prevents the 4-day debugging sessions that happen when you build on the wrong foundation.

The builders who ship production apps aren't the fastest prompters. They're the ones who understood what they were building before they started.



FAQ

Why do most vibe coding projects fail in production?

AI tools generate demo-ready code, not production-ready systems. Demos need only "happy path" functionality. Production needs accessibility, multi-tenancy, rate limiting, security (OWASP Top 10), disaster recovery, compliance (GDPR, SOC 2), and hundreds of edge cases — none of which AI generates by default. The complexity wall hits at 60–70% completion on every project.

What should I research before starting to vibe code?

Three things: (1) How do similar products handle your hardest technical problem — authentication architecture, real-time sync, database schema design. (2) What tech stack did successful implementations choose and why — this prevents AI from making uninformed default choices. (3) What open-source libraries already solve your core challenge — there's no reason to vibe-code a payment system when Stripe exists. HowWorks shows you the architecture of real AI products so you can research these decisions before your first prompt.

How long should I research before starting to vibe code?

Two to four hours for a typical MVP. Spend 30 minutes on HowWorks to see how similar products are architecturally built. Spend 60 minutes reviewing 3–5 GitHub repos solving the same problem. Spend 30 minutes writing a one-paragraph technical thesis. That 2–4 hour investment prevents weeks of rework. A Forrester study (August 2025) found teams that invested in upfront technical discovery achieved 415% ROI over three years.

What is the 60–70% problem in vibe coding?

Every vibe coding tool reliably generates 60–70% of a working app. The remaining 30–40% contains 90% of the actual work: complex business logic, production-grade security, performance optimization, and edge cases. Without architectural understanding before you start, you hit this wall without knowing how to get past it — and each new prompt makes the underlying mess harder to fix.

What is a project rules document and why does it matter?

A project rules document is a one-page specification you write before your first prompt that locks in: folder structure, naming conventions, styling system (pick one — never mix CSS and Tailwind), component patterns, and database design principles. Pasting this rules doc into every subsequent prompt keeps AI-generated code consistent across sessions. Without it, each AI response builds on inconsistent foundations and the codebase fragments.

Did Andrej Karpathy — who coined vibe coding — actually use vibe coding?

For his most ambitious project (nanochat, a from-scratch LLM training and inference pipeline), Karpathy hand-coded almost everything. When asked why, he said AI tools were "net unhelpful" because his repo was "too far off the data distribution" — the code was too novel and specialized for LLMs to assist effectively. This doesn't invalidate vibe coding for standard product patterns, but it shows the limits: AI assists best when building in well-documented territory, not uncharted ground.
