Every week, another AI startup closes a round at a $50M pre-money with six months of revenue. Most of them will be dead in 18 months.
The investors writing those checks aren't dumb. They're using the wrong filters.
Here's what I keep seeing: angels and scouts are evaluating AI companies the same way they evaluated SaaS companies in 2015. ARR growth. Churn rate. NPS. These metrics matter, but they tell you almost nothing about whether an AI startup has a real moat or is just wrapping GPT-4 with a Stripe integration.
The playbook needs to change. Here's where most investors are getting it wrong.
Mistake 1: Treating "AI" as a Business Model
"We use AI to..." is the new "We're a platform play." It means nothing.
The question isn't whether a startup uses AI. It's whether AI is doing something in their product that competitors can't easily replicate. That distinction separates the investments worth making from the ones that look great in a pitch deck and collapse the moment OpenAI ships a new feature.
The most dangerous category right now: AI wrappers dressed up as vertical SaaS. They acquire customers fast because the demo is genuinely impressive. But retention craters six months in when users realize the outputs aren't consistently good enough to replace the workflow they were sold on replacing.
Before you write a check, ask for a cohort retention chart. Not a screenshot. The underlying data. If a founder won't share that, you know why.
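If they do share it, the math is simple enough to run yourself. Here's a minimal sketch in Python, assuming two hypothetical exports, a signups table (user_id, signup_month) and an activity table (user_id, active_month); the column names are illustrative, not something founders will hand you in exactly this shape:

```python
# Minimal cohort-retention sketch. Assumes two hypothetical exports:
#   signups:  user_id, signup_month (month of first signup, as a datetime)
#   activity: user_id, active_month (one row per month the user was active)
import pandas as pd

def cohort_retention(signups: pd.DataFrame, activity: pd.DataFrame) -> pd.DataFrame:
    merged = activity.merge(signups, on="user_id")
    merged["cohort"] = merged["signup_month"].dt.to_period("M")
    # Whole months elapsed between signup and the active month.
    merged["months_out"] = (
        (merged["active_month"].dt.year - merged["signup_month"].dt.year) * 12
        + (merged["active_month"].dt.month - merged["signup_month"].dt.month)
    )
    cohort_sizes = (
        signups.assign(cohort=signups["signup_month"].dt.to_period("M"))
        .groupby("cohort")["user_id"]
        .nunique()
    )
    active = (
        merged.groupby(["cohort", "months_out"])["user_id"]
        .nunique()
        .unstack(fill_value=0)
    )
    # Rows: signup cohorts. Columns: months since signup. Values: share still active.
    return active.divide(cohort_sizes, axis=0)
```

A healthy table flattens out after the first couple of months. A wrapper-shaped one keeps sliding toward zero.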
Mistake 2: Overweighting Virality, Underweighting Stickiness
AI products go viral; that much is well established. What's less discussed: viral growth and durable growth are almost completely uncorrelated in this category.
I've watched three different AI writing tools each hit 100,000 users in under 60 days. All three are now below 10,000 monthly actives. Viral adoption in AI often reflects novelty, not utility. The moment the novelty fades, so do the users.
The signal that actually predicts staying power: are people integrating this tool into their workflow so deeply that removing it would cause real pain? That's a different question than "is this cool?" and it requires different due diligence.
GitHub star velocity is one useful proxy for organic developer interest, but it only applies to technical products. For B2B AI tools, look at seat expansion over time within existing accounts. Are teams adding users, or are they shrinking back to one power user?
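Star velocity itself is cheap to measure because the data is public. A rough sketch, assuming GitHub's REST API, whose star+json media type exposes starred_at timestamps on the stargazers endpoint; fine for early-stage repos, though very large ones would need the GraphQL API or a cutoff:

```python
# Rough sketch: stars gained per week for a repo, via the public GitHub REST API.
# The "star+json" media type adds starred_at timestamps to the stargazers list.
# Unauthenticated requests are heavily rate-limited; pass a token for real use.
from collections import Counter
from datetime import datetime, timezone

import requests

def star_velocity(owner: str, repo: str, weeks: int = 8, token: str | None = None) -> dict[str, int]:
    headers = {"Accept": "application/vnd.github.star+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"

    starred_at = []
    page = 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/stargazers",
            headers=headers,
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        starred_at.extend(
            datetime.fromisoformat(item["starred_at"].replace("Z", "+00:00"))
            for item in batch
        )
        page += 1

    now = datetime.now(timezone.utc)
    per_week = Counter()
    for ts in starred_at:
        weeks_ago = (now - ts).days // 7
        if weeks_ago < weeks:
            per_week[weeks_ago] += 1
    # Key 0 is the current week, 1 is the week before, and so on.
    return {f"weeks_ago_{i}": per_week.get(i, 0) for i in range(weeks)}
```

The shape matters more than the absolute number. A repo adding stars at an accelerating rate with no press coverage is a stronger signal than a one-day spike from a launch post.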
Mistake 3: Ignoring the Infrastructure Layer
Every application boom creates infrastructure demand. Mobile apps did it for cloud storage; e-commerce did it for payment rails. AI is doing it again.
The best-performing early-stage investments in the AI wave aren't all chatbots and copilots. Some of them are data pipeline companies. Evaluation frameworks. Fine-tuning infrastructure. Developer tooling that AI teams use every day but no one writes press releases about.
These companies are harder to find because they don't get TechCrunch coverage. They get GitHub stars and Hacker News front pages and quiet word-of-mouth in Slack groups. If you're not reading Hacker News as a signal layer, you're missing a disproportionate share of the genuinely interesting technical bets.
For sourcing these deals at scale, some investors use data tools like Bright Data ([BRIGHTDATA_AFFILIATE_LINK]) to surface developer communities and open-source projects gaining traction before they become obvious. The information is public. The edge is in aggregating it before everyone else does.
Mistake 4: Underestimating the Open-Source Threat to Your Portfolio
If you've written checks into AI startups building proprietary models, you need to honestly assess what happens when an open-source alternative catches up.
This isn't hypothetical. It's been happening repeatedly since 2023, and the pace is accelerating. Meta's Llama releases, Mistral's models, and a half-dozen Chinese open-source releases have already erased the moats of multiple VC-backed companies.
The open-source to unicorn pattern cuts both ways. The same dynamic that creates a dominant open-source company can also destroy a proprietary one. The question to ask about any AI company with model-level IP: what happens to their business when a comparable open-source model ships?
If the answer is "we'd need to rebuild," that's not a moat. That's a window.
Mistake 5: Confusing Traction with Market Fit
AI products get traction faster than almost anything else in tech right now. Distribution via Reddit threads and Twitter demos can create 10,000 sign-ups overnight.
But Product Hunt launch metrics tell you almost nothing about product-market fit. They tell you about marketing fit, which is different and worth considerably less at the early stage.
The investors making the best AI bets aren't chasing the hottest Product Hunt launches. They're looking at what happened to the cohort that signed up 90 days ago. What percentage are still active? What are they doing? Are they paying?
Those questions sound basic. Most investors aren't asking them rigorously because the AI category moves fast and FOMO is real. The discipline to ask them anyway is exactly what separates good angel investing from expensive pattern-matching.
What Good AI Startup Evaluation Actually Looks Like
The investors building consistent track records here are doing a few things differently.
They're running quantitative signal detection earlier, before a company is on anyone's radar. That means finding breakout startups before they raise by tracking GitHub activity, community engagement, developer chatter, and open-source contribution velocity, not waiting for a Sequoia-backed seed announcement.
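Mechanically, that can be as simple as normalizing a handful of public signals and ranking on a weighted composite. An illustrative sketch only; the metric names, weights, and input format are hypothetical, not anyone's actual scoring model:

```python
# Illustrative momentum score: z-score a few public signals per project,
# combine with weights, rank. Metric names and weights are hypothetical.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class ProjectSignals:
    name: str
    star_velocity: float       # stars/week (e.g. from the earlier sketch)
    contributor_growth: float  # new contributors/week
    community_mentions: float  # Hacker News / forum mentions per week

WEIGHTS = {"star_velocity": 0.5, "contributor_growth": 0.3, "community_mentions": 0.2}

def rank_by_momentum(projects: list[ProjectSignals]) -> list[tuple[str, float]]:
    scores = {p.name: 0.0 for p in projects}
    for field, weight in WEIGHTS.items():
        values = [getattr(p, field) for p in projects]
        mu, sigma = mean(values), pstdev(values) or 1.0
        for p in projects:
            # Standardize each signal so no raw scale dominates the composite.
            scores[p.name] += weight * (getattr(p, field) - mu) / sigma
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

The exact weights matter less than the repeatability. A ranking you can rerun every week is what surfaces the projects that keep climbing rather than the ones that spiked once.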
They're asking for proof of retention, not just proof of growth.
They're separating AI-enabled businesses (companies that use AI as a feature) from AI-native businesses (companies where the AI capability is the primary value driver and creates real defensibility). The former can be worth backing. The latter is where the outlier returns live.
And they're being honest with themselves about which AI companies are genuinely building something hard versus building a good demo on top of someone else's model.
The bar for "impressive AI product" has dropped dramatically. The bar for "AI company that's still relevant in three years" has not.
Want to find AI startups before the crowd does? The beforeVC weekly briefing tracks GitHub momentum, Hacker News traction, and open-source signal data across hundreds of early-stage companies each week. Subscribers see which projects are gaining real developer adoption, not just press coverage. Subscribe to beforeVC.
Some links are affiliate links. You will not pay more.