Why So Many AI Startups Are Lying
Google and Accel just reviewed over 4,000 AI startups for their accelerator program, and they rejected 70% easily because they were just useless wrappers: thin layers on top of existing models with nothing proprietary underneath. Out of 4,000 applications, they ended up accepting only five into the actual program. In this video, I want to talk about why most AI companies are built on lies, and why you could maybe even build a better product yourself if you were just honest.
You might have seen videos from Anthony Syly or Moar covering individual cases of AI companies faking their technology recently. Those videos are great, but what I want to show you today is that those individual cases are symptoms of something much bigger: a systemic problem in the software industry, backed by data, regulatory filings, and peer-reviewed science. So if you want to learn the truth, keep watching.

This article from TechBuzz breaks down the numbers from Google and Accel's joint AI accelerator, called Atoms. The 70% rejection rate is bad enough on its own, but the economics behind these wrapper companies are even worse. Their gross margins sit around 23%, compared to 80% or higher for traditional SaaS, because API costs often eat 45 to 60% of revenue; AI models are expensive, and none of these companies actually own their models. The business model is structurally broken from day one. A Google Cloud VP said it directly: if you're just counting on the backend model to do all the work and you're simply white-labeling it, the industry doesn't have patience for that anymore.
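To make the margin math concrete, here is a minimal sketch of that arithmetic, using only the rough figures quoted above (the 45-60% API cost share and a hypothetical ~17% for other hosting and support costs are illustrative assumptions, not real company data):

```python
# Hypothetical unit-economics sketch for an AI "wrapper" startup,
# using the rough percentages quoted in the video.

def gross_margin(api_cost_share: float, other_cogs_share: float = 0.0) -> float:
    """Gross margin as a fraction of revenue: (revenue - COGS) / revenue."""
    return 1.0 - (api_cost_share + other_cogs_share)

# Wrapper: API bills eat 45-60% of revenue, plus assumed ~17% other COGS.
wrapper_worst = gross_margin(api_cost_share=0.60, other_cogs_share=0.17)
wrapper_best = gross_margin(api_cost_share=0.45, other_cogs_share=0.17)

# Traditional SaaS: COGS around 20% of revenue gives the classic ~80% margin.
saas = gross_margin(api_cost_share=0.0, other_cogs_share=0.20)

print(f"wrapper margin: {wrapper_worst:.0%}-{wrapper_best:.0%}, SaaS: {saas:.0%}")
# -> wrapper margin: 23%-38%, SaaS: 80%
```

The point of the sketch: every dollar of wrapper revenue forwards roughly half of itself straight to the model provider, so the 23% figure isn't mismanagement, it falls directly out of the cost structure.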
And we already saw this play out with Jasper AI, one of the poster children for this model of, in a way, just reselling AI models. Jasper reached $90 million in annual revenue and a $1.5 billion valuation as a writing tool built on GPT, and to be fair, it does work well. These are wrapper companies: companies that aren't necessarily lying, but have nothing real under the hood. Now, sometimes they are still lying, because they might put something on their front page claiming they've trained an AI model on your data, which is often not true; nothing is trained at all, they're just putting some of your own data into their prompts. But some companies go further and actively lie about their AI systems and what they do.

The SEC and FTC have started charging some of these founders, with criminal prosecutions in the worst cases, and this article from Stone lays out the full enforcement timeline. Some of it started all the way back in March 2024, when two investment advisory firms, Delphia and Global Predictions, settled SEC charges for claiming they used AI that didn't exist; the combined penalty was $400,000.
Then in January 2025, Presto Automation got hit: a publicly traded drive-through AI company, no less, that claimed 95% of orders needed no human help. The reality was that over 70% required human intervention, and at some locations it came close to 100%. The core voice AI technology they used wasn't even theirs; it was owned by a third party.
And then there's Nate Incorporated from April 2025, a shopping app that raised $42 million while its actual AI automation rate was effectively 0%. The entire product was powered by a call center of hundreds of workers in the Philippines. Now, I'm not saying you cannot make an AI system like this work, but you have to do more than this; you cannot just lie your way out of it. The founder was indicted on securities fraud and wire fraud, each count carrying a maximum of 20 years, so he's facing up to 40 years in prison. The SEC described it as old-school fraud using new-school buzzwords. And Nate is not unique in this pattern. You might have heard of Builder.ai, which raised $450 million from Microsoft, SoftBank, and others, and claimed an AI called Natasha could build apps autonomously. The company filed for bankruptcy in June 2025 with liabilities reaching $100 million against assets under $10 million.
Amazon had a similar situation with their Just Walk Out technology in cashierless stores, where roughly a thousand workers in India were manually reviewing 70% of transactions, and they quietly phased it out in April 2024. Again, I fully believe we will be able to create AI systems that do this kind of work, but companies aren't always upfront that they're still in a training process, and that really is the problem with the current AI bubble. If you're interested in learning how AI works beyond just prompting an LLM, that's what I made this channel for, and I also have a set of free open-source local AI projects linked in the description below that can walk you through building things from scratch. This way, I just hope to push the industry to a better standard, honestly.

So why does this keep happening? It's not just greed, people wanting to make a quick buck. Researchers at Aalto University in Finland ran experiments where people used ChatGPT to solve reasoning problems from law school admission tests, and every single group of users overestimated their performance after using AI, regardless of their actual ability level. Normally, with the Dunning-Kruger effect, the least skilled people are the most overconfident, but with AI that flips completely: sometimes people with higher AI literacy, the power users who use these tools the most, show the greatest overconfidence. And we are seeing this happen in the founder space: young people without much career experience who think they're geniuses because they can create an MVP with Claude Code. Professor Robin Welsch put it clearly: when it comes to AI, the Dunning-Kruger effect vanishes, and what's really surprising is that higher AI literacy brings more overconfidence.
Picture this: a non-technical founder sitting down with Claude Code, vibe coding a demo in three hours while the AI tells him it's brilliant the entire time. It will never push back. It will never say this won't scale, and it might not even say that they need a real engineer for this. After that experience, the founder is completely gassed up and genuinely believes they built something amazing. Some of these people really aren't cynical grifters; the tools themselves just made them delusional. 88% of Y Combinator's latest batch is AI-native, and a quarter of the W25 batch had codebases that were 95% AI-generated, which makes sense, because you can just throw together a quick demo and impress a couple of venture capitalists.
Now, I've shipped production software for years, and the gap between a demo and a real product is enormous, but the tools make that gap invisible to the person building the demo. The whole startup ecosystem is running on overconfidence, and the AI itself is the engine that will keep this going for years. So next time you see an AI product launch go viral with a really slick 60-second demo, ask yourself whether that's a real product with real engineering behind it, or a prompt wrapper that someone vibe coded and that won't be supported two years from now. Because based on the data we just looked at, the odds say it's probably a scammy product 70% of the time. If you want to work towards a world where we have really useful AI products that go beyond just prompting an LLM, check out my set of free open-source projects linked in the description below that teach you how to build real things from scratch. Now, subscribe if you want more AI engineering truths like this, and I'll see you next time.