
Why So Many AI Startups Are Lying


Transcript


0:00

Google and Accel just reviewed over 4,000 AI startups for their accelerator program, but they rejected 70% easily, because they were just useless wrappers: thin layers on top of existing models with nothing special or proprietary underneath. Out of 4,000 applications, they ended up accepting only five into the actual program. In this video, I want to talk to you about why most AI companies are lies, and why you could maybe even build a better product yourself if you were just honest.

0:27

You might have seen videos from Anthony Syly or Moar covering individual cases of AI companies faking their technology recently. Those videos are great, but what I want to show you today is that those individual cases are symptoms of something much bigger. This is a systemic problem in the software industry, backed by data, regulatory filings, and peer-reviewed science. So if you want to learn the truth, keep watching.

0:54

This article from TechBuzz breaks down the numbers from Google and Accel's joint AI accelerator, called Atoms. The 70% rejection rate is bad enough on its own, but the economics behind these kinds of wrapper companies are even worse. Their gross margins sit around 23%, compared to 80% or higher for traditional SaaS. API costs often eat 45 to 60% of revenue, because AI models are expensive and none of these companies actually own their models. So the business model is structurally broken from day one. A Google Cloud VP, Maui, said it directly: if you're just counting on the backend model to do all the work and you're simply white-labeling it, the industry doesn't have patience for that anymore.

1:46

And we already saw this play out with Jasper AI, which was one of the poster children for this model of, in a way, just reselling the AI models. Jasper reached $90 million in annual revenue and a $1.5 billion valuation as a writing tool on GPT. And to be fair, it does work well. Those are wrapper companies: companies that aren't necessarily lying, but that have nothing real under the hood. Sometimes they are still lying, because they might put something on their front page saying they've trained an AI model on your data, which is often not true. Nothing is trained at all; they're just putting some of your own data in their prompts. But some companies go further and actively lie about their AI systems and what they do. The SEC and FTC have started criminally charging some of those founders, and this article from Stone lays out the full enforcement timeline. Some of this started all the way back in March 2024, when two investment advisory firms, Delphia and Global Predictions, settled SEC charges for claiming they used AI that didn't exist; the combined penalty was $400,000.
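The "trained on your data" claim described above often amounts to nothing more than pasting the customer's data into a prompt. The sketch below is a hypothetical illustration of that pattern; the function name and prompt wording are invented for the example, and no real product's code is being shown.

```python
# Minimal sketch of the prompt-stuffing pattern described above.
# No model is trained on the user's data: the data is simply concatenated
# into a prompt that would be sent to a third-party model API.
# All names and strings here are hypothetical.

def build_prompt(user_documents: list[str], question: str) -> str:
    """Inject the customer's own data verbatim into the prompt context."""
    context = "\n\n".join(user_documents)
    return (
        "You are a helpful assistant for our product.\n"
        f"Customer data:\n{context}\n\n"
        f"Question: {question}"
    )

# A real wrapper would now POST this string to an external model API;
# nothing proprietary happens on the startup's own side.
prompt = build_prompt(["Q3 sales were up 12%."], "Summarize my sales.")
print(prompt)
```

The point is that no weights change anywhere in this flow, so describing it as "training a model on your data" is, at best, misleading marketing.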

2:55

Then in January 2025, Presto Automation got hit: a publicly traded drive-through AI company, no less, that claimed 95% of orders needed no human help. The reality was that over 70% required human intervention, and at some locations it came close to 100%. The core voice AI technology they used wasn't even theirs; it was owned by a third party.

3:19

And then there's Nate Incorporated, from April 2025: a shopping app that raised $42 million while its actual AI automation rate was effectively 0%. The entire product was powered by a call center of hundreds of workers in the Philippines. Now, I'm not saying you cannot make an AI system like this work, but you have to do more than this; you cannot just lie your way out of it. The founder was indicted on securities fraud and wire fraud, each carrying a maximum of 20 years, so he's looking at up to 40 years in prison. The SEC described it as an old-school fraud using new-school buzzwords. And Nate is not unique in this pattern. You might have heard of Builder.ai, which raised $450 million from Microsoft, SoftBank, and more, and which claimed an AI called Natasha could build apps autonomously. The company filed for bankruptcy in June 2025, with liabilities reaching $100 million against assets under $10 million.

4:14

Amazon had a similar situation with their Just Walk Out technology in cashierless stores, where roughly a thousand workers in India were manually reviewing 70% of transactions; Amazon quietly phased it out in April 2024. Again, I fully believe that we will be able to create AI systems that do this kind of work, but companies aren't always upfront that they're still in a training process, and this really is the problem with the current AI bubble as well. If you're interested in learning how AI works beyond just prompting an LLM, that's what I made this channel for. I also have a set of free open-source local AI projects linked in the description below that can walk you through building things from scratch. This way, I just hope to push the industry to a better standard, honestly.

4:55

So why does this keep happening? It's not just greed, with people wanting to make a quick buck. Researchers at Aalto University in Finland ran experiments where people used ChatGPT to solve school-admission reasoning problems, and every single group of users overestimated their performance after using AI, regardless of their actual ability level. Normally, with this kind of Dunning-Kruger effect, the least skilled people are the most overconfident, but with AI that flips completely: sometimes the people with higher AI literacy, the power users who use these tools the most, show the greatest overconfidence. And we are seeing this happening in the founder space. Young people without much career experience think they're geniuses because they can create an MVP with Claude Code. Professor Robin Welsch put it clearly: when it comes to AI, the Dunning-Kruger effect vanishes. What's really surprising is that higher AI literacy brings more overconfidence.

5:53

Picture this: a non-technical founder sitting down with Claude Code, vibe coding a demo in three hours while the AI tells him it's brilliant the entire time. It will never push back. It will never say "this won't scale," and it might not even say that they need a real engineer for this. After that experience, the founder is completely gassed up and genuinely believes they built something amazing. Some of these people really aren't cynical grifters; the tools themselves just made them delusional.

6:22

88% of Y Combinator's latest batch is AI-native, and a quarter of the W25 batch had codebases that were 95% AI-generated, which makes sense, because you can just throw together a quick demo and impress a couple of venture capitalists. Now, I've shipped production software for years, and the gap between a demo and a real product is enormous, but the tools make that gap invisible to the person building the demo. The whole startup ecosystem is running on overconfidence.

6:52

And the AI itself is the engine that's going to keep this going for years. So next time you see an AI product launch go viral with a really slick 60-second demo, ask yourself whether that's a real product with real engineering behind it, or a prompt wrapper that someone vibe coded and that will not be supported two years from now. Because based on the data we just looked at, the odds say it's probably a scammy product 70% of the time. If you want to work towards a world where we have really useful AI products that go beyond just prompting an LLM, check out my set of free open-source projects linked in the description below, which teach you how to build real things from scratch. Subscribe if you want more AI engineering truth like this, and I'll see you next time.

Interactive Summary

The video discusses a widespread problem in the AI startup industry, where many companies are either "useless wrappers" of existing models or actively deceptive about their technology. Google and Accel's accelerator rejected 70% of applicants due to a lack of proprietary technology and poor economics driven by high API costs. Examples like Jasper AI represent the "wrapper" model, while others, such as Presto Automation, Nate Incorporated, Builder.ai, and even Amazon's "Just Walk Out" technology, were found to rely heavily on human labor despite claiming AI automation, leading to legal action and bankruptcies. This issue stems not just from greed, but from an "AI Dunning-Kruger effect" in which users, particularly those with higher AI literacy, become overconfident and deluded by AI-generated demos, failing to see the significant gap between a demo and a functional product. The speaker advises skepticism towards slick AI product launches, as the data suggests a high percentage are superficial "prompt wrappers" rather than robust engineering solutions.
