AGI: Don't Know What It Is But Build It Anyway

Transcript

0:00

This year, Mr. Altman said that AGI was imminent. Now, he says that the term AGI itself is not useful. Every few months, someone in big tech declares that we are on the edge of artificial general intelligence. And every time that line gets blurred, a few more billions get raised. I have never talked about AGI on my channel, but lately it keeps showing up in the comments. So I thought, wouldn't it be symbolic to share my thoughts and research on the theme of AGI? Especially now that we're approaching the end of the year and I'm wrapping up my series on the business economics of AI. I will not be dissecting the technical feasibility of AGI, or when or whether it'll see the light of day; that is completely outside of my line of work, and I am not an AI researcher. Instead, I would like to dig into the business of pretending that we're close to AGI, and the economics of this narrative. Let's dive in.

0:59

It is considered that there are three dimensions, so to say, in the realm of artificial intelligence: ANI, AGI, and ASI. ANI stands for artificial narrow intelligence. The purpose of ANI is to perform a single specific task extremely well, like the way Claude generates text, Siri recognizes voice to a certain extent, or your phone recognizes your face. This is the only type of AI that currently exists in 2025. AGI stands for artificial general intelligence. AGI is a theoretical term, the operative word here being theoretical. AGI would supposedly match human-level intelligence in the ability to learn, reason, or transfer knowledge, and ideally solve novel problems without having to be retrained. The closest example of AGI that you might have seen is a human-resembling robot in the movies. And ASI, which stands for artificial superintelligence, is a purely hypothetical concept of AI that would surpass human intelligence across all domains. ASI would hypothetically be able to solve problems and learn far faster than any human alive, and have the ability to improve itself autonomously, which is essentially the technological singularity, if I may.

2:29

Let's come back to the topic of this conversation, which is AGI. The problem with AGI is that very few people can define what it actually means. Even prominent AI researchers and scientists cannot land on a single definition of this thing. But ironically, there are quite a few people in the general public, so to say, who would gladly tell you the definition of AGI, thinking that they're well aware of what it is and what it's supposed to do. Now, if you search for the definition of AGI, the word that you're going to come across is consciousness. Something like AI with consciousness, or conscious AI. So, some kind of magical AI entity with cognition. But the question that isn't really being answered is: what is consciousness? The paradox is that the people who do PhD-level research talk about AGI with extreme caution and without any specific timelines, exactly because the definition of AGI is extremely vague and hardly quantifiable. But those who do speak freely and confidently about AGI usually cannot properly explain what it means, or what consciousness means in the context of artificial intelligence. Nevertheless, the noise that they create is loud enough to induce anxiety in the part of the population that mistakes confidence for credibility.

4:04

You guys know that I work in product, and a regular discipline for me is competitive research. I have to look through competitor product pages, pricing tables, and LinkedIn announcements, summarize it all, put it in an email, and send it to myself and to my colleagues. And I automate a lot of things for my business outside of work using n8n, which basically connects APIs, apps, and models into one system that runs automations end to end. And if you automate as much as I do, your workflows get really expensive really fast, because n8n needs to be hosted and I need to pay for API credits for the multiple apps that I have nodes for. Every automation has a cost, and it's fine when it's just one workflow, but what if you need 50? The easiest and most efficient way to do this is to self-host in the cloud via a virtual private server, and Hostinger offers one of the best self-hosted n8n plans on the market. That plan can get you up and running in literally minutes. With Hostinger, you can host unlimited workflows and unlimited parallel executions, and you get plenty of ready-to-use n8n templates. All of that at a fraction of the cost and with fantastic performance, because you can choose what type of server you need depending on your needs. All you've got to do is go to Hostinger's website and choose your plan. KVM 2 is actually going to cover the vast majority of your needs: it gives you everything you need to get started with a small number of workflows, plus the ability to scale them indefinitely. They're also having a Black Friday sale right now, which gives you 60% off. But even without the discount, Hostinger's plan allows you to use n8n itself for free and pay only for the VPS hosting. This, first of all, automatically saves up to four times your automation expenses; and secondly, you get your own server. Go to the checkout page, pick the server location that is closest to you and the application you want to install, plug in your coupon code to get a discount, and you're good to go. Then, once you're in the app, simply click on Manage App and log into your n8n account directly. If you're a content creator, a founder, or a business owner, large or small, and you're serious about scaling automations, or you just want to experiment without burning your budget, this setup gives you the best quality for your money. You don't have to worry about security and privacy, because everything you do stays private. And don't forget: self-hosted n8n is often required for businesses that must follow privacy rules like GDPR or SOC 2. You can check it out at hostinger.com/tech10 and use the code tech10 at checkout to get an extra discount on any yearly VPS plan. It's a limited-time deal, so don't miss it. And huge thanks to Hostinger for sponsoring this part of the video.

6:42

Here's a chart of predictions from the most prominent names in AI research, who are way less optimistic, or less certain, about the timeline for AGI, and the predictions they offer are very different from the media narrative that makes you believe AGI is imminent. In the podcast episode that I referenced in the previous videos, in which Dwarkesh Patel interviewed Andrej Karpathy, Karpathy said that we are at least 10 years away from AGI, and he clearly stated that even that prediction is based on pure extrapolation and, really, speculation. I covered extrapolation bias at length in one of the previous episodes, and in my opinion, extrapolation bias is one of the biggest contributors to the current bubble, one that is present even among professional investors. Sam Altman's now-famous statement, made while giving a talk at Stanford, summarizes the delusion of chasing AGI whether it makes sense or not. He said that he doesn't care if they burn $500 million or $5 billion or $50 billion a year: they're making AGI, and it's totally worth it. So the way AGI is being described today is: LLMs, plus multimodality, plus some kind of magic, equals AGI. So we're definitely making AGI. We're burning billions, and it's totally worth it. Except we don't know what it is or what it really takes. This framing turns an idea that would typically be labeled an unsustainable delusion into a vision.

8:21

OpenAI projects burning $115 billion through 2029, despite posting a $13.5 billion loss against $4.3 billion in revenue during the first half of 2025. And the AGI narrative justifies this. This wouldn't fly for anybody else, but when Altman says it, it's fine. And in my view, there is another layer to this. When we say that we must control AI for safety, that argument becomes a justification for market concentration. Because how do you apply anti-monopoly rules when critics argue that safety is being used for monopoly, and proponents argue that concentration is necessary for safety? And the argument follows a structure: AGI is inevitable; therefore, whoever gets there first will capture extraordinary value; therefore, current losses don't matter. Now map this onto the problem that I will once again remind you of: we still cannot define what AGI is. I found the following snippet in a Fortune article: "Among the biggest factors in AGI's sudden fall from grace seems to have been the rollout of OpenAI's GPT-5 model in early August that landed with a thud." Yes, the release of GPT-5 had mixed reviews, to say the least, and I actually was among those who did not like it at all at first. And then there is research from the Yale Law & Policy Review on how an anti-monopoly approach to AI governance reveals concentrated control across the AI stack: the semiconductor market is dominated by Nvidia with 92% market share, and in cloud computing, AWS, Azure, and Google Cloud control 63% globally. What the AGI safety narrative is doing is helping to legitimize this concentration by framing it as protective rather than extractive. This purely hypothetical conversation about the imminence of AGI creates superficial urgency, and that urgency benefits fundraising. And that is why it is being lobbied so hard. For example, Ilya Sutskever's Safe Superintelligence raised $2 billion at a $12 billion valuation without a working product or revenue. And Thinking Machines Lab secured billions purely on AGI promises.

11:00

I want to make a quick pause here, because I've been noticing lately that people in the comments choose to hear what they would like to hear, and I want to clarify the lens through which I approach all of my content and all of my research. I analyze every single topic from a product and business perspective. My interest is purely academic, and I'm studying how narratives translate into business. The point I'm trying to make is that this would never fly in normal circumstances. This is not about Safe Superintelligence; whether I like it or not, I have immense respect for Mr. Sutskever and his work. What I'm highlighting here is that we're talking about billion-dollar investments justified by an idea that lacks a basic definition.

11:49

Another popular fear is how AGI will affect the labor market. In fact, I got inspired for this episode after seeing the comments under my own videos, where people were saying, "You're so cynical now, but wait till AGI comes for your job." So I was like, "You know what? Maybe I'm the one who's delusional here. Let me do the research." So I did some digging, found a bunch of AGI-related future-of-tech and future-of-jobs reports, and after analyzing them for a week, I stand by what I said. Let me show you. Goldman Sachs and the IMF: 300 million jobs globally could be affected, which is 9.1% of the global workforce. My next logical question is: is 300 million a lot compared to the major historical crises of the past? Yes, it is a lot. For example, the Great Depression: 15 million jobs lost, and that was only in the US. The unemployment rate was 25% at its peak; the duration was 4 years to the peak and then 10-plus years afterwards, and it really took World War II mobilization to recover from the Great Depression. The 2008 financial crisis: 27 million jobs lost globally, with peak unemployment of 15 million in the US; the recovery took 5 to 7 years, with wage stagnation. And lastly, COVID: a 33 million increase in global unemployment in one year, with unemployment peaking at 13% (6.5% globally). The recovery pattern was different: it was K-shaped. High earners gained jobs, but low earners lost them.
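To put those figures side by side, here is a minimal back-of-the-envelope sketch in Python. It uses only the headline numbers cited above; the implied global workforce is simply derived from the Goldman Sachs / IMF claim that 300 million jobs equal 9.1% of it, not from any additional data.

```python
# Back-of-the-envelope comparison of the job-loss figures cited in this video.

affected_jobs = 300e6      # Goldman Sachs / IMF estimate of jobs affected
workforce_share = 0.091    # stated as 9.1% of the global workforce

# Implied global workforce from the report's own ratio
global_workforce = affected_jobs / workforce_share
print(f"Implied global workforce: {global_workforce / 1e9:.1f} billion")  # ~3.3 billion

# Job losses in the historical crises mentioned above
historical_losses = {
    "Great Depression (US only)": 15e6,
    "2008 financial crisis (global)": 27e6,
    "COVID (global, one year)": 33e6,
}

for crisis, lost in historical_losses.items():
    print(f"{crisis}: {lost / 1e6:.0f}M jobs lost "
          f"({affected_jobs / lost:.0f}x fewer than the 300M estimate)")
```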

13:20

If we step away from the crisis analogy and consider normal annual churn in the US, for example, we're looking at 50 million job separations every single year in the US alone. But those are immediately replaced by new hires, whereas the prediction around AGI and its impact on employment is a net loss, not replacement. And the more I read, the more I kept questioning the likelihood of this prediction fulfilling itself without AGI, because the entire analysis is based on the assumption that AGI exists. But it doesn't. It doesn't exist. The report analyzes how generative AI could impact 300 million jobs, but the displacement it describes requires AGI-level functionality: performing general cognitive work across entire occupations, fully replacing human workers at scale, and operating without any sort of supervision. That is not ANI. Their research objective and methodology are solid; they analyzed 800-plus occupations for automation potential. But the predictions rest on the technological assumption that AGI exists, which, again, it doesn't. I'm not implying that this research is not substantive, or that it isn't worth your attention. All I'm trying to say is that it assumes AGI exists. And remember how I told you in the last video that a lot of people hear "almost works" and extrapolate that to "works"? As a result of this extrapolation, there is a lot of general anxiety about something that doesn't exist. Compared to the whole AGI narrative, the AI bubble, with its inflated expectations and deceptive marketing, pales into utter insignificance, because at least the technology exists. You can touch it, you can smell it, you can use it, you can see it.

15:22

People dramatically overestimate current AI job displacement. The claims about AI job displacement are highly questionable, and I made a series of videos dissecting layoff data. But the problem is that the AGI narrative keeps fueling that anxiety. There is a recent analysis from Brookings and the Yale Budget Lab (a great piece of work, by the way; I highly recommend that you guys read it) that found no detectable labor market disruption from GPT's release, which took place 33 months ago. There was no disruption at the economy-wide level. So what this essentially implies is that there is no documented labor market disruption from AI, but we're already worried about AGI. And this anxiety stems from highly visible tech layoffs that dominate the headlines, and from the fact that a lot of blame is being put on AI without noting that traditional economic forces, like inflation, interest rates, or restructuring, drive business decisions a lot more than AI does.

lot more than any AI restructuring. And

16:26

the last angle that I want to dissect

16:27

here is the financial angle because as I

16:30

was doing the research I made an

16:32

observation that the investment dynamics

16:34

around AGI is a bit different from the

16:36

ones associated with AI SAS and inflated

16:40

VC capital. OpenAI can burn $ 8.5

16:43

billion annually and raise 40 billion

16:47

more.

16:49

Every AI SAS company that you see in

16:51

this list has a high multiple. As a

16:55

refresher, a multiple is valuation

16:58

divided by revenue. For Perplexity, for

17:00

example, 18 billion divided by 300

17:03

million is 60 times revenue. This means

17:07

that investors are paying $60 for every

17:10

$1 of annual sales that Perplexity is

17:13

generating. All of AI SAS companies
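To make that arithmetic concrete, here is a minimal sketch of the multiple calculation, using only the Perplexity figures quoted above:

```python
# Revenue multiple = valuation / annual revenue.
# Figures are the Perplexity numbers quoted in this video.

valuation = 18e9        # $18 billion valuation
annual_revenue = 300e6  # $300 million in annual revenue

multiple = valuation / annual_revenue
print(f"Revenue multiple: {multiple:.0f}x")                    # 60x
print(f"Investors pay ${multiple:.0f} per $1 of annual sales")
```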

17:15

All AI SaaS companies operate on extraordinarily high multiples, but at least they have a product. Forget whether it brings ROI to the client or not; at least there is a product and, hopefully, a path to profitability. But for AGI, there is no product. The AGI narrative isn't about getting premium multiples; AI SaaS companies get those automatically. AGI language buys permission. Permission to lose money at extraordinary scale. The real premium here is tolerance for loss. Just so you can compare: OpenAI's revenue is $15 billion; annual losses are $8 billion, which is 53% of revenue; monthly burn is $78 million; the profitability target is 2029, four years away; 2025 funding raised is $40 billion; valuation is $300 billion.
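A quick sanity check on that loss share, using only the two figures just quoted:

```python
# Loss-to-revenue ratio for the OpenAI figures quoted above.
revenue = 15e9       # $15 billion in annual revenue
annual_loss = 8e9    # $8 billion in annual losses

print(f"Losses as a share of revenue: {annual_loss / revenue:.0%}")  # ~53%
```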

18:17

And these are the numbers for Cursor and Midjourney: both profitable, both with small teams. Midjourney is actually bootstrapped. This is what I mean by the business of pretending we're close to AGI. Not because it's imminent, but because it pays off.

18:37

So where does this leave us? In my humble opinion, AGI as it stands is somewhere between magical thinking and the Terminator. The irony is that the people closest to the actual cutting-edge research are the most cautious about promises and timelines, while the people furthest from it get intimidated by the narrative. Thirty-three months after the release of GPT, we still haven't seen the labor market disruption, but a new definition of what AGI means, one that changes quarterly, is tied to another round of investment. So, until someone can at least clearly define what AGI is supposed to mean, maybe it's time we all just collectively chill. I mean, the holidays are coming. Why don't we worry a little bit less about something that doesn't exist? We hope this was helpful. We'll see you next time. Bye.

Summary

The video argues that the concept of Artificial General Intelligence (AGI) is ill-defined and primarily serves as a narrative to justify massive investments, market concentration, and a tolerance for significant financial losses in the tech industry. While experts are cautious about AGI's feasibility and timelines, public anxiety is fueled by vague pronouncements. The speaker distinguishes AGI as a theoretical concept from existing narrow AI and hypothetical super AI, highlighting that predictions about AGI's impact on the labor market are based on a technology that does not yet exist. The narrative ultimately allows companies to burn billions without a clear product or path to profitability.
