Will AI Destroy the Economy? (According to Economists: No.) | AI Reality Check | Cal Newport

Transcript

0:00

There have been some pretty dark articles published recently about all the ways in which AI is about to destroy the worldwide economy. These include tales of mass unemployment, collapsing industries, and white collar workers trying to retrain for skilled crafts jobs like woodworking and plumbing. One of these pieces, a World War Z-style dispatch from the year 2028, which was put out by a small financial services firm named Satrini Research, spread so widely and scared so many people that it was blamed for a temporary dip in the S&P 500. All that's missing from these tales are the garbage can fires. So how seriously should we take these economic doomsday articles? If you've been following AI news recently, this is probably a question you've been asking, and today I want to try to find some measured answers. I'm Cal Newport, and this is the AI Reality Check.

1:00

All right, here's the thing: coverage of AI topics moves in waves. A certain sort of take or idea will become popular, everyone is writing and talking about it, and then, seemingly all at once, the attention moves on to a new topic as if the old one never existed. Back in 2023, for example, I spent a lot of time trying to explain to people that a static feed-forward large language model could not be considered conscious. I had fierce debates about this, and then at some point the whole conversation just moved on with no resolution. Late last year, to give another example, all the discussion was around superintelligence, and I found myself having to argue that you cannot infer intention, in an anthropomorphized manner, from the autoregressively produced outputs of a chatbot. But we've moved on from that recently as well. The topic du jour in AI coverage is the idea that we might not be ready for the mass economic displacement that AI is now poised to wreak. I want to quickly go over a few examples, among many, of the articles that have recently been making this point.

2:17

The first article was published online in February, as part of the March print issue of the Atlantic, and it was titled "America Isn't Ready for What AI Will Do for Jobs." If you read this piece, it opens on a somewhat long history of the Bureau of Labor Statistics, which is actually quite interesting, so you're thinking, okay, maybe this is going to be a thought-provoking exploration of job cycles and technological disruption. But nope, it gets a little darker. Let me read from the piece: "But like all statistical bodies, the BLS has its limits. It's excellent at revealing what has happened and only moderately useful at telling us what's about to. The data can't foresee recessions or pandemics or the arrival of a technology that might do to the workforce what an asteroid did to the dinosaurs. I'm referring, of course, to artificial intelligence." Yikes. Remember, the asteroid that killed the dinosaurs killed off most of life on Earth, so we've raised the stakes pretty high for what's about to happen with AI. The article goes on: "Tasks that once required skill, judgment and years of training are now being executed relentlessly and indifferently by software that learns as it goes." I don't know what it means for a language model to be relentless or indifferent, but I guess they are. Quick fact check: the language models driving most of the tools being talked about here don't learn as they go. They're static, trained in static batches. You could make a case that a terminal agent like Claude Code can update a markdown file that it uses as part of its prompting, but I don't think that's a great understanding of how this AI works; it treats the AI more like a human brain. Anyway, let's keep going: "But anyone subcontracting tasks to AI is clever enough to imagine what might come next: a day when augmentation crosses into automation and cognitive obsolescence compels them to seek work at a food truck, pet spa, or massage table, at least until the humanoid robots arrive."

4:22

Man, the word "might" does a lot of work in this essay. Earlier, AI might be like the asteroid that destroyed 99% of life on Earth. Here, AI might make us all work at pet spas until the robots come. But supposedly there's evidence for this. So what's the main argument for why we should be concerned? Let me read from the article again: "In May 2025, Dario Amodei, the CEO of the AI company Anthropic, said that AI could drive unemployment up to 10 to 20% in the next one to five years and 'wipe out half of all entry-level white collar jobs.' Jim Farley, the CEO of Ford, estimated that it would eliminate literally half of all white collar workers in a decade. Sam Altman, the CEO of OpenAI, revealed that 'my little group chat with my tech CEO friends' has a bet about the inevitable date when a billion-dollar company is staffed by just one person." Stepping out of the quote here: the Atlantic piece then goes on to mention layoffs that recently happened at many companies, including Meta, Amazon, UnitedHealth, etc. Back to the quote: "Taken together, these statements are extraordinary: the owners of capital warning workers that the ice beneath them is about to crack while continuing to stomp on it." We've got to hold on for a second here. I want to break this apart. This is the evidence for the claim, well, for two claims: either all life on Earth is going to be wiped out like the dinosaurs, or knowledge workers are going to have to become massage therapists. It's worth taking a closer look at exactly what this evidence is stating. I want to start with the layoff piece, because we covered this in last week's episode of the AI Reality Check, and I've covered it on my newsletter at calnewport.com as well.

6:06

For the most part, these layoffs have nothing to do with AI automating jobs or increasing efficiency to the point that you don't need more workers. Now, I haven't covered every one of the companies mentioned in this article, but I did cover the first two, Amazon and Meta. I've talked on background to multiple people within both of those companies, and they're both very clear: recent layoffs have nothing to do with AI making those workers unnecessary. They have everything to do with overhiring during the pandemic that's now being corrected. The bulk of the recent layoffs at Meta were in Reality Labs, which Zuckerberg had put a massive amount of money into over the last five years to try to build the metaverse, where we were all going to put on virtual reality helmets and float around space stations and play cards. Remember that? Yeah, it was a bad idea. So they're firing a lot of those people; they want to put that money elsewhere. So, right off the bat, this is vibe reporting 101. You take a scenario that's scary, then you take a fact that directionally seems aligned with that scenario but in reality is not, and you list it next to the scenario to try to ground the hypothetical in something that's happening now, which vastly increases its power to cause anxiety or fear. All right, but what about the other piece of this argument, the idea that AI CEOs are making dire predictions? If the owners of capital are warning us, then for sure we have to listen. But wait a second: we could flip this on its head.

7:37

Of course the CEOs of AI companies are making dire predictions about how powerful their tools are going to be, because, like the Wizard in the Wizard of Oz saying "don't look behind the curtain," they are terrified that people are going to spend more time asking about their financials. Asking about the fact that in order for the major AI companies to keep up with their debt and not face implosion over the next one to two years, they need to be the fastest growing companies in the history of companies. We're talking about hundreds and hundreds of billions of dollars of revenue that needs to be generated at some point in the next year or two, and it's unclear how they're going to do that beyond putting ads on ChatGPT and selling Claude Code subscriptions, which they're currently losing money on. So yes, of course they would rather be talking about dire predictions of some future, because guess what? That makes their technology the most important technology in the world and justifies investors continuing to put money into their companies. I'm not saying that's definitely what's happening, but I don't have to stretch to find an alternative explanation for why Dario Amodei or Sam Altman love to spout these sorts of big predictions.

8:55

It completely serves their purpose. And I want to say, look, this is a good writer. The rest of the article, after this opening, is good: it's well researched, he talks to a lot of people, you learn a lot about labor statistics, you hear from a lot of experts. But I just want to point out that the beginning of the article has this combination of vibe reporting and appeal to biased authority that, as we're going to see, is sort of a theme in these economic doomsday articles. All right, let me talk about another one. Our second example, from last week, I think, in the New York Times, was an op-ed that had a happy, feel-good title: "Mass Hysteria, Thousands of Jobs Lost: Just How Bad Is It Going to Get?" Oh jeez. Now, you don't choose the title if you write an op-ed, so let's put that aside and look at what the piece actually argues. It opens with the story of a college graduate having a hard time finding a job. Let me read this: "Just a few years ago, an entry-level role with a bank or an asset management firm might have been Mr. Griefenberger's for the asking. But the white collar job market has cooled sharply. While the unemployment rate remains relatively low, 4.3%, office jobs are suddenly a lot harder to come by for recent college graduates and experienced professionals alike." Now, this is an important, real story. Unemployment is pretty good, but there is a cooling, especially in entry-level hiring for knowledge work jobs, that has been persistent for multiple years now and isn't yet improving.

10:30

So why is this happening? Well, you can ask economists, and there are three reasons they'll give you, in descending order of importance. By far the number one reason explaining this trend is that white collar industries hired aggressively in 2020 to 2022, as pandemic-era digital growth was super strong and Great Resignation fears led companies to overcompensate and offer very attractive packages. The thinking was: get people in the door, because we're worried about losing our workforce. Now that the pandemic period is over, the economy is trying to correct for this, and a lot of employers are not firing people but going into what's called a "no hire, no fire" phase. They say, okay, we need to slow down here; we have too many people. Most of us don't want to do mass layoffs, because those people might be useful in the future, so let's do no hire, no fire. That's how you get this unusual situation where unemployment is actually pretty good but new job growth is low. The secondary cause mentioned by economists is higher interest rates. They started going up in 2022 to offset the inflation caused by COVID-era stimulus, and that slows down business expansion; that's economics 101. The third cause is global uncertainty, especially in the American context: the tariffs, what's happening in the educational world, and now global wars. It's an uncertain time, so there are a lot of businesses saying, let's just wait and see. We're not sounding the alarm bells yet, we don't have to cut back like we would in a strong recession, but let's be careful about hiring right now. All right, so let's return to that Times op-ed. I'm sure it says this is what explains the cooling, so, you know, it is what it is, and hopefully this will get better. Let's read what they actually say instead: "Many companies went on hiring sprees during the pandemic and the slowdown is perhaps just the inevitable adjustment." So far so good. Are we going to leave it there?

12:36

Nope. Here's what comes next: "But it is happening against the backdrop of the generative AI revolution and fears that vast numbers of knowledge workers will soon be evicted from their cubicles, replaced by machines." This is kind of a remarkable statement, because it's vibe reporting, but it's vibe reporting that transparently acknowledges it's vibe reporting. They're saying: look, there are good explanations for this, but this other thing happening now makes us afraid, so let's just pretend they're connected. Even though we have other explanations, it's directionally aligned with this other fear we have, so why don't we just put them together? And what is the main evidence cited in this op-ed for these fears? I'll quote here: "That the people selling the artificial intelligence are among those sounding the most ominous warnings about its potential fallout is notable. Some of them are prone to bombastic claims, but it's hard to see how spooking the public serves their interest. It might be wise to take their predictions at face value and assume that AI is indeed going to devour a lot of white collar jobs." Again, this is the appeal to biased authority. It is not hard to see why the CEOs of the companies selling this technology like stories that make it the most powerful, important technology of the last 200 years. Of course they want that story out there, because without it, the question becomes: how are you going to generate $300 billion in revenue in the next two years? They don't want that question, so they've been spouting these things for the last five years. I don't know where this idea comes from that we need to take at face value what the owners of the technologies say about what their technology is going to do. I don't think we should take them at face value at all. We should be highly suspicious of them. Anyway, this article goes on and looks at a lot of things. It's not a bad article, but again we have this sort of vibe reporting: mention stuff that's happening that's directionally aligned with the fear, then mention the fear, then justify the fear by saying, look, the CEOs of these companies are the ones sounding the alarm; why would they sound the alarm if it wasn't real? All right, let me get to the third article, which is the one that spooked the stock market. This will be the final example I point out before I get to some stronger responses. This article was called "The 2028 Global Intelligence Crisis: A Thought Exercise in Financial History from the Future." It was published on Substack by a small financial services firm called Satrini Research. Now, right off the bat, if you read this Substack piece, the authors are clear, they say, that this is a thought experiment and not a prediction.

15:10

And you'll hear that the authors have been interviewed a lot in the aftermath of this article going viral and spooking people, and they're really leaning into this: it was just a thought experiment, I was writing fan fiction, why are people taking this so seriously? But if you read that same introduction, they then go on to say, hopefully reading this leaves you more prepared for potential left-tail risk as AI makes the economy increasingly weird. So clearly they're saying this is a possibility. It is a prediction: we're not saying it will definitely happen, but it's on the table and we need to be worried about it. So I don't think they get off the hook by saying, "Hey, we said this is not a prediction." But you did say, pay attention to this so you're prepared for what might come. I'm not a linguist, but that kind of sounds like the definition of a prediction. So what does this article actually say? Well, it's written in the style of World War Z. That is, it's written like a dispatch, like the financial reports these firms write, but from the year 2028, reflecting on the dire current circumstances and how the economy got there. It's told in this fake future-retelling style, which is a very powerful style. Let me read a quote from early in this fake dispatch from the future.

16:24

"Two years. That's all it took to get from contained and sector specific to an economy that no longer resembles the one any of us grew up in. This quarter's macro memo is our attempt to reconstruct the sequence, a post-mortem on the pre-crisis economy." It then goes on to lay out a scenario that starts right about now: there are layoffs happening, but we were happy about productivity booms, and the stock market goes up until about the fall of 2026, and then, as automation continues, these cyclically reinforcing negative feedback loops emerge. The economy crashes the next year, in November 2027, and, you know, we're back to garbage can fires and knowledge workers having to eat their dogs. This was a very effective article. It spread really far for two reasons. One, that World War Z style of storytelling, where you tell a story as if it already happened and you're looking back on it, is very emotionally engaging, and it presses fear buttons much more than straightforward analysis or prognostication. And two, there's a vibe reporting trick here that we've seen in the other two examples: they peg their fake scenario to something real that's happening right now. It began, they say, with layoffs in the tech sector in 2026, and layoffs are happening right now. Of course, as I've covered in this episode, in the last episode, and ad nauseam, the layoffs in the tech industry started a few years ago, in response to overhiring during the pandemic, but whatever. When you peg a story that ends somewhere fantastical and terrible to something that's happening right now, your mind puts it on a reality trajectory, and that makes it much more believable. So that went viral. People said it had to do with a minor, temporary dip in the S&P 500; other commentators have said there are a lot of factors behind that dip, but it got a lot of news, especially in the financial world. So, how seriously should we take these? I mean, I've talked about some of the bad reporting techniques in these articles, but that doesn't mean, a priori, that they're also wrong. So how seriously should we take these scenarios of economic doom? Well, I've got to say, they're very anxiety-provoking.

18:37

I don't like dystopian fiction. I read World War Z and really didn't like it. I don't like watching zombie movies. Dystopian, collapse-of-society tales and movies press a lot of buttons for me. So here I am, someone who knows a lot about AI and is a critic of hype, and even for me, these were distressing. I can only imagine how much distress these types of articles are causing for the millions of people reading them in major publications. So, how seriously should we take them? Let me tell you what made me feel better, and hopefully it'll make you feel a little better as well. In the wake of the Satrini article, because it spread through the financial world and might have had an actual impact on the stock market, professional economists and global macro strategy analysts, people whose goal is not engagement or impacting the conversation but making money based on accurate understandings of what's likely to happen in the economy, came out of the woodwork and said: "Hey, enough. These are ghost stories, and we have no reason to believe they're true." And hearing from these economists, I have to say, made me feel a little bit better. I'm going to give you some quotes, and hopefully they'll make you feel a little better as well. The New York Times, to their credit, published an article called "Bleak Research Report Stokes AI Debate on Wall Street," written by a financial reporter, and they actually quoted some serious economists who were not that impressed by the Satrini article. Let me read you two quotes. Here's one.

20:18

"The argument leans heavily on narrative and emotion rather than hard evidence," Jim Reid, a strategist at Deutsche Bank, said of the report. That doesn't mean it will ultimately be wrong, but, he added, "the vibes to substance ratio is undeniably high." Here's another: "On Tuesday, Christopher Waller, a governor on the Fed board, noted that he had not read the Satrini report 'deeply,' but pushed back on the broader idea that AI will lead to a rapid rise in unemployment as technology displaces white collar workers. 'I don't think that is going to happen,' Mr. Waller said, adding that he is not a 'doom and gloomer like that report was.'" I think my favorite response, however, came from Citadel Securities. A global macro strategy analyst for Citadel Securities named Frank Fle put out a report in the aftermath of the Satrini article with a sort of sarcastic title: "The 2026 Global Intelligence Crisis." The Satrini report was "The 2028 Global Intelligence Crisis," as in, hey, everything has gone wrong over these two years; so he called his "The 2026 Global Intelligence Crisis," where the intelligence crisis he's referring to is people believing these types of stories. He does a sort of faux opening describing our current situation, and that faux opening sticks in the dagger with the following: "Despite the macroeconomic community struggling to forecast two-month forward payroll growth with any reliable accuracy, the forward path of labor destruction can apparently be inferred with significant certainty from a hypothetical scenario posted on Substack." He's making fun of the people in the community who were taking that Substack post with any seriousness.

22:03

He then proceeds to explain, in a semi-accessible way, the types of things that global macro financial analysts look at, especially when it comes to technological disruption, and why they don't see signs of some major calamity coming and aren't particularly worried about some sort of collapse of the economy. I'm going to read a few of these quotes to give you a sense of what's covered. Number one: "We would posit that if AI represents imminent displacement risk, the real-time population data would show an inflection upwards in the daily use of AI for work. The data seems unexpectedly stable and presents little evidence of any imminent displacement." So again, there's lots of discussion about this, but they're looking at the data out of the St. Louis Fed, and they say there's no rapid uptake in AI use in the way the news media would have you believe.

22:54

Second quote: "The current debate around artificial intelligence conflates the recursive potential of the technology with expectations of recursive economic deployment. Technological diffusion has historically followed an S-curve. Early adoption is slow and expensive. Growth accelerates as costs fall and complementary infrastructure develops. Eventually, saturation sets in and the marginal adopter is less productive or less profitable, which causes growth to decelerate." I'm seeing this argument from a lot of professional analysts of technological disruption. They say: man, we always make the exact same mistake.
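Since this S-curve point anchors the whole argument, here is a minimal sketch of the dynamic in Python. It is illustrative only: the logistic curve and every number in it are made-up assumptions for demonstration, not anything taken from the Citadel Securities note.

```python
import math

def adoption(t, ceiling=1.0, midpoint=5.0, rate=1.0):
    """Fraction of the market that has adopted by year t (a logistic S-curve)."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# The classic mistake: measure growth during the steep middle of the curve
# and assume that rate of growth continues forever.
g = adoption(5.5) / adoption(4.5)  # year-over-year growth near the midpoint

for t in range(4, 11):
    actual = adoption(t)
    naive = adoption(4) * g ** (t - 4)  # constant-growth forecast from year 4
    print(f"year {t}: actual {actual:.2f}, naive extrapolation {naive:.2f}")
```

By year 10 the naive extrapolation has blown past 100% adoption while the logistic curve has flattened near its ceiling, which is exactly the "this is never what happens" point the analysts are making.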

23:28

You have slow and then you get a period

23:30

of speed up." And we say that speed up

23:32

will go on forever and let's keep

23:34

extrapolating out that curve. And if we

23:36

keep extrapolating out that curve,

23:38

collapse or singularity or whatever the

23:39

thing is that you want to say is going

23:41

to happen. But this is never what

23:42

happens. It scurves. It goes up and then

23:44

it begins other sort of factors

23:46

contained to growth. It goes slower than

23:47

you think. There's time to adjust. They

23:49

say have no reason to believe. Why would

23:50

this be different? All right, let me

23:52

read another quote here. Displacing

23:55

white collar work would require orders

23:56

of magnitude more compute intensity than

23:58

the current level utilization. If

24:00

automation expands rapidly, demand for

24:02

compute definitionally rises, pushing up

24:04

its marginal cost. If the marginal cost

24:06

of compute rises above the marginal cost

24:08

of human labor for certain tasks,

24:09

substitution will not occur, creating a

24:10

natural economic boundary. We don't have

24:13

nearly enough compute for these

24:14

scenarios. And as they're saying, as you

24:16

try to build out compute for more and

24:17

more use, it's going to uh drive up the

24:20

cost because we're going to have a

24:21

mismatch between demand and actual

24:23

supply. As the cost comes up, it drives

24:25

back down the demand. We are already

24:27

actually seeing this with the one sector

24:30

where after 5 years of work, we're

24:31

finally seeing tools. It's the best case

24:33

scenario for AI. We're finally seeing

24:35

tools that are really catching the

24:37

interest of a sector, and that's in

24:38

computer programming. All of the

24:40

evidence I can find right now seems to

24:42

imply that these companies are selling

24:45

the compute for these agents for

24:48

computer programming at a significant

24:49

loss because they're trying to fight for

24:51

market share when they have to actually

24:53

go because again they are have huge

24:56

debt. When these companies actually have

24:58

to try to make more profit off of this

25:01

and these costs get adjusted to the

25:03

reality of how much expense they're

25:05

incurring at the AI companies, you're

25:07

going to see a real moderation,

25:09

probably, in how much we use it for

25:10

programming. Is it really worth

25:12

$2,000 a month for an individual?

25:14

$5,000 a month? I mean, it's going

25:17

to be interesting, and that's just for

25:19

this one first use case. So, I think

25:21

that's interesting to see as well. They

25:23

also say, quote, "Moreover, there's

25:25

little evidence of AI disruption in

25:26

labor market data as of today. In fact,

25:28

the forward-looking components of our

25:29

labor market tracking have improved

25:32

recently." There's a huge mismatch between

25:34

what the financial analysts are seeing

25:36

and what the oped writers are

25:38

hypothesizing. The evidence of the

25:40

financial analysts is their decades of

25:43

experience of trying to understand the

25:44

labor market and technological

25:45

disruption. The evidence of the

25:47

op-ed writers?

25:49

Amazon laid off people, and Dario Amodei

25:51

says his technology is the most powerful

25:53

thing ever. All right, let me read the

25:54

conclusion from this Citadel Securities

25:56

piece. For AI to produce a sustained

25:59

negative demand shock, the economy must

26:02

see a material acceleration in

26:03

adoption, experience near total labor

26:06

substitution, no fiscal response,

26:08

negligible investment absorption, and

26:10

unconstrained scaling of compute. It is

26:12

also worth recalling that over the past

26:14

century, successive waves of

26:15

technological change have not produced

26:17

runaway exponential growth, nor have

26:20

they rendered labor obsolete. Instead,

26:22

they have been just sufficient to keep

26:25

long-term trend growth in advanced

26:27

economies near 2%. Today's secular

26:29

forces of aging population, climate

26:31

change, and deglobalization exert

26:33

downward pressure on potential growth

26:35

and productivity. Perhaps AI is just

26:37

enough to offset these headwinds. So

26:40

they're saying, and I think this is

26:41

actually pretty optimistic, they're

26:43

saying the reality of major

26:45

disruptive technological changes

26:47

historically,

26:49

has been just enough to offset all sorts

26:52

of negative trends and keep at least

26:54

some growth happening in the economy.
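The S-curve dynamic the economists keep invoking can be made concrete with a toy logistic model (my own sketch with arbitrary parameters, not anything from the report): early on, a logistic curve is nearly indistinguishable from an exponential, which is exactly why extrapolating early growth data misleads.

```python
import math

def exponential(t, x0=0.01, r=1.0):
    # Naive extrapolation: the early growth rate compounds forever.
    return x0 * math.exp(r * t)

def logistic(t, x0=0.01, r=1.0, k=1.0):
    # S-curve: same early growth rate, but a capacity k caps it.
    return k / (1 + (k / x0 - 1) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable.
print(exponential(1), logistic(1))    # both ~0.027
# Later, the exponential blows past any bound while the
# logistic flattens out as constraining factors bind.
print(exponential(10), logistic(10))  # ~220 vs. ~0.996
```

Under a model like this, the scary part of the forecast comes entirely from assuming the exponential branch continues.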

26:56

They say, in effect, here's what

26:58

they're predicting from AI. They're

26:59

like, "We have lots of negative growth

27:00

forces that we're going to have to

27:02

encounter in the next couple of decades

27:03

that are going to pull down the economy.

27:05

Hopefully, we'll get enough out of AI to

27:07

sort of stave those off and still get at

27:08

least some economic growth." That is a

27:10

very different vision. Like AI is the

27:12

latest technological innovation to stave

27:15

off degrowth

27:17

is a completely different argument than

27:20

saying this is the one

27:22

technology in history where the S-curve

27:23

doesn't happen and it's going to go

27:24

exponentially, and it's going to crash

27:26

the economy. So, they kind of end on a

27:29

positive note there. All right. So,

27:32

let's step back. First of all, I

27:34

want to say the economists make me feel

27:36

better.

27:38

It doesn't necessarily mean, of course,

27:39

they're right; maybe all these

27:41

factors will come

27:42

together, right, to destroy the economy,

27:44

but I do like the fact the economists

27:45

aren't that worried about

27:48

it. I think we see this reflected in the

27:49

stock market where we're seeing, you

27:51

know, again, if serious investors really

27:53

believe that the economy was going to

27:55

crash in the fall of 2027 and that we're

27:59

going to have a massive decline

28:01

starting in October of 2026,

28:05

the COVID dip from 2020 is going to look

28:08

like a minor correction, right? Like it

28:10

would be substantial, but the reactions

28:12

are small. Actually, they're

28:15

pessimistic on

28:16

the frontier AI companies because they

28:18

think they're spending too much money.

28:19

So they don't buy the AI tech CEO

28:21

stories that their technology is going

28:23

to automate all work which would make

28:24

them the most valuable companies in the

28:25

history of companies. The stock market

28:27

doesn't buy it. We see more moderate

28:29

bets against specific sectors where they

28:31

think they're going to have practical

28:33

disruption, like the SaaS sector, and

28:36

even those are modest. And we're seeing

28:37

actually a much bigger reaction from

28:39

things like the cost of oil going up to

28:41

$100 a barrel. That caused way bigger

28:43

impacts on the stock market than these

28:47

scenarios of the last two months about

28:48

the economy collapsing. So to me that

28:50

makes me feel better. But it doesn't

28:52

mean there's not going to be an impact.

28:54

And they could be wrong or maybe the

28:56

impact is going to be smaller.
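The "natural economic boundary" from the Citadel Securities quote earlier can also be sketched as a toy model (entirely my own illustration; the numbers are made up): automating more tasks bids up compute's marginal cost until it meets the marginal cost of labor, and substitution stops there.

```python
def substitution_boundary(labor_cost, base_compute, price_slope):
    # Percent of tasks automated before compute's rising marginal
    # cost overtakes a flat marginal cost of labor. Integer math
    # throughout; costs are scaled by 100 to avoid float drift.
    for pct in range(101):
        marginal_compute = base_compute * 100 + price_slope * pct
        if marginal_compute >= labor_cost * 100:
            return pct  # the boundary: compute now costs more than labor
    return 100  # compute stays cheap; near-total substitution

# Flat compute supply curve: automation runs to completion.
print(substitution_boundary(labor_cost=50, base_compute=10, price_slope=20))   # 100
# Steep supply curve (compute scarce): automation stalls at 20%.
print(substitution_boundary(labor_cost=50, base_compute=10, price_slope=200))  # 20
```

The point of the sketch is only that how far automation runs depends on the slope of the compute supply curve, not just on what the models can do.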

28:58

But let's put that on the table

29:00

now, right? Let's say, okay, maybe the

29:01

economy is not going to collapse. So I

29:03

don't have to learn how to light a

29:05

garbage can fire and become a pet

29:06

masseuse. But maybe

29:08

it's going to be a hard run.

29:10

There's going to be economic disruption

29:11

and it's going to be more so than

29:13

almost any other technology in the past

29:14

and it is going to be disruptive in some

29:16

way. Let's say that's the case,

29:19

and it could be true, and I

29:20

hope not, but it could be true.

29:23

AI doomsday reporting isn't helping.

29:28

What I'm seeing is that these sorts of AI

29:30

doomsday articles, where you try to one-up

29:32

each other with how prescient you are

29:36

about how bad things are going to get

29:39

prevent us from responding

29:43

in effective ways. If we instead treat

29:45

AI like a normal technology and we

29:47

respond with our normal tools when we

29:49

see it doing things that we would

29:50

normally say is a problem we

29:52

need to correct, I think we can make

29:54

much better progress in containing,

29:57

shaping, and directing the AI revolution

29:59

than by falling back on these

30:00

massive dystopian World War Z tales.

30:04

Falling back on doomsday writing is

30:06

letting the AI companies off the hook.

30:09

Look at what I covered last week.

30:12

Jack Dorsey

30:14

negligently goes off and makes these

30:17

huge acquisitions sort of in an

30:19

impulsive fashion throughout the

30:20

pandemic of these crypto and blockchain

30:22

companies. They don't go well. So he

30:24

then impulsively fires half of his

30:28

workforce, because he can't do anything

30:29

in

30:31

measured increments. Everything he does

30:33

is drastic, right?

30:35

But because he comes out and says, "This

30:38

is just the first sign of the AI

30:41

economic apocalypse. I for one am

30:43

learning how to make trash can fires

30:45

because I'm going to not only be a pet

30:46

masseuse, but I have to maybe eat the

30:47

dogs because there'll be no money left

30:49

in the world." Because he leaned into the

30:51

doomsday reporting, what was the

30:53

coverage of the Block layoffs?

30:56

Reporters would rather treat it as

30:58

evidence of the economic doomsday

30:59

narrative. That's what they focused on.

31:01

In fact, in

31:03

one of the articles I talked about,

31:05

the Block layoffs are cited as evidence

31:07

of what's coming. The right way to treat

31:09

that was like, yeah, sure, and like I'm

31:12

sure you have a perpetual motion machine

31:14

and you can fly. Back to the point, what

31:17

happened to those crypto investments?

31:18

Why did you have to lay off that many

31:20

people? Who did you lay off? Wait a

31:22

second, most of these jobs have nothing

31:24

to do with AI-automatable roles. We

31:26

would hold his feet to the fire. Like,

31:27

you're being negligent and impulsive.

31:29

But instead, we're like, "Oh, yeah.

31:31

Thank you, Cassandra, for helping us

31:32

understand what's coming."

31:34

The same thing has happened with these

31:35

AI CEOs. They find that the

31:38

more dramatic and fearful a thing they

31:40

say, the more the attention turns away

31:42

from what's actually happening.

31:44

Journalists used to severely distrust

31:47

billionaire tech CEOs, but not when it

31:50

comes to this issue. We look to them as

31:52

if they're guiding us to understand

31:54

what's happening with this technology.

31:56

These CEOs

31:58

have been saying crazy stuff for the

32:00

last four years. They keep changing what

32:02

it is, en masse.

32:04

They were all talking about super

32:06

intelligence

32:08

and the machines getting out of control

32:11

and like an alien mind. They're all

32:13

talking about that and they all shifted

32:15

at some point to something else, and

32:17

now they've shifted to the economy

32:19

collapsing. They just say

32:21

stuff, and it's entirely in their favor,

32:25

because, again, if your technology automates

32:28

all jobs, well, where am I going to put

32:30

my money? The only place left to put my

32:32

money is in like the three companies

32:33

that are going to run all the jobs. So,

32:35

I think doomsday reporting prevents us

32:37

from actually responding. Prevents us

32:39

from saying, when Dario says 50% of

32:41

white collar jobs are going to be gone.

32:42

I'm like, uh-huh. You need to

32:46

make $300 billion somehow in the

32:48

next four years to

32:51

get anywhere near

32:52

profitability. How are you doing

32:55

that? Right? That's the question we

32:57

could be asking.

32:59

So I think that we don't need to

33:02

ignore AI or its impact on jobs. But we

33:04

need to cover it like a normal

33:05

technology so we can deploy the type of

33:07

normal things we would do when we see

33:08

disruption or changes, or when we see it used

33:10

as cover for malfeasance or

33:11

impulsiveness or whatever is going on.

33:14

And so I hope we move past this. By the time

33:16

this comes out, we'll probably have moved

33:17

on to, you know, something else. I

33:20

don't know what, AI birds are going

33:22

to spy on us, whatever it is. And I hope

33:24

so because I think this AI doomsday

33:25

reporting not only is stressing people

33:27

like me out, but it's preventing us from

33:29

actually responding to real impacts of this

33:32

technology in a way that could really

33:33

matter. All right, enough of my sermon.

33:36

Hopefully some of this makes you

33:37

feel a little bit better this

33:38

week. We'll be back probably next

33:40

week. I'm doing this on Thursdays, maybe

33:42

not every Thursday, but if there's

33:43

something to talk about, I'll be back

33:44

next Thursday. Remember, take AI

33:47

seriously, but not everything that's

33:49

written about it. See you next time.

33:51

Hey, if you like this video, I think

33:52

you'll really like this one as well.

Interactive Summary

The video discusses the recent surge in articles predicting AI-driven economic doom, focusing on mass unemployment and industry collapse. The speaker, Cal Newport, argues that much of this reporting relies on "vibe reporting" and appeals to biased authority rather than solid evidence. He analyzes three prominent articles: one from The Atlantic, an op-ed in The New York Times, and a Substack piece from Catrini Research that reportedly caused a dip in the S&P 500. Newport debunks the claims by explaining that recent layoffs are primarily due to pandemic-era overhiring, not AI automation. He also suggests that AI CEOs' dire predictions serve their own financial interests, aiming to secure continued investment. The speaker contrasts these sensationalist narratives with analyses from professional economists and financial analysts who, while acknowledging AI's potential, do not foresee an imminent economic collapse. These experts point to stable real-time data, the S-curve adoption of new technologies, and the current unfeasibility of widespread AI-driven labor substitution due to compute costs. Newport concludes that while AI will undoubtedly cause disruption, the doomsday scenarios are exaggerated and prevent effective, measured responses to the technology's real impacts. He urges viewers to treat AI as a normal technology and apply standard analytical tools rather than succumbing to dystopian narratives.
