Don't Fall For the Stock Market Hype. The $7,000 Raise AI Is Giving You (That Nobody Mentions)

Transcript

0:00

A fictional recession just crashed the

0:02

stock market. And the real story is what

0:03

nobody's going to write about tomorrow.

0:05

So here's what happened. A Substack post

0:07

written as speculative fiction from 2028

0:09

wiped out over a hundred billion dollars

0:12

in market cap on Monday. IBM cratered 13%,

0:15

its worst day in 25 years just because

0:18

Anthropic published a blog post about

0:20

COBOL. But that wasn't the Substack

0:23

post. The Substack post was by

0:25

investment research firm Citrini, which

0:28

wrote a highly regarded piece about how

0:31

bad things could get if labor

0:33

replacement driven by AI really takes

0:35

hold due to massive AI capability gains

0:38

over the next couple of years. Look,

0:40

you've seen me cover the individual

0:42

sell-offs. This video is different

0:44

because the doom narrative that is

0:46

driving all of them (there are seven I'm

0:48

counting now) hasn't been about the

0:50

technology, it's been about the

0:52

economics. And so I'm going to take a

0:54

little bit of time in this video to lay

0:56

out the economics of the bear case and

0:59

the bull case, the doomer case and the

1:02

boomer case for AI. And then I'm going

1:05

to tell you the thing that neither of

1:08

them is talking about. So we'll get to

1:11

that in a second. So first the doom meme

1:14

and why it hits so hard. So let me

1:16

steelman this properly because it does

1:18

deserve it. Citrini Research and Olab

1:20

Shaw wrote a piece called "The 2028

1:22

Global Intelligence Crisis," and they

1:25

framed it as a fictional macro memo from

1:28

2 years in the future. The scenario is

1:30

simple. AI capabilities keep

1:32

compounding. Companies rationally cut

1:35

white collar headcount to protect

1:36

margins. Displaced workers spend less.

1:39

The consumption hit cascades through

1:41

mortgages and credit and ultimately

1:43

contaminates the entire financial

1:45

system. And so in their fictional

1:47

scenario, the S&P drops 38% from its

1:50

2026 highs. Unemployment hits 10.2% and

1:54

things are very, very bad. I'll cut it

1:55

short here. The mechanism they describe

1:58

is consistent. It's well constructed and

2:00

it's pretty easy to follow even if you

2:02

don't have a degree in economics. White

2:04

collar workers make up about half of US

2:07

employment and drive three-quarters of

2:09

discretionary consumer spending, stuff

2:11

you get to spend on because you like it.

2:13

The top 20% of earners account for about

2:16

65% of consumer spending. These are the

2:19

people who buy second homes, who buy

2:21

cars, who buy vacations, who buy private

2:23

school tuition. If AI structurally

2:26

impairs their earning power, the

2:28

consumption math gets really ugly for

2:30

the whole economy quickly. A 2% decline

2:34

in white collar employment could easily

2:36

translate into double that, like a 4%

2:38

hit on discretionary spending. And so,

2:41

Citrini describes what they call an

2:43

intelligence displacement spiral. I

2:45

would call it a self-reinforcing feedback loop.

2:46

Basically, what they see is AI gets

2:48

better, companies cut payroll, savings

2:50

go into more AI, AI gets better, and so

2:52

on. There is no natural brake on the

2:54

spiral. And the financial contagion

2:56

chain is plausible, too. It's certainly

2:58

familiar to anyone who lived through

3:00

2008, as I did. Essentially, once you

3:04

start to get into the financial

3:07

institutions that own AI vulnerable

3:10

companies, there is a risk of contagion

3:13

because of what they're linked to in the

3:14

global credit system. And so in this

3:16

case, the mechanism is private credit

3:18

because private credit grew from a

3:20

trillion dollars in 2015 to over $2.5

3:23

trillion by 2026, as private credit

3:26

over that decade or so picked up and

3:29

rolled up a bunch of SaaS companies at

3:32

valuations that assumed perpetual

3:34

revenue growth. I was part of some of

3:36

those exits. I've seen them at work.

3:38

Those assumptions are dying in real time

3:40

and that's been part of the sell-off

3:42

story. I think the most haunting line in

3:44

the piece is this one. In 2008, the

3:47

loans were bad on day one. In 2028, the

3:50

loans were good on day one. The world

3:52

just changed after the loans were

3:54

written. I get why this went viral. I

3:57

get why it was shared everywhere. I get

3:58

why the markets convulsed. The scenario

4:01

is vivid. It's simple. It's well-argued.

4:03

It's emotionally resonant. It's

4:05

plausible. It activates the same dread

4:07

that made The Big Short a cultural

4:09

touchstone: the feeling that the system

4:11

is fragile, that nobody in charge sees

4:14

what's coming, and that the smart money

4:16

is already headed for the exits. But

4:18

the thing about doom narratives is that

4:20

they are dramatically more viral, not

4:22

due to their nuanced analysis, but due

4:25

to one of the most robust findings in

4:28

human psychology. I'm referring to

4:30

negativity bias. We humans, all of us,

4:33

are evolutionarily wired to pay

4:36

disproportionate attention to threats. A

4:38

headline that says AI can crash the

4:40

economy generates way more engagement,

4:44

like 10 to 50 times more, than a headline

4:46

that says AI-driven deflation could raise

4:48

real purchasing power for the median

4:50

household. You're asleep already. Both

4:52

headlines describe potential futures.

4:55

One of them is going to get millions of

4:57

views and the other one won't. And

4:58

that's what I want to talk about because

5:00

the asymmetry right now is distorting

5:02

the information environment that people

5:05

are using, that we are all using, to make

5:08

career and investment decisions. The

5:11

doom narrative is not wrong because it

5:13

went viral. The fact that it went viral

5:16

while the countervailing evidence barely

5:18

registers should make you suspicious

5:21

about whether you're getting the full

5:22

picture because you're not. I'm going to

5:24

give you two different bull-case

5:26

arguments. And yes, they both use

5:29

economics, but we're going to simplify

5:30

it so it actually works. Alex Imas is an

5:33

economist at the University of Chicago

5:35

Booth School of Business. He read the

5:38

same intuitive arguments about AI-driven

5:40

demand collapse that Citrini formalized

5:42

as fiction and he actually went out and

5:44

built a model to figure out what would

5:46

happen. I'm going to drop a lot of the

5:48

modeling stuff and cut straight to the

5:49

chase. When you model the actual

5:53

conditions that Citrini describes, where

5:56

labor share in the economy dramatically

5:58

declines very quickly, where there is no

6:02

consumption that comes back after prices

6:06

decline, where wealthy capital owners who

6:09

own data centers don't end up spending

6:11

more, where interest rates hit the floor

6:14

and can't drop further, and there's

6:16

no policy response from the government,

6:19

then yeah, you kind of get what Citrini came

6:21

up with. But what Alex argues is that if

6:24

you have all of those in a row, the idea

6:28

that you have no policy response is kind

6:31

of laughable. And as someone who lived

6:34

through 2008 with a divided government

6:36

where everyone was fighting tooth and

6:38

nail, when things get bad enough, if

6:41

things get as bad as the Citrini memo

6:43

argues, yes, government does end up

6:46

responding. And the reason why is

6:47

entirely selfish. They want votes. They

6:49

realize they're in trouble if they don't

6:51

get votes. And so they figure out a way

6:53

to get it done. But there's a lot of

6:55

other reasons to suspect some of the

6:57

other assumptions that the Citrini memo

6:59

just kind of handwaves aside. And I

7:02

think Alex is making a good point. I'll

7:03

give you a couple of examples. One of

7:05

the things that the Citrini memo doesn't

7:07

take fully on board is the idea that we

7:11

might consume more

7:14

if we have lower prices in the economy.

7:18

That's actually pretty reasonable.

7:20

Everywhere you look, you see evidence of

7:22

Jevons paradox. So that's the policy

7:24

piece. The other things that the Citrini

7:27

memo talks about, like labor replacement,

7:30

like prices falling and people not

7:31

buying more stuff, at least not at

7:33

scale. Those are things that might be

7:36

individually plausible, but it's sort of

7:38

difficult to add them all together and

7:40

assume that all of them are correct at

7:42

once to make sort of a perfect doom

7:44

scenario. One example is the consumption

7:47

side. Let's imagine for a moment that

7:50

prices are going down in the economy

7:52

because AI is making things cheaper. If

7:55

that's the case, then people are going

7:57

to probably buy more stuff. Now, they

8:00

may not buy 10 TVs because the price of

8:02

TVs goes down, but net-net in the economy

8:05

overall, if people end up having more

8:08

purchasing power, they're going to end

8:10

up buying more stuff. And this is not

8:12

just about TVs and shoes and

8:14

sort of the hard goods that we produce

8:17

as a society. This is about services

8:20

too. And the services case is actually

8:22

worth calling out because I think it is

8:26

more pointed than a lot of the bears want to

8:28

acknowledge. And I have to give credit

8:30

to Michael Bloke who wrote a direct

8:32

response to the Catrini piece when he

8:34

saw it and made this argument really

8:36

really coherently. I think it deserves a

8:38

lot more attention than it's getting,

8:39

but again, it's not a doomer narrative,

8:41

so it doesn't tend to get the attention.

8:42

What he argued is that most of consumer

8:46

spending is in services. Think mortgage

8:49

services, like help with buying a house. Think

8:51

tax preparation, think insurance

8:53

brokerage, think travel booking. You get

8:56

the idea. These are all tasks that AI

9:00

agents plausibly make dramatically

9:04

easier today because they're

9:06

fundamentally a function of complexity.

9:09

And so if you're sitting there and

9:11

you're like, where can AI agents impact

9:13

the economy? It's really plausible that

9:15

AI agents will impact the economy first

9:19

by making a bunch of those services

9:22

cheaper. I would argue that's more

9:24

plausible than say replacing all the

9:26

COBOL in the ATMs because

9:29

that's something that the stock market

9:30

was worried about this week. But

9:32

services are really easy to replace.

9:35

They're not legacy code. They don't

9:37

touch core-of-the-financial-system

9:39

kind of stuff. It's like, yeah, is an

9:42

agent going to be good at travel

9:43

booking? Maybe it will be, and if it is,

9:44

you'll use it. If that's true, AI agents

9:47

could plausibly compress costs by 40 to

9:51

70%. And Michael did these numbers. I'm

9:53

not just making them up. And plausibly

9:56

return $4,000 to $7,000

9:59

in annual gain per median household

10:02

tax-free. No legislation. Basically, we

10:06

all get more money in our pockets in the

10:07

US because AI agents are compressing the

10:11

margins of all these services. And the

10:13

point is simple. Is that money just

10:15

going to evaporate? No. People are going

10:17

to spend it. Let's say it goes into home

10:20

mortgages. Let's say you pay less for

10:22

buying a house in commission because the

10:25

services cost comes down. Well, now

10:27

you're going to put that money into

10:28

furniture, into renovations, into moving

10:30

costs. It doesn't disappear. It goes

10:32

back into the economy. There's one more

10:34

piece in Michael's scenario that I think

10:36

is worth calling out. He identifies the

10:38

ongoing high rate of business formation

10:41

in the US as significant. The Census

10:44

Bureau reported 532,000

10:47

new business applications in January of

10:50

2026 alone, up over 7% from December.

10:53

That continues a long-term trend that's

10:55

been accelerating since 2021. And

10:57

Michael reasonably supposes it's going

10:59

to continue. And what he's suggesting is

11:01

that essentially oneperson businesses

11:04

have more leverage in the economy than

11:07

they've ever had before because now they

11:09

have the skills, they have the tools,

11:10

they have radically lower overhead, and

11:12

they have more reach all thanks to AI.

11:14

And this is not just theoretical. I know

11:17

personally people in my life who have

11:20

gone from not coding at all to I'm

11:23

setting up a business and I'm making

11:25

real money from it. And they feel so

11:27

motivated they're starting a formal

11:30

business out of it. This is not just one

11:32

story that I'm cherry-picking. I know

11:34

more than I can count on two hands.

11:36

There's a lot of folks out there who are

11:39

finding that the conditions the AI

11:41

revolution is bringing are ideal for

11:44

people who want to strike out on their

11:46

own. Of course, it's easy to hear the

11:48

bears responding. This time it's

11:50

different because if AI is a general

11:52

intelligence, it's going to replace

11:54

everything at once. So, where will these

11:56

entrepreneurs go? Sure, that's a real

11:58

argument, and I take it seriously, but

12:00

this brings me to the part of the video

12:02

that nobody else is talking about.

12:04

Whether or not AI displaces labor the

12:08

way the bears describe depends on the

12:12

speed of labor displacement outrunning

12:15

the speed of technical adaptation. And I

12:18

think that is an incredibly

12:20

underrepresented part of this

12:21

conversation. And if that made no sense

12:22

to you, don't worry. We're going to get

12:24

into it. Fundamentally, both doomer and

12:27

boomer narratives assume that AI

12:30

capabilities translate incredibly

12:32

rapidly into economic impact. The doom

12:35

narrative assumes that everyone's

12:36

getting fired. The boom narrative

12:39

assumes really rapid technical

12:40

adaptation across society. Both assume

12:43

the conversion rate from "AI can

12:45

technically do this" to "the economy has

12:48

reorganized around AI" is incredibly

12:51

fast. It's not. And the reason it isn't

12:55

is the most underrepresented part of

12:57

this conversation because capabilities

13:00

are not the same as deployment.

13:03

Deployment is not the same as adoption.

13:05

Adoption is not the same as deep

13:07

integration. Deep integration on its own

13:09

is still not the same as economic

13:11

impact. Social inertia is a massive

13:15

force in the economy and it is

13:17

dramatically underrepresented in every

13:20

AI analysis I've read, bear or bull.

13:23

This is what I mean concretely. I'm

13:24

going to name kinds of inertia because I

13:27

don't want to just throw away a line and

13:29

say it's all about inertia. No, we're

13:30

going to get specific. Regulatory

13:32

inertia. Financial services firms that

13:35

want to use AI for compliance work need

13:37

approval from regulators who haven't

13:39

finished writing the rules. Health care

13:41

organizations need to navigate HIPAA and

13:44

FDA clearance and institutional review

13:46

boards. Government agencies run

13:48

procurement cycles measured in years,

13:50

not quarters. The COBOL systems that

13:52

Anthropic is talking about modernizing

13:55

run an estimated 95% of ATM transactions

13:59

in the US. Hundreds of billions of lines

14:02

of COBOL run in production today,

14:05

powering critical systems across finance

14:07

and airlines and the government. Nobody

14:10

is migrating those to a new codebase

14:13

just because a startup published a blog

14:15

post, even if that startup is Anthropic.

14:17

IBM's own CEO, Arvind Krishna, said last

14:20

year that their mainframe AI coding

14:22

assistant has gotten wide adoption

14:25

because it understands existing COBOL

14:28

code bases and decides what to modernize

14:31

across those code bases. It's not

14:33

replacing them, it's understanding them.

14:36

The distinction matters. IBM stock

14:38

dropping 13% doesn't change the fact

14:41

that their client switching costs are

14:43

measured in years of institutional pain.

14:46

Not in API calls. But we're not done.

14:48

What about organizational inertia? The

14:50

Citrini scenario assumes companies cut

14:52

headcount rationally and rapidly as AI

14:55

capabilities improve. Companies are not

14:57

rational actors. In practice, large

15:00

organizations don't work that way.

15:01

Headcount decisions are filtered through

15:03

HR policies, through employment law,

15:05

through union agreements, through

15:07

severance obligations, through

15:08

institutional knowledge preservation,

15:10

through management politics, and the

15:12

simple fact that most executives have

15:15

never managed an AI transition and do

15:17

not know what they do not know. The gap

15:20

between "Claude can technically do the

15:22

parts of this job that matter" and "we've

15:24

reorganized our workflows and retrained

15:26

our remaining staff and built QA

15:28

processes for AI output and we've

15:30

confidently reduced headcount." That's an

15:32

enormous gap. I've seen firsthand how

15:36

long it can take to go from AI strategy

15:39

to pilot program. It takes so long. I

15:41

have seen multiple cases where big

15:44

company pilot programs are abandoned

15:47

because the very piece of AI capability

15:50

that they worked on is no longer

15:52

relevant because AI has moved so fast

15:54

past it. You know what a good example of

15:55

that is? RAG. Everyone was excited about

15:58

RAG in early 2025. You hear a lot less

16:02

about it now because agentic search has

16:04

gotten better. You hear a lot less about

16:05

it now because context windows have

16:07

gotten larger. And all of the people

16:09

that spent an inordinate amount of time

16:11

fine-tuning RAG systems for their wikis

16:13

are pretty much regretting it. Companies

16:16

move slowly. Here's another one.

16:18

Cultural inertia. Yes, that's different

16:20

from organizational inertia. Most people

16:23

still don't use AI in their daily work.

16:25

I know lots of those people. They are my

16:27

friends. Yes, I have friends who don't

16:28

use AI. The adoption curves are real,

16:31

but they're way, way slower for most

16:33

people than the capability curve on AI.

16:35

When Toby Lütke, one of the most

16:37

technically fluent CEOs on the planet,

16:40

running a company whose entire business

16:42

is tech, when he has to issue a

16:44

companywide mandate in April 2025 saying

16:48

reflexive AI usage is now the baseline

16:50

at Shopify, when he has to build it into

16:52

performance reviews in order to get it

16:54

adopted, that tells you something

16:56

important about how slowly even

16:59

high-performing organizations change

17:02

their cultural behaviors. Toby was

17:04

really explicit about this on the

17:06

Acquired podcast. He said using AI well

17:09

is a skill that needs to be carefully

17:11

learned by using it a lot. He's right.

17:14

He talked about using what he calls a

17:16

"Toby Eval": he applies this first to

17:18

himself where he has a personal folder

17:20

of prompts he runs against every single

17:22

new model release systematically probing

17:25

capabilities as if he was a QA engineer

17:28

running unit tests. And he says that the

17:30

skill of learning to prompt AI well, of

17:33

learning to give AI all the context it

17:36

needs to write a really coherent answer

17:39

without additional search has made him

17:41

better at everything else in his job. I

17:43

actually will agree with that as someone

17:44

who's worked a lot on prompting. I feel

17:46

like I'm a much clearer communicator

17:48

because I am a prompter. But regardless,

17:51

step back for a minute with me. Toby is

17:53

Toby. He is a one-percenter, AI-fluent CEO. Do

17:57

you think all of the personal work that

18:00

Toby put in, all of the cultural work

18:02

that Toby put in is something that a

18:04

mid-market manufacturing firm in the US

18:07

is going to replicate? Is that CEO going

18:10

to do what Toby did? No. Now multiply

18:12

that mid-market manufacturing firm times

18:15

a million. Look at all the other

18:17

businesses that are led by leaders who

18:20

are not as AI fluent as Toby. Cultural

18:22

inertia is real. The last inertia force

18:26

I'll call out is trust inertia.

18:28

Enterprises do not and should not trust

18:31

AI output by default. And the cost of

18:34

figuring out how you formally scale

18:37

verification systems is really high.

18:40

Unless you're willing to put in the

18:42

capital to invest in verification as a

18:46

competency, you're not going to get to

18:48

the point where you trust AI enough to

18:50

let it do the kind of high-leverage work

18:52

that Citrini needs you to do for their

18:55

memo to come true. And most

18:57

organizations don't have the capital for

19:00

that kind of investment. And most of

19:02

them frankly don't have the stomach

19:04

because moving your workforce from "I

19:07

have to do this work" to "I have a new

19:09

skill: verifying the AI at scale"

19:12

is really really hard. Figuring out how

19:14

to do that in a way that helps you go

19:16

faster is even harder. And all along the

19:19

way you have to build institutional

19:21

trust to deploy that AI at scale. You

19:24

have to show that you have the

19:26

appropriate guardrails, the appropriate

19:28

audit trails, the appropriate human

19:29

oversight. That takes time that no

19:32

amount of benchmark improvements can

19:34

compress. Look, these four forces don't

19:37

mean that AI is never going to transform

19:39

the economy. All they mean is that it

19:42

won't transform the economy on the

19:44

timeline the stock market is pricing,

19:47

frankly, in either direction. The

19:48

doomers require a speed of labor

19:50

displacement that social inertia simply

19:53

won't permit. And the boomers require a

19:55

speed of adoption and integration that

19:58

organizational reality won't permit.

20:00

What actually happens is slower than

20:02

both, messier than both, and far more

20:05

unevenly distributed than either

20:07

narrative allows. Here's how I think

20:09

about it. Imagine two curves on the same

20:11

chart. The first curve is really

20:13

familiar to you if you listen to me.

20:15

It's AI capability. It goes up really

20:17

fast. Model intelligence, reasoning

20:19

depth, agentic endurance, you name it. I

20:22

can point to any number of metrics and

20:23

they all go up really fast. Gemini

20:25

doubled its reasoning in just three

20:26

months. There's an example. The second

20:28

curve is societal dissipation. And we

20:31

never talk about it and we should. The

20:34

rate at which those AI capabilities

20:37

actually permeate the economy and change

20:40

how work gets done, how money flows, how

20:42

institutions operate. This curve is way,

20:45

way flatter. It's governed by the

20:47

inertia forces I talked about. It does

20:49

compound over time, but it starts from a

20:51

really low base and it goes really

20:54

slowly. The gap between these two

20:56

curves, the really fast exponential

20:58

curve for AI and the really slow

21:00

societal dissipation curve is where we

21:03

all live today. And it's the gap that

21:06

explains almost everything that seems

21:08

confusing about this current moment. It

21:10

explains why AI capabilities are

21:12

stunning and the economic disruption is

21:14

still modest. It explains why the stock

21:16

market frankly cannot make up its mind

21:19

because it's simultaneously pricing

21:21

incredible return on investment for AI

21:23

capabilities and also pricing incredible

21:26

disaster. It explains

21:28

why the doom narrative and the boom

21:30

narrative both sound compelling. It

21:32

explains why a blog post can crash a

21:34

stock. But there's something that's much

21:36

more important than all of that

21:37

narrative explanation that these two

21:39

curves do, and that is reveal a

21:43

specific and very large economic

21:46

opportunity. That opportunity exists for

21:48

you and for me and for a bunch of

21:50

businesses precisely because this gap is

21:52

wide. If AI capabilities were

21:54

irrelevant, there would be no advantage

21:56

to adoption. Guys, do we see no

21:58

advantage to adoption? No, we do not.

22:00

It's the gap. It's the fact that the

22:02

tools are powerful but very unevenly

22:05

distributed, understood by very few, and

22:08

integrated by even fewer. That's the gap

22:10

that creates asymmetric economic returns

22:14

for you and me and for anyone who wants

22:16

to invest seriously in their AI

22:19

capability set. And that's true for

22:20

companies, not just for people. The

22:22

people and firms operating at the

22:24

capability frontier while the rest of

22:27

the economy moves at the dissipation

22:29

rate are capturing an outsized share of

22:31

economic benefit. And because social

22:34

inertia is so strong, the advantage that

22:36

we're getting here does not erode very

22:39

quickly. It persists. It compounds. And

22:42

it may persist and compound for a whole

22:45

lot longer than a lot of the models

22:47

predict because the models are not

22:48

really accounting for how slowly

22:50

societies tend to change. This is not

22:52

the same as saying learn AI and you'll

22:54

be fine. By the way, it's more specific

22:56

and it's more structural than that. The

22:58

capability dissipation gap means that

23:01

the economic rewards for early

23:04

aggressive adoption are higher and more

23:07

persistent than anyone is currently

23:08

modeling. The bears assume the gap

23:11

closes really fast with rapid labor

23:12

displacement. The bulls assume the gap

23:15

closes really fast with rapid technical

23:17

adaptation. Both are wrong about the

23:19

speed. The gap stays wide, and while it

23:22

stays wide, the people on the right side

23:25

of it accumulate advantages that

23:27

compound with every single model

23:29

release. Now the implications of this

23:30

gap play out really differently

23:32

depending on your scale. Frankly, large

23:35

firms are positioned to win on every

23:38

dimension except one and that may be the

23:41

one that matters the most. Like start

23:42

with capital advantage. Large firms have

23:44

the money to spend 20 grand a month on

23:47

an AI agent if that's what OpenAI wants

23:49

to charge. They have data advantages.

23:51

They have decades of proprietary

23:53

information. They have distribution

23:54

advantages. They have existing customer

23:56

relationships that create deployment

23:58

surface area. And they have the budget

24:00

for extensive verification and

24:02

compliance infrastructure if that's what

24:03

they need. But they carry the

24:07

full weight of organizational inertia.

24:09

Every new AI workflow has to survive

24:12

procurement, legal review, security

24:14

audit, you name it. So it can take 18

24:16

months to go from "this tool will save us

24:18

$10 million a year" to actually saving

24:20

the money. The only exception to this is

24:23

a highly involved founder like Toby.

24:25

Those are the wild cards in the pack. If

24:27

you have a really aggressive AI friendly

24:29

founder like Toby at a large company,

24:30

that can change. Small firms and

24:32

individuals (and that difference is

24:34

blurring) have the opposite profile.

24:37

We lack the capital. We lack the data.

24:39

We lack the distribution. But we have

24:41

the one thing the big companies don't

24:44

and that is speed. The capability

24:46

dissipation gap creates an asymmetric

24:48

advantage for speed and for anyone who

24:51

can collapse the integration timeline.

24:53

So a solo consultant who can integrate

24:55

AI into their workflow today is

24:58

operating at the capability frontier

24:59

while their competitors are still doing

25:01

quarterly meetings. The practical

25:03

heuristic is really "today." One of the

25:06

things that marks people who are AI

25:08

native is they think in terms of the

25:10

next couple of hours or get it done by

25:12

the end of the day. They are not coming

25:14

back and talking to me about we'll get

25:16

it done in two weeks. They're not coming

25:17

back and saying can we do it next month,

25:20

next quarter. And in the cases where big

25:22

companies can move to that way of

25:24

operating, which is an enormous cultural

25:25

change that hits cultural inertia, etc.,

25:28

they have tremendous advantages. But for

25:30

everybody else who's smaller, who's

25:33

missing the capital, who's missing the

25:35

resources, they do better if they can

25:38

get on that speed train, if they can

25:40

leverage the advantage of being small. I

25:44

think Toby understands this

25:45

instinctively, and I think it's worth

25:46

looking at a case study from Shopify as

25:49

a result. Toby's mandate with AI is not

25:51

"use AI when it's convenient." It's

25:53

"demonstrate why AI can't do this" before

25:56

you're allowed to ask a human to do it.

25:58

And he treats model evaluation as a

26:00

personal discipline, right? He's running

26:02

structured evals on his own

26:05

time and growing his test harness over

26:07

time. Toby also requires AI exploration

26:10

in the prototype phase of every single

26:13

project. Not because the output will be

26:15

production quality, but because even if

26:17

AI fails at the task, you now have an

26:19

eval for the next model. That last point

26:21

deserves emphasis. When Toby makes a

26:24

junior employee test their project

26:26

against an AI tool, he's not expecting

26:28

the AI to succeed. That's not the goal

26:31

here. He's building organizational

26:33

muscle memory. He's ensuring that when

26:35

the next model release drops, and it

26:37

will, his company has a pre-built

26:40

evaluation framework that immediately

26:42

reveals what's newly possible. He's

26:45

investing in the rate of dissipation

26:47

within his organization. Most other

26:49

companies are trying to run the AI race

26:51

with the same tools they brought to

26:53

cloud and Toby is busy shortening the

26:55

track and focusing on how he actually

26:58

can get adoption with teeth. Toby made a

27:00

really sharp observation on the podcast

27:02

this week that stuck with me. He pointed

27:04

out that the best chess game every year

27:07

for the past 20 years has been played by

27:09

machine versus machine and nobody

27:11

watches those games. But everybody in

27:14

chess knows who Magnus Carlsen is. We

27:16

don't actually care about the chess. It

27:18

turns out we care about the humans

27:19

playing the chess. Toby sees this as the

27:23

key insight people get wrong about AI.

27:25

The tools are instruments to be played.

27:28

They're not replacements for the player.

27:30

The craft still matters. The judgment

27:32

still matters. What changes is the

27:34

ceiling of what a skilled player can

27:36

achieve. Look, if you've dug in this

27:38

far, I want you to walk away with three

27:40

things. First, please recontextualize

27:42

that stock market activity. I don't know

27:45

what role you play. Maybe you're a

27:46

passive investor. Maybe you don't invest at

27:47

all. The memes are still there and the

27:49

memes will get into tech and get into

27:51

your company and affect you. The AI

27:53

scare trade is creating mispriced assets

27:56

and organizational chaos. Some of the

27:58

companies getting hammered are going

27:59

to face real disruption, but on a

28:01

timeline measured in years, not in the

28:03

weeks that the market is pricing.

28:05

Meanwhile, the market isn't pricing in

28:06

the buy side of any of this at all. What

28:09

do companies do with the savings from a

28:11

40% reduction in software costs? What

28:14

happens to the $42 billion that gets

28:16

redirected from real estate commissions

28:18

to home buyers? No one's investing in

28:20

that. The doom narrative just doesn't

28:22

have a place for it. It doesn't drive

28:23

clicks. Second, recontextualize those

28:26

doomer narratives, too. The bear case

28:28

for AI is built on real economic forces.

28:31

Demand side effects of income

28:33

redistribution from workers to capital

28:35

owners, potential savings gluts. But the

28:38

conditions required for a full economic

28:40

contraction, the thing that is making

28:42

everybody panic right now are really

28:44

extreme. And when you get an economist

28:46

from the University of Chicago modeling

28:48

out these scenarios and basically saying

28:50

they are too unrealistic to hold in

28:52

practice, don't read that as

28:53

dismissiveness. It is some needed rigor.

28:56

That is a great antidote to some of the

28:58

panic going around in both investor and

29:00

tech circles today. The doom narrative

29:03

is useful as a policy warning. We should

29:06

absolutely be thinking about how we can

29:09

support job and career transitions. We

29:11

should be thinking about broader capital

29:13

ownership. But the doom narrative is not

29:17

particularly useful as a career planning

29:19

framework or as an investment thesis.

29:21

It's a meme, right? It's 10 to 50

29:24

times more viral than the counter

29:25

evidence and you should calibrate

29:27

accordingly. Third, and this is by far

29:29

the biggest one, map the capability

29:32

diffusion gap as it applies to you in

29:35

your situation. The most valuable thing

29:38

you can do right now is not learn AI in

29:40

the abstract. That's 2024 advice. That's

29:42

table stakes. The valuable thing to

29:45

figure out is where you sit relative to

29:47

the exponential curve and the flat

29:48

curve. Are you operating at the

29:50

capability frontier? Are you testing new

29:52

models regularly? Are you integrating AI

29:54

into your real workflows? Are you

29:56

building evaluation frameworks for your

29:58

domain or are you kind of content at the

30:01

diffusion rate? You're aware that AI

30:02

exists. Maybe you use it occasionally,

30:05

but fundamentally you're working the

30:07

same way you did 2 years ago. The gap

30:09

between those two positions is where

30:11

economic value is concentrating in the

30:13

next 2 or 3 years. And because social

30:15

inertia is so strong, that gap actually

30:18

isn't going to close as quickly as

30:19

people think. The person who spent the

30:21

last year building genuine AI fluency in

30:23

their domain is therefore not just

30:25

learning a tool. They're building an

30:27

asset that compounds. Every model

30:30

improvement makes that asset more

30:32

valuable, not less. Because each new

30:34

capability lands on a foundation of

30:36

practical understanding that takes real

30:38

time with the model to develop. The

30:40

career move right now is to become the

30:43

person in your organization who can walk

30:45

into a room of panicking executives (and

30:49

there are a lot of panicking executives

30:51

right now) and say with genuine

30:51

authority, I've tested this. Here's what

30:53

AI can actually do in our actual

30:55

workflow. Here's what it cannot do. Here

30:57

is the implementation plan. Here's the

30:59

budget and here's the timeline. That

31:01

person does not exist in most

31:03

organizations. The technical people

31:05

understand the models. The business

31:07

people understand the workflows but not

31:09

the technical side. And the consultants,

31:11

they just understand the frameworks and

31:12

talk. But nobody can cross all three.

31:15

And if you can bring all three of those

31:17

together, you have an incredibly

31:18

valuable skill set in 2026. The doom

31:21

narrative is a useful warning. The boom

31:23

narrative is a useful aspiration. We

31:25

should study that, too. Neither is a

31:28

useful plan for you or me or our

31:30

careers. Your plan, that's the one

31:33

that matters. It should be specific. Map

31:35

which of your problems are reasoning

31:38

problems. Point them at that model.

31:39

Which are effort problems? Which are

31:41

coordination problems? Test which models

31:43

handle which tasks in your real workflow

31:46

like Toby does. Build the evaluation

31:48

frameworks that let you immediately

31:49

exploit each new model release. You are

31:52

trying to collapse the gap between

31:54

capability and integration in your

31:56

domain because every month that gap

31:59

stays wide is a month you're leaving

32:01

returns on the table. Stop worrying

32:04

about the doomer narrative. Do not pay

32:06

too much attention to whatever the next

32:09

investor-driven stock selloff is. I am

32:12

sure there will be another one. Pay

32:13

attention to the capability gap. Pay

32:16

attention to where AI is going and how

32:19

slowly it's actually getting adopted by

32:21

society. That gap is the greatest

32:24

generational opportunity anyone in the

32:27

workforce is going to see. That is where

32:29

you should be spending your time. Best

32:30

of luck.
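The closing plan (label each recurring problem as a reasoning, effort, or coordination problem, then route it accordingly and keep testing) can be written down concretely. A toy sketch; the task names, categories, and routes below are invented examples, not recommendations:

```python
# A toy sketch of the task-mapping exercise: label each recurring task
# by problem type, then record what currently handles it best.
TASKS = {
    "draft the quarterly board memo": "reasoning",
    "reformat 400 CRM records": "effort",
    "schedule the cross-team launch review": "coordination",
}

ROUTES = {
    "reasoning": "frontier reasoning model",
    "effort": "fast, cheap model",
    "coordination": "human plus calendar tooling",
}

def route(task: str) -> str:
    """Return where a task should go based on its problem category."""
    return ROUTES[TASKS[task]]

for task in TASKS:
    print(f"{task} -> {route(task)}")
```

The table is the deliverable: revisiting it after each model release is what closes the capability-integration gap in your own domain.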

Interactive Summary

The video analyzes two prevailing narratives about AI's economic impact: the "doom" scenario, where AI-driven labor displacement leads to economic collapse, and the "boom" scenario, where rapid AI adoption drives immense growth. The speaker argues that both overlook "social inertia": the regulatory, organizational, cultural, and trust-related barriers that dramatically slow AI's deployment, adoption, and deep integration into the economy. This creates a "capability diffusion gap": the vast difference between rapidly advancing AI capabilities and the much slower rate at which society incorporates them. Rather than being a problem, that gap represents a massive generational economic opportunity for the individuals and businesses that bridge it through active AI adoption, evaluation, and workflow integration, gaining persistent advantages.
