Why $650 Billion in AI Spending ISN'T Enough. The 4 Skills that Survive and What This Means for You.

Transcript

0:00

Google is spending $185 billion on AI, and it's still not enough. They just told investors that's how much they're spending on AI infrastructure, and the stock dropped 7%. Not because Wall Street thinks the number is too high this time. It's because Wall Street is starting to realize it might not be high enough.

0:17

So, Alphabet reported Q4 earnings on February 4th, the same week a markdown file erased $295 billion in enterprise software market cap. The earnings themselves were immaculate from Wall Street's perspective. Revenue exceeded $400 billion for the first time in company history. Earnings per share crushed it. Cloud revenue accelerated. If you're an investor, it's everything you want to see, right? By every conventional measure, the company seemed to be performing at the peak of its powers. And then Sundar Pichai announced the capex number: somewhere between $175 and $185 billion in a single year, 2026. That's roughly double the $91 billion Google spent in 2025, which itself was a 74% increase over 2024. Analysts had been expecting around $120 billion. Google blew past that expectation by 50%. CFO Anat Ashkenazi broke down the allocation: about 60% of that money pile is going to servers, and 40% is going to data centers and networking equipment. Sundar described maintaining a, quote, "brutal pace" to compete on AI. I think that word choice was very deliberate. This is not necessarily a company making a measured strategic investment, although they'll probably portray it that way. This is a company sprinting because it believes the cost of slowing down is existential.

1:33

Now, the stock recovered most of its after-hours losses by the next day's close. This is not about the stock per se, but the initial 7% drop should tell you what the market's instinct was before the analysts had time to write their notes. $185 billion sounds like too much money. It sounds reckless. It sounds like a company that has lost discipline. And the market's instinct is wrong. The speed at which it's becoming obviously wrong, that's the real story.

2:03

You know, six months ago, this was all a bubble. Do you remember that? If you rewind to mid-2025, the dominant narrative in financial media was that AI infrastructure spending had just decoupled from reality. Goldman Sachs published a widely cited research note asking whether big tech was spending too much on AI with too little to show for it. Sequoia's David Cahn wrote his "$600 billion question" analysis, pointing out that the total revenue of all AI companies combined could not justify the infrastructure being built. Jim Covello at Goldman called generative AI overhyped. The list goes on and on. This was the consensus, and it was built on real numbers. Training runs cost hundreds of millions of dollars.

2:43

Then agents happened. Not the concept of agents; people have been talking about AI agents for years, of course. The actual deployment of agents into production workflows, consuming absolutely massive amounts of inference tokens and delivering value so obvious that the market has started to wake up and can't ignore it anymore. Anthropic's Claude Co-work shipped plugins that can triage legal contracts, automate compliance reviews, and generate audit summaries. That legal plugin is just 200 lines of structured markdown, and it wiped 16% off Thomson Reuters. OpenAI is in the game, too. They launched Frontier, an enterprise agent platform, and signed up HP, Intuit, Oracle, State Farm, and Uber, all as launch customers, not for demos, but for production deployment. Coding agents like Cursor, Codex, and Claude Code have crossed from useful autocomplete, which was the joke at the beginning of last year, to autonomously generating thousands of production commits in a single year.

3:39

The agents didn't just work. They consumed compute at a scale nobody had modeled before. Every agent running a contract review is making dozens of inference calls. Every coding agent generating a thousand commits an hour, a real number, by the way, is burning tokens continuously, around the clock, at a rate that makes chatbot usage look like a rounding error. When you multiply that by enterprise-scale deployment across legal, finance, engineering, and compliance, the inference demand curve just goes straight up. It goes vertical.

4:11

And just like that, the narrative of the bubble has flipped. Not gradually; in weeks. The question has stopped being "Is AI overhyped?" It has started being "Do we have enough compute for what's about to happen?" The $285 billion SaaS apocalypse wasn't just a repricing of software companies. It was the market absorbing, in real time, that AI agents are powerful enough to restructure entire industries, and that the infrastructure to run those agents at scale does not exist yet. Derek Thompson captured the shift with precision: the odds that AI is a bubble declined significantly, and the odds that we're quite underbuilt went up. He's right. You cannot simultaneously believe that AI agents are powerful enough to crash enterprise software and also that the infrastructure spending to support those agents is excessive. You've got to pick one.

5:03

I need you to understand how big the scale of the bet is to understand how wild it is that we might be underbuilt. Google's not alone in that giant capex spend; that's the first thing to understand. Amazon is spending even more: they announced roughly $200 billion in 2026 capex. Microsoft is running at about $145 billion annualized. Meta guided to between $115 and $135 billion, driven by its Superintelligence Labs buildout. Even Oracle, which barely registered in cloud infrastructure a couple of years ago, is deploying tens of billions. Add it all up, and the five largest tech companies on Earth are going to spend somewhere close to $700 billion in one year on AI infrastructure. Goldman is projecting that will rise to well over a trillion between 2025 and 2027. That's probably conservative. These are numbers that do not fit neatly into existing frameworks for evaluating corporate investment, and I think that's why market reactions have been so wild. Microsoft's capital intensity has reached 45% of revenue, historically unthinkable for a software company. Amazon's capex has already exceeded its total annual free cash flow and forced them to tap the debt markets. Google is about to spend more on infrastructure in a single year than the entire GDP of Ukraine. The natural reaction is that this has to be a bubble. That's what people assumed, and for six months that reaction was very defensible on Wall Street. It isn't anymore.

6:29

And look, I'm not saying the bear case was stupid. I'm just saying it aged out. OpenAI's annual recurring revenue hit $20 billion in 2025. Impressive, but that's the largest AI company in the world, and its revenue represents roughly 3% of the infrastructure investment being made on its behalf. The math doesn't come close, which is what the bears have been saying. Not this year, not next year. Every previous infrastructure boom has looked like this, spending wildly ahead of revenue, and everyone has assumed it's going to end in tears like it's ended in tears before. But the conclusion the bears drew died this week. The SaaS apocalypse is proof of demand. Not projected demand, not theoretical demand, but revealed demand, priced by the market in real time. If AI agents can generate $285 billion of software sell conviction, then we are restructuring how enterprise economics work in real time, and it's happening around AI agents. And it's not just about market reactions. Anthropic went from fewer than a thousand business customers two years ago to over 300,000 by September of 2025, many more now, and they reached 44% enterprise penetration by January 2026. OpenAI's revenue has tripled. Sarah Friar, their CFO, says enterprise now represents roughly 40% of the business. And it's no coincidence that the day after Google's capex announcement, OpenAI launched Frontier, an enterprise agent platform, and signed up that who's who of enterprise customers, like Uber and Oracle, at launch. The bears were making the right argument six months too late. There just isn't space in the world right now for bears who can't recognize that token burn is going to go up a thousandfold.

8:19

You know, every major economic era begins this way. Massive overbuilding of infrastructure, investor panic, the infrastructure looks like a catastrophic misallocation of capital. And then, a few years later, somebody figures out what the infrastructure is actually for. The railroads did this first, right? American railroad mileage doubled in just eight years between 1865 and 1873, and that looked like way too much, way too fast. Five years of depression followed: 121 railroads went into bankruptcy and took out 18,000 businesses. But then a guy named Philip Armour figured out refrigerated railroad cars, and suddenly you could ship fresh meat from Chicago to New York and then to small towns everywhere, and suddenly you had an application for railroads. Fiber optics repeated the same pattern a century later. Between 1996 and 2001, telecoms issued over $500 billion in bonds and laid 90 million miles of cable. Then the bubble burst, and the wreckage was staggering. A trillion dollars in debt was written off. 95% of installed fiber went dark. And then YouTube launched on bandwidth that cost almost nothing, and Netflix pivoted from DVDs to streaming. The economy that fiber enabled, the streaming, the cloud, the entire modern internet, became the largest in human history, and it was all underpinned by the commitment to fiber in the 1990s. So that's been the pattern, right? Massive investment, crash, and discovery.

9:50

But this cycle has a structural difference that changes the math, and nobody's talking about it. Railroads were dumb pipes. Fiber was a dumb pipe. AI infrastructure is not a dumb pipe. Google, Anthropic, OpenAI, they're not really selling bandwidth. They're not selling storage. They're selling intelligence. Every inference call is a purchase of cognitive capability. The model is the product, and the infrastructure exists to serve the model at scale. When an agent reviews a contract or writes code or manages a supply chain, the value it delivers flows through the model provider's API. The infrastructure and the intelligence are vertically integrated in a way that railroads and fiber never were. And this means that companies building AI infrastructure are positioned to capture value from the applications built over the top. Not just hosting fees, but an actual share of the cognitive work those applications perform. That is a very different economic structure than any previous infrastructure buildout. It doesn't guarantee that any of these companies are going to win, of course, but it does mean the analogy to telecom companies going bankrupt is kind of misleading. The model makers are not laying dumb cable. They're selling the thing that makes all of our computers valuable.

11:06

Now, there's also an important distinction in the AI infrastructure conversation that most Wall Street observers have been missing, and it's the key to understanding why the bubble-to-underbuilt narrative has flipped so quickly. The first wave of AI infrastructure spending, from 2023 out to mid-2025, was primarily about training. Build those massive clusters of GPUs. Train foundation models. Training is expensive, but it's also bursty, right? You need a lot of compute for months, and then the model's done. The investment is very front-loaded. This is the phase a lot of the bears were analyzing when they called it a bubble. But the phase we just entered is about inference. It's about running those trained models at scale, continuously, for millions of users and, frankly, millions of AI agents, 24 hours a day. Now, inference is cheaper per unit, but it never, ever stops.

11:52

Agents change the inference math in a way that nobody really priced in, outside of a few optimistic folks in San Francisco. A human using ChatGPT will generate a modest inference workload. An agent reviewing contracts or writing code is going to generate a thousand times a human's workload. If you're an agent, there's no way your consumption comes anywhere close to par with a human's, because of the pace at which an agent executes. Now multiply that thousandfold factor by every workflow the SaaS apocalypse said was about to get automated. Think about contract review. Think about financial auditing. Think about data analysis. Think about CRM management and customer service. The enterprises signing up for OpenAI's Frontier or Claude Co-work are not thinking of it as deploying one agent. They're deploying fleets of agents.
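
To make that multiplication concrete, here is a back-of-the-envelope sketch in Python. Every input is an illustrative assumption of mine rather than a figure from the video: a placeholder daily token count for a chatbot user, the transcript's rough thousandfold agent multiplier, and made-up fleet sizes per workflow.

```python
# Back-of-the-envelope sketch of the agent inference multiplication described above.
# Every input here is an illustrative assumption, not a reported figure.

CHATBOT_TOKENS_PER_DAY = 20_000  # assumed daily tokens for a casual chatbot user
AGENT_MULTIPLIER = 1_000         # the transcript's rough "thousand times a human workload"

# Hypothetical enterprise deployment: workflow names and fleet sizes are made up.
agent_fleets = {
    "contract_review": 2_000,
    "financial_audit": 1_500,
    "coding": 5_000,
    "customer_service": 10_000,
}

agent_tokens_per_day = CHATBOT_TOKENS_PER_DAY * AGENT_MULTIPLIER
total_agents = sum(agent_fleets.values())
fleet_tokens_per_day = total_agents * agent_tokens_per_day

# The same headcount using chatbots instead of agents, for comparison.
chatbot_tokens_per_day = total_agents * CHATBOT_TOKENS_PER_DAY

print(f"Per-agent daily tokens:  {agent_tokens_per_day:,}")
print(f"Total agents (assumed):  {total_agents:,}")
print(f"Fleet daily tokens:      {fleet_tokens_per_day:,}")
print(f"Demand vs. chatbot use:  {fleet_tokens_per_day // chatbot_tokens_per_day:,}x")
```

The specific numbers don't matter; the point is that whichever baseline you assume, the agent multiplier dominates the total.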

12:44

And that's why the narrative has flipped so violently. Wall Street has finally figured out that $650 billion or $750 billion, whatever the number turns out to be this year, is only insane if you're building clusters for chatbots and just training new models. That's not how it works right now. We are serving models for agents. It's an entirely different world. And that 60/40 split Google's CFO talked about? Google understands this. They're not building training clusters anymore. They're building inference capacity for a world where AI agents are the primary consumers of compute. You don't build 60% servers and 40% data centers if you're not in the inference business.
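
For scale, here's what that split works out to in dollars. A minimal sketch: the 60/40 ratio comes from the earnings call as described above, but using the midpoint of the guided range is my own simplification.

```python
# Rough dollar breakdown of the 60/40 capex split described above.
# Taking the midpoint of the $175-185B guided range is an assumption for illustration only.
capex_low, capex_high = 175e9, 185e9
capex_mid = (capex_low + capex_high) / 2   # $180B midpoint

servers = 0.60 * capex_mid                 # roughly $108B toward servers
facilities = 0.40 * capex_mid              # roughly $72B toward data centers and networking

print(f"Servers:                   ${servers / 1e9:.0f}B")
print(f"Data centers + networking: ${facilities / 1e9:.0f}B")
```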

13:26

And even that framing understates how big the gap is right now. Fidji Simo, OpenAI's CEO of Applications, said something this week that most people glossed over. She said, "We spent months integrating, and we didn't even get what we wanted." The CEO of Applications at the most valuable AI company in the world said enterprise AI integration is harder than expected. Not because the models aren't great, but because the infrastructure to connect AI agents to enterprise systems is not mature enough. The plumbing is not where it needs to be. The connectors aren't where they need to be. The security layers aren't where they need to be. Demand is exploding, but it's way outrunning the plumbing, and that plumbing gap is what the $650 to $700 billion is trying to close.

14:11

You know, every infrastructure inversion has a window, usually, I don't know, half a dozen years, call it three to seven, where the infrastructure is being built and the companies that will eventually use it are just getting started. The companies that build during that window end up becoming the platforms, and the companies that wait become the tenants. Amazon built AWS between 2003 and 2006 and had the dominant cloud platform before most enterprises even knew they needed one. The companies that waited for cloud to prove itself ended up paying Amazon's margins for the next 20 years. That window is open now on AI infrastructure, but the timeline is compressed in a way that should concern anyone who thinks they can wait. Look, railroads took something like two decades to overbuild before the economy justified them. Fiber took a decade. AWS took six years. It's compressing. The current cycle is moving at roughly 18 months, because the demand signal does not take years to arrive. It arrives fast, because agents are developing that fast.

15:06

Google's $185 billion spend makes sense when you understand that compression. They're not spending too much. They're spending at the pace required to build the platform layer before somebody else does. The same is true for Amazon, for Microsoft, for Meta. None of them can afford to wait, because the lesson of every prior infrastructure inversion is that the platform builders capture the economics of everything built on top. If you miss that window, you're renting someone else's infrastructure for the next decade. The companies that look like they're burning cash in 2026, the big five, will look like they were laying the foundation when we look back from 2028. And the companies that showed, quote, "discipline" by spending less are going to end up missing the most important infrastructure buildout since cloud.

15:53

So where does this infrastructure actually go? Who gets to run on it? The answer requires taking the current trajectory really seriously, and most of us are not doing that, because that trajectory is profoundly uncomfortable to our brains. Code proved to be the breakthrough application for agents, and the reason is worth understanding, because it tells you where we're headed. Code is the one domain where an agent's output is immediately and objectively verifiable. You run it and it works or it doesn't. That feedback loop is the kind of iterative cycle that agents excel at. There's no ambiguity, no subjectivity. It works or it doesn't. And that's why coding agents crossed from useful to transformative so quickly.
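
That loop, generate, run, check, retry, is simple enough to sketch. Here is a minimal illustration in Python of the shape of the cycle, not any vendor's actual agent: propose_patch stands in for a model call and is entirely hypothetical, while the test run plays the role of the objective verifier the transcript is describing.

```python
import subprocess
from pathlib import Path

def propose_patch(task: str, feedback: str) -> str:
    """Hypothetical stand-in for a model call that returns candidate code.
    A real agent would call an LLM API here; this sketch leaves it abstract."""
    raise NotImplementedError("wire up a model call of your choice")

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite; pass/fail is the objective verdict."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agent_loop(task: str, target: Path, max_attempts: int = 5) -> bool:
    """The generate -> run -> check -> retry cycle described above."""
    feedback = ""
    for _ in range(max_attempts):
        candidate = propose_patch(task, feedback)  # model proposes code
        target.write_text(candidate)               # apply it to the working tree
        passed, output = run_tests()               # verify by actually running it
        if passed:
            return True                            # the output provably works
        feedback = output                          # failures feed the next attempt
    return False                                   # hand back to a human after N tries
```

Every pass through that loop is another round of inference calls, which is exactly why autonomous coding burns tokens at the rate described earlier.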

16:32

Now, today, coding agents work in bursts. An hour here, a few hours there, guided and directed by humans. But the trajectory is really clear. Context windows are expanding. Working memory is multiplying. The ability of an agent to hold a code base in its head is expanding every few weeks. Not years, weeks. Opus 4.6 delivered 5x the working memory of 4.5 in just the space of a couple of months. If that pace holds, and there's zero evidence it is decelerating, then by the end of the year we're looking at agents that can do months of work. Think about what that means for infrastructure demand. An agent coding autonomously for a month, continuously generating and testing and refining, is consuming inference compute at a volume that no analyst model has properly accounted for. We're just not good at exponentials as humans. And code is just the domain where the feedback loop closed first. Legal analysis is next. Contract review has really clear success criteria. Financial auditing is similar. Medical diagnostics is similar. Engineering design is similar. Domains where output quality can be systematically evaluated are domains where agents can cross from useful to autonomous faster than people are planning for. The infrastructure that looks like an overbuild today is going to look like it was sized wrong in just a year or two. The agentic era is going to make everything we've spent so far look like a little down payment on what we need to spend.

17:55

You know what's interesting? This pattern is fractal. Just as the infrastructure inversion plays out at scale with the big model makers, it plays out for all of us as individuals. And the question it forces at each of those scales is the same: what do you have that's valuable when the infrastructure shifts underneath you? Google is spending $185 billion because they've calculated that the cost of underbuilding is existential. Not risky, existential. They'd rather be wrong and have spent too much than be right and have spent too little. Your career works the same way. And the question you need to answer honestly is: what human skills survive when agents can code for months, when they can review contracts, when they can generate production-quality work at machine speed? I'm going to suggest four things.

18:43

First, everyone talks about it, but we're going to get into it: taste. The ability to look at what an agent produces and know, not just analytically, not just by checklist, but by hard-won instinct, whether it's right, whether it's good, whether it solves the real problem or a poorly framed one. Agents can generate enormous volumes of competent output; we will be drowning in competent output before long. But they cannot yet reliably tell the difference between competent and extraordinary. They cannot reliably tell the difference between technically correct and strategically right. The people who can make that distinction, who have refined their judgment through years of doing the work, become exponentially more valuable when the cost of generating options drops to zero. Taste becomes a filter.

19:30

Number two, exquisite domain judgment, not general intelligence; agents are going to have that in abundance. I mean the specific, contextual, hard-to-articulate understanding of how a particular domain actually works. The lawyer who knows which clauses matter in the negotiation, not just which clauses need to exist. The engineer who knows which architectural decisions are going to create pain in 18 months or 30 years. The executive who knows which market signals are noise and which are structural. This knowledge is accumulated over years and encoded in intuition that agents can approximate but not yet replicate, because it depends on experience that is just not in the training set.

20:12

Phenomenal ramp is another skill: the ability to learn fast when everything is evolving fast. Not "I took a course on AI." It's the kind of learning where you're using the tools daily, your mental model is updating weekly, and you're comfortable operating at the frontier of capability even when the frontier has moved since last Tuesday. In a world where Opus 4.6 can come on the scene and everybody will be talking about it, and Codex will follow 20 minutes later, and then who knows what drops next week, it's the ability to absorb change at speed that matters. That's a meta-skill that makes all the other skills usable. The humans who can keep up with AI have an edge that just keeps compounding.

20:48

And last but not least, we need relentless honesty about where value is moving. This is the hard one, because it requires looking at our own work and asking which parts of it are really valuable and which parts an agent could handle better, cheaper, and faster. Most people don't want to do this inventory. It can be heartbreaking. It can be threatening. It can require admitting that some of the skills you spent years building are depreciating so fast they're nearly worthless. But the people who do the inventory, who do the work, who are honest about which parts of their work taste and judgment matter in and which parts are just execution and process, those are the ones who can reallocate their time toward the things that still matter before the market forces them to.

21:31

If you're waiting for AI to settle down before investing serious time in these skills, please don't. You will not come back from that bet. You are making the same bet as the companies that waited for cloud computing to prove itself in 2008. Stability is not coming. The pace is accelerating, not slowing down. And the gap between "I use AI tools" and "I have rebuilt how I work around what AI makes possible" is really the individual version of the gap between "we added AI features to the product" and "we built our architecture to be agent-first." The first approach feels productive. The second approach is what actually changes our outcomes.

22:12

This is now an agentic world. This is year one. The $185 billion Google is spending is not reckless. It's not aggressive. It's probably not enough. The market is going to look back on 2026 the way we look back at the early AWS data centers, the first transcontinental railroad, the fiber optic cables lying in the dark under the Atlantic. The foundation of everything that comes next is being built this year, and it's being laid in the year that agents proved they were real. And that matters as much for you and me as it does for those fancy companies spending those hundreds of billions of dollars. Good luck out there. I put together an agent guide for this one, because the more we practice, the better off we're going to be.

Summary

The video details the massive, unprecedented investment by tech giants like Google into AI infrastructure, totaling hundreds of billions, driven by the rapid emergence and deployment of AI agents. Initially met with market skepticism, this spending is now understood as crucial, shifting from model training to continuous, large-scale inference. Unlike previous infrastructure booms such as railroads or fiber optics, AI infrastructure integrates intelligence, allowing providers to capture value directly from applications built on top. The timeline for this buildout is significantly compressed, forcing companies to quickly become platform builders. In this agentic era, key human skills like taste/judgment, exquisite domain knowledge, rapid learning, and honest value assessment will remain vital as AI agents become increasingly autonomous.
