Going Slower Feels Safer, But Your Domain Expertise Won't Save You Anymore. Here's What Will.

Transcript

0:00

AI is collapsing futures, and most of us are missing what that really means. We think collapsing as in destroying. That's not what I mean here. Collapsing as in compressing is what people are missing, because AI is collapsing multiple different dimensions of our work lives into a single thread pointing to the future. And we're missing the deeper implications of that.

0:21

The first collapse is horizontal. Engineer, product manager, marketer, analyst, designer, ops lead: these used to be distinct career paths with very distinct skill sets. They're all converging now, very quickly, into a single meta-competency: orchestrating AI agents to get work done. If you cannot do that, none of the rest of the domain knowledge is going to matter in late 2026. And yes, I don't want to lose the fact that we still have folks with 10 or 15 years of experience in these individual domains, in front-end design, in being an operational lead, in deep back-end engineering. But you won't have value there unless you can do the orchestrating-AI-agents piece, certainly by late 2026 or early 2027. That is how fast this space is developing, and I don't think most of us are ready for it.

1:12

The second collapse is temporal: the leverage you thought you could build over the next five years. The way we've been trained to think about career ladders, as steady steps where you wait two or three years for the next promotion, that timeline is compressing into months. The rate of AI capability improvement nearly doubled in the last year, and it's just going to keep going faster. Both collapses point to one conclusion: now is what matters. Not your five-year plan, not your eventual intention to get up to speed on AI, because the future keeps arriving faster. Preparation means engagement. And I'll add one more piece here that I think is absolutely true across everyone who engages with AI productively.

1:54

This is an art you learn by doing. You do not get to learn to ride a horse by reading a book, as a friend of mine called out. You do not get to learn to swim by sitting in a deck chair and watching the ocean. You just have to get in. And that's very true of AI, because it's an experiential technology.

2:13

Let me go a bit deeper on the differentiation between knowledge work roles, because a lot of times when you hear "the knowledge work roles are collapsing, they're all going to be the same," it feels like a big claim. It feels overhyped.

2:26

Gartner is predicting that close to half of enterprise applications will integrate task-specific AI agents by the end of 2026. That's up from less than 5% in 2025, an eight-fold increase in just over a year. It's absolutely exploding. And 57% of companies, as of 2025, claim to have AI agents in production. Those agents can have varying degrees of competency, but the direction is nothing but up.

2:49

So what this means is that specific domain expertise is going to be mediated through these universal skills. The differentiation is going to be whether you can apply your marketing skills, your engineering skills, your finance skills, whatever it is, in an AI-agent-shaped way.

3:07

Think about what a product manager does today versus two years ago. The job used to require synthesizing customer feedback, writing specs, coordinating with engineering, and managing stakeholders. Now, increasingly, the job involves prompting models to draft specs and using AI to analyze customer data. You're often using agents to update tickets. You're using agents to build directly in production. Your entire job is radically different.

3:34

And that pattern repeats across every function. Legal teams using AI to review contracts are compressing jobs that took weeks into hours. Finance teams can now use Claude in Excel to build projections that used to take days. Customer success teams can run AI agents that handle 80% of initial inquiries, or 90, or 95. There is going to be a fundamental turnover of skills across every one of these job families. What used to be 50 different specializations is going to converge into variations on a single theme: humans directing AI, with good knowledge and good software-shaped intent, toward an outcome.

4:14

I've talked about software-shaped intent before. I think it's one of the biggest skills we're missing when we direct agents. We need to think in terms of what agents can deliver within the technical ecosystem they occupy.

4:27

Where is the agent's toolset? Where is the agent's memory? Where is the agent's workflow? When I direct the agent to do something, is the result going to look software-shaped? As in, is it going to be an interface that adequately reads and writes data so that I can solve the problem? Software is, fundamentally, leverage expressed in silicon. If you know how software works, and so much of software is just reading and writing data and presenting it in a way that's useful, and you start to think in those terms, you're going to be able to apply the specific domain knowledge you have in design, in finance, in customer success, and use AI agents more effectively, even if your job isn't building software.

5:07

This used to be a product-only thing or an engineering-only thing. The idea that we now work with agents is becoming universal, and the idea that we have to think in software terms is coming out of the technical box. It's coming out of engineering. It's coming out of product.
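To make the reading-and-writing-data framing concrete, here is a minimal sketch of a task expressed the way software expresses it: a named input to read, a transformation, and a named output to write. This example is purely illustrative and not from the talk; the file names and the `category` field are invented.

```python
# Illustrative sketch only: a task framed as read -> transform -> write,
# the "software shape" an agent can aim at. Names and fields are hypothetical.
import csv
import json

def summarize_inquiries(in_path: str, out_path: str) -> dict:
    """Read customer inquiries, count them per category, write a JSON summary."""
    counts = {}
    with open(in_path, newline="") as f:
        for row in csv.DictReader(f):              # read: where the data lives
            category = row["category"].strip().lower()
            counts[category] = counts.get(category, 0) + 1  # transform: the logic
    with open(out_path, "w") as f:
        json.dump(counts, f, indent=2)             # write: where the result goes
    return counts
```

A prompt with the same shape, such as "read the inquiries CSV, count inquiries per category, write the result as JSON", hands an agent an input, a transformation, and an output to aim at, which is what makes the intent software-shaped.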

5:23

It's coming for all of us. And I want to be clear: your expertise doesn't disappear here. It just becomes foundational rather than differentiating by itself. You need great domain knowledge to direct AI effectively; it's part of how seniors compete in a world where everyone has access to the same AI tools. But you have to be able to leverage that knowledge through AI. I think most people still think of that in terms of their specific domain. We have this sort of single-lane focus. And what I'm calling out is that we have a giant bottleneck on skilling: all of our skills are starting to converge around this one gigantic meta-skill of driving AI agents.

5:59

The second collapse I want to talk about is the temporal collapse I mentioned. This is really important, and we keep missing it. Career leverage is compressing into the present moment because AI is accelerating time.

6:12

Consider even just the SWE-bench coding benchmark. AI systems could solve 4% of its problems in 2023, and they had essentially solved the entire benchmark two years later. I don't know exactly where it will stand when you see this video, but it's around 90-95%. SWE-bench is saturated, and the fact that we saturated it is not even the most important thing. The doubling time to get that number up is shrinking. AI progress is accelerating.

6:36

Traditional career planning assumed you had time. Learn a skill, apply it for years, build expertise, get promoted, and eventually figure out how to leverage that expertise in leadership. That timeline gave you a sense that you could plot out your growth over time and get some breathing room. You could be strategic about when to invest your learning energy. And that assumption, if you take it at face value like you could in the 2000s and 2010s, is now catastrophically wrong, because you have to assume a career path where AI is gaining speed ever more rapidly.

7:11

And this creates a really tough dynamic for career planning. I don't want to sugarcoat that. The skills that will matter in 2027 are being defined now, by people engaging now. If you wait until the tech settles down, you're going to find that the early adopters have already built the workflows, established the norms, and captured the opportunities you were waiting for.
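The shrinking-doubling-time point can be put as simple arithmetic. The sketch below compares a capability score that doubles on a fixed 12-month cycle against one whose doubling interval keeps shrinking; the specific intervals are invented for illustration and are not measurements from the talk.

```python
# Illustrative arithmetic only: the doubling intervals below are invented
# to show how a shrinking doubling time compresses a timeline.

def months_to_grow(start, target, doubling_months):
    """Months elapsed before `start` reaches `target`, doubling once per interval."""
    value, elapsed = start, 0.0
    for interval in doubling_months:
        if value >= target:
            break
        value *= 2
        elapsed += interval
    return elapsed

# Four doublings take a score from 4 to 64 either way, but the calendar time differs:
steady = months_to_grow(4, 64, [12, 12, 12, 12])     # constant pace: 48 months
accelerating = months_to_grow(4, 64, [12, 8, 5, 3])  # shrinking pace: 28 months
```

Same capability gain, 20 months sooner; that compression, repeated over and over, is what collapses the planning horizon.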

7:32

They'll have two years of compound learning while you're still figuring out the basics. So I cannot promise you a time when things settle down. This is a chaotic period. There is no mature state to wait for. There is only a continuously steepening curve, and it's going to reward the folks who climb on early and go faster.

7:48

I compare AI to riding a bike. If you are going slow on a bike, it's really hard to balance, and you feel like you're never going to catch up. But experientially, when you go faster on a bike, the steadiness increases. Balancing gets easier. Kids have so much trouble learning this. They think if they go slower, they'll be safer, but they're actually safer going faster. And that is what you have to learn with AI. You're actually safer leaning in and going faster than you are going slower, because going slower forces you to constantly think about braking and stopping and figuring out how to fit this into your existing workflow. And I see so many of us acting like kids on a bike for the first time.

8:33

We're just trying to figure out how to go very slowly. I've got to say, AI is going too fast for that. You've got to get on the bike and go as quick as you can, because that's the easiest way to balance. People ask how I keep up; it's because I'm going pretty fast on the bike, and it feels really steady.

8:46

The old career model assumed your expertise appreciated over time. You would learn something valuable, it would stay valuable, and it would gradually compound. The new model is really different. Your expertise atrophies. It depreciates unless you continuously update it, and the depreciation rate is accelerating because AI progress is going faster. I'm not arguing for panic here; it's an argument for continuous engagement. The people who are thriving now are not the ones who go to an AI class, master it once, and then coast. They're the ones who develop the meta-skill of continuously learning and adapting as the tech evolves. The half-life of any given piece of specific AI knowledge is short, and it's getting shorter. The half-life of the learning habit around AI is getting longer and more durable.

9:33

If you doubt the magnitude of what's happening, follow the money. This is the biggest capex project in human history.

9:40

Big tech's combined AI capital expenditure was close to half a trillion dollars in 2025, and it's going to be well over half a trillion in 2026. In total, the big five (Amazon, Microsoft, Google, Meta, Oracle) plan to add roughly $2 trillion in AI-related assets over the next four years. This is a tremendous amount of operational investment in what these companies believe is the future. The money is committed. AI is happening, and it is going to define the next era of computing so thoroughly that we have to understand there is no other way there. The only way out and through is AI. And that's what I mean when I say it's collapsing timelines and compressing career trajectories. There is no career path through that does not include AI.

10:27

And things get uncomfortable at that point. Many people are resisting, and you don't have time to resist. If you tried ChatGPT in 2022, saw it hallucinate, and just left it, you don't have time for that anymore. You don't have time to say "I'll wait until it matures." That's like sitting by the bike and saying "I'll wait until it gets steadier." It's not going to get steadier. You don't have time to say "my job is immune." I've got news for you: it's not immune. Anytime you are touching a computer, you are touching AI. That's how pervasive it's going to be in the next year.

10:57

And to be clear, for people who are saying "I want to exit the ride, I want to stop a tech career": I have respect for that. I know folks who have said, "I've had my career. I think I'm done. I want to open a bookshop. I want to start doing carpentry." That's fine. That's great.

11:16

That is something you can choose to do intentionally, and I think it's a much more productive choice than staying in an industry that is converging on AI while trying to resist that. That's just not going to go well, and it's going to make everybody, including you, kind of miserable. So if you really think AI is not for you, I think the best thing you can do is pick the alternate career path that takes you away from the screen, because if you're going to stay in fields touched by AI, which is increasingly everything to do with a computer, you're going to have to engage.

11:44

I want to close by giving you some encouragement. It is easy to look at this and be doom and gloom. It is easy to say, "I did not make the choice for AI." I would argue none of us did. The industry as a whole made that choice, and we are all living through this moment together. We did not choose to compress timelines. We did not choose to compress career paths. It's happening for us. And I have seen over and over again what happens when people recognize that and choose to say: even though I didn't get to pick this, I am going to engage with AI with curiosity.

12:18

I'm going to choose to engage AI and learn to ride the bike. I'm going to lean in as far as I can, even if I'm not quite sure. That choice is going to get you so much farther. It's going to get you an accelerated rate of learning. You're going to be less overwhelmed. Curiosity literally opens up your brain, and we need openness to this AI world if we want to shape it in a way that works for us.

12:40

And I've seen dozens and dozens of examples where people have chosen that positive path. They've chosen to lean in, in widely differing fields: healthcare, tech, finance, engineering, product. I've even seen folks lean in on small-town community building with AI. And without exception, that choice to positively lean in has taken them farther.

13:05

So if I can leave you with anything in the middle of a timeline that feels increasingly wild and unpredictable, it's an invitation to get on the bike with AI. You've got to go faster. And you've got to believe that if you lean in and try, if you jump in and say, "All right, I'm going to try something new. I'm going to try Claude Code," or whatever is new to you, "I'm going to try Lovable. I'm going to try a different way of working with my chatbot." Great. Then do the next thing. Then lean in a little farther, and a little farther, and you're going to go faster and faster. It's going to feel steadier over time, because you're going to pick up how AI works across all of these systems in your unconscious brain. The patterns will start to solidify, and you're basically learning to work with this new piece of technology in a way that feels very stable over time. So if I can encourage you with anything, it's that going faster is safer and less scary with AI than going slower. Cheers.

Interactive Summary

The video discusses the concept of AI "collapsing" futures, not in a destructive sense, but by compressing multiple dimensions of our work lives into a single trajectory. This involves two main collapses: horizontal and temporal. The horizontal collapse sees traditionally distinct career paths (engineer, marketer, etc.) converging into a meta-competency of orchestrating AI agents; domain expertise alone will not be sufficient by late 2026, and the ability to leverage AI is key. The temporal collapse means career progression timelines, once measured in years, are now compressing into months due to the rapid improvement of AI capabilities. The speaker emphasizes that now is what matters, not long-term plans, and that preparation means active engagement with AI, likening it to learning to ride a horse or swim: it is an experiential process.

Gartner predicts a massive increase in enterprise applications integrating AI agents by 2026. The differentiation in future roles will be in applying existing skills (marketing, engineering, finance) in an AI-agent-shaped way, moving from specific tasks to prompting and leveraging AI for analysis and execution. This pattern repeats across all functions, fundamentally changing jobs. The core skill becomes "software-shaped intent": understanding how agents work within their ecosystem (toolset, memory, workflow) and how to interact with data effectively. Expertise doesn't disappear but becomes foundational, requiring continuous updating as AI progress accelerates and expertise depreciates rapidly. The speaker highlights massive investment in AI by major tech companies as evidence of its inevitable impact.

Resisting AI or waiting for it to mature is futile; continuous engagement and learning are crucial. The bike-riding analogy captures this: going faster on a bike increases stability, and similarly, embracing AI quickly is safer and more effective than going slowly and hesitantly. The key is developing the meta-skill of continuous learning and adaptation, as the half-life of specific AI knowledge is short but the habit of learning is durable. The video encourages leaning into AI with curiosity, viewing it as an opportunity for accelerated learning and adaptation, leading to greater stability and effectiveness.
