The Job Market Split Nobody's Talking About (It's Already Started). Here's What to Do About It.

Transcript

0:00

Code is about to cost nothing, and knowing what to build is about to cost you everything. Last July, an AI coding agent deleted SaaStr's entire production database during an explicit code freeze and then fabricated thousands of fake records to cover its tracks. Jason Lemkin had given the agent all-caps instructions (I guess that's how we prompt now) not to make any changes. The agent made changes anyway, destroyed the data, and lied about it, during a code freeze. Fortune covered this. The Register covered it. It made headlines because an agent ignored a clear spec.

0:33

But people fixate on the wrong part of that failure. It's the story of the disobedient machine, the Terminator. The failure mode that actually matters is quieter, and it's vastly more expensive: agents that execute specifications flawlessly. They build exactly what was asked for, and then what was asked for is wrong. A CodeRabbit analysis of 470 GitHub pull requests found that AI-generated code produces 1.7 times more logic issues than human-written code. Not syntax errors, not formatting problems, but the code itself doing the wrong thing correctly. Google's DORA report tracked a 9% climb in bug rates correlating with a 90% increase in AI adoption, alongside a 91% increase in code review time. The code ships faster, but it's often more wrong, and it's difficult to catch until production.

1:32

AWS noticed this, and they launched Kiro, a developer environment whose core innovation isn't faster code generation. It's forcing developers to write a testable specification before any code gets generated: tell me what it's going to be by telling me how you'll test it. Amazon, a company that profits when you ship faster, decided the most valuable thing it could do is slow you down and make you define what you want, because error rates were that concerning when developers did not write tests.
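To make "spec first, code second" concrete, here is a minimal sketch of what a testable specification can look like. This is not Kiro's actual format; it's a generic Python illustration, and the `invoices` module it tests is hypothetical.

```python
# Acceptance tests written BEFORE any implementation exists. The spec is
# the tests: an agent (or a human) is only done when they pass.
# `invoices.parse_invoice` is a hypothetical module the agent must produce.
import pytest
from invoices import parse_invoice

def test_total_is_extracted_in_cents():
    # Units are pinned down, so "return the total" cannot be misread.
    result = parse_invoice("ACME Corp\nTotal: $1,234.56\n")
    assert result.total_cents == 123456

def test_negative_totals_are_rejected():
    # Failure behavior is specified, not left to the agent's imagination.
    with pytest.raises(ValueError):
        parse_invoice("ACME Corp\nTotal: $-10.00\n")
```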

2:00

This tells you everything about where the bottleneck in code is moving in the age of AI, and implicitly where the bottleneck in jobs is moving. The marginal cost of producing software is collapsing to zero. 90% of Claude Code was written by Claude Code itself, and that number is going to be 100% very, very shortly. Three people at StrongDM built what would have required a 10-person team 18 months ago. Cursor generates $16 million per employee partly because they figured out AI code generation. So the capability curve is steepening, not leveling off, and if you're reasoning from what AI could do in 2024 or 2025, you're working from an expired map.

2:44

But the cost of not knowing what to build, of specifying badly or vaguely or not at all, is compounding much faster than production cost is falling, which is a huge statement, because production cost is falling really fast. Yet every framework people reach for to understand this moment tends to ask the wrong question: whether AI replaces workers and jobs. When the cost of production is collapsing like this, the more useful question is: what is the new bottleneck, where will jobs be useful, where do humans have to get really clear? And guess what? It's intent. It's those specifications that engineers struggle to write. All of knowledge work is becoming an exercise in specifying intent. This video is about what happens when those engineering mental models spread to the rest of the job market, and we all have to think about where our value moves when it is no longer in doing the work.

3:45

I think one place to start in understanding jobs and AI is the thinking of François Chollet, the creator of Keras and one of the sharpest thinkers in machine learning. He made an argument that has become the default framework for understanding AI and jobs. He pointed to translation, a profession where AI can perform 100% of the core task and has been able to since 2023. Translators did not disappear. Employment has held roughly stable since. The work has shifted over the last couple of years from doing the translation yourself to supervising AI output. Now, payment rates have dropped, freelancers got cut first, and there are new hiring freezes, so there is real impact on jobs. And yet, despite all of that, the Bureau of Labor Statistics still projects modest growth for the translation job category. Chollet's claim is that software will follow the same pattern: more programmers will be needed in five years, not fewer. The jobs will transform rather than vanish.

4:43

I think the model is useful for thinking, but it's also stuck on the wrong question yet again. "Will software engineers keep their jobs?" is not the most interesting question when the cost of production is collapsing toward zero, because for so many of us as engineers, frankly for so many of us as knowledge workers, all of our work has been production. If you take the cost of production to zero, "will we keep our jobs" is the wrong way to think about it. The real question is: what is our job going to turn into? So the interesting question about job transformation, not just for engineers but for everybody, is what is becoming scarce, and therefore valuable, when doing the work, when building, is no longer the hard part.

5:26

Chollet doesn't have a framework for that, because translation's capability plateau gave the market time to find a stable answer for the translation job category. AI coding, and by extension AI knowledge work, is on the steepest part of the curve right now. I've said before that benchmarks are fairly easy to game, and I'm not the only person to say so. But the production evidence of coding capability gains is so unambiguous that you don't need a benchmark to believe it. Look at Cursor's ARR and how fast they're growing. Look at Lovable. Look at the ability to have agents review the code of agents. Translation had a couple of years to adjust, because the technology essentially solved translation and then you had to figure out what to do with it. Software may not get the same runway, because the depth of what's changing is far more profound and the pace is even faster. We need a different model to understand how jobs in software and knowledge work are going to change.

6:27

First, when cost goes to zero, demand goes to infinity. Every time in economic history that the marginal cost of production has collapsed in a domain, demand has exploded. Desktop publishing did not eliminate graphic designers; it created a universe of design work that could not have existed at any prior price point. Cameras in all of our phones created a universe of photography that did not exist when cameras were expensive and only a few people had them. Mobile didn't replace developers; it multiplied the number of applications the world needed by orders of magnitude. Software is about to go through the same expansion, except bigger. Right now, most of the world cannot afford custom software. Regional hospitals run on spreadsheets. Small manufacturers track inventory by hand. School districts use tools designed for organizations ten times their size or more, and some use nothing at all. The total addressable market for software is not constrained by demand, because demand is functionally infinite; it's constrained by the cost to produce. We are underbuilt on software even after 40 or 50 years of software engineering. When the cost of production collapses, the constraint that keeps us underbuilt lifts forever. Every business process currently running on email, spreadsheets, and phone calls is up for grabs. Every workflow that was never worth automating at a $200-an-hour engineering rate becomes worth automating at two bucks in API calls. The market for software is not going to contract; it is going to explode. And that is the best argument for why total software employment likely grows rather than shrinks. Chollet is right about that. The demand for people who make software happen, however they make it happen (it may not be traditional coding; it won't be), has never been higher, and the cost collapse will push it higher still.

8:24

because we can wave our hands and say

8:26

Jven's paradox means employment grows

8:28

does not mean your specific job is safe.

8:31

And understanding the difference

8:32

requires understanding what happens when

8:34

the constraint shifts from production to

8:36

specification. So let's talk a little

8:38

So let's talk a little more about the specification bottleneck. The majority of software projects that fail don't fail because of bad engineering. They fail because nobody specified the correct thing to build. "Make it user friendly" is not a specification. "It's like Uber for dog walkers" is not a specification either; it's a vibes pitch. The entire discipline of software engineering (agile, sprint planning, and so on) evolved as a way of forcing specification out of vague human language. We need mechanisms for converting vague human intent into instructions precise enough that code can be written against them.
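As one hedged illustration of that conversion, here is what moving "Uber for dog walkers" toward a real specification might look like. Every entity, rule, and threshold below is invented for the example.

```python
# A vibes pitch names a feeling; a spec names checkable behavior.
# All rules and thresholds here are illustrative assumptions.
SPEC = {
    "matching": {
        "inputs": ["dog_size", "pickup_window", "zip_code"],
        "rule": "offer only walkers whose service area covers zip_code",
        "acceptance": "95% of requests matched within 10 minutes in the test set",
    },
    "payment": {
        "rule": "charge only after both parties mark the walk complete",
        "acceptance": "zero charges on cancelled walks in the test suite",
    },
}
```

The point is not the format; it's that every line is something an agent's output can be checked against.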

9:15

That vagueness problem has always been there. What's new is that the friction of implementation is changing. When building something took six months and, at best, half a million dollars, organizations were forced to think really carefully about what they wanted. The cost of building acted as a filter on the quality of the spec. Take away the cost of building, as AI is doing, and that filter disappears. The incentive to specify carefully just evaporated in all of your orgs, and the cost of specifying badly will keep compounding faster than ever, because now you can build the wrong thing at unprecedented speed and scale. A vibe-coded app can take an afternoon and 20 bucks in API calls, and if the spec is wrong, you did not save six months. You wasted an afternoon, and perhaps launched something that will harm customers, because the spec was never right.

10:10

This is the inversion we need to pay more attention to, because it tells us a lot about where jobs are headed. The scarce resource in software is not the ability to write code. It's the ability to define what the code should do. And funnily enough, that is part of why knowledge work is starting to collapse into one blurry job family: the ability to specify is something we all need, not just engineers. The person who can take a vague business need and translate it into a spec is the new center of gravity in the organization, whatever their title is. It's not the person who writes the code, because that work is disappearing. It's not the person who reviews the pull requests, because increasingly that's going to be an agent. It's the person with enough precision to direct machines and enough judgment to know whether the result actually solves the problem for customers.

11:02

Two classes of engineer are emerging right now, and engineering is just the tip of the iceberg; the same split is coming for the rest of knowledge work. Those two classes tell us where jobs in software are headed. The first class of engineer drives high-value tokens. These people specify precisely. They architect systems. They manage agent fleets, plural, not a single agent. They evaluate output against intention consistently. They hold the entire product in their heads: what it should do, who it serves, why the trade-offs are correct and why they matter. And they use AI to execute at a scale that was previously impossible.

11:40

One thing I want you to consider: if we are underbuilt on software, all of our working mechanisms assume an underbuilt software footprint. Imagine a world where your engineers have to hold a 10x bigger software footprint in their heads because AI has enabled that scale. You can say yes to everything the customer wants with AI, but are your engineers and your product managers ready to hold that level of abstraction? Because if you can specify well enough and orchestrate agents effectively, the number of things you can simultaneously build and maintain is bounded only by your judgment and attention, not by the hours in the day.
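As a purely illustrative sketch of that loop (specify, dispatch to agents, evaluate against acceptance checks), consider the structure below. Nothing here is a real agent API; `run_agent` is a placeholder for whatever harness you use.

```python
# Orchestration in miniature: many tasks in flight, every result checked
# against its spec, and human judgment spent only on the failures.
def run_agent(task_spec: dict) -> str:
    # Placeholder: hand the spec to your coding agent and return its output.
    return f"draft output for {task_spec['name']}"

def orchestrate(task_specs: list[dict]) -> list[dict]:
    """Dispatch every task; return only the failures for human review."""
    failures = []
    for spec in task_specs:
        output = run_agent(spec)
        if not all(check(output) for check in spec["acceptance_checks"]):
            failures.append({"task": spec["name"], "output": output})
    return failures

# Illustrative tasks with machine-checkable acceptance checks.
tasks = [
    {"name": "csv-export", "acceptance_checks": [lambda out: "csv" in out]},
    {"name": "rate-limit", "acceptance_checks": [lambda out: "token bucket" in out]},
]
print(orchestrate(tasks))  # only "rate-limit" comes back for human attention
```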

12:19

These people are going to command extraordinary pricing power. The revenue-per-employee data is off the charts. I mentioned Cursor at $16 million. Well, Midjourney is at $200 million with just 11 people. Lovable is past $100 million, and will pass $200 million soon. These are not just outliers. This is the equilibrium driven by extremely high-value, AI-native workers. When one person with the right skills and the right agent infrastructure can produce what a 20-person team produced a couple of years ago, that person captures most of the value that used to be distributed across the team.

12:56

The second class of knowledge worker, the second class of engineer, operates at very low leverage, and that leverage is degrading: single-agent workflows, copilot-style autocomplete, AI-assisted but not AI-directed. These engineers and knowledge workers are doing the same work they've always done, faster and with better tooling, and they are being commoditized. I need to be honest with you: the signals are already in the data. Entry-level postings are down something like two-thirds. New graduates are at 7% of hires, a historic low. 70% of hiring managers say AI can do the job of interns. The junior pipeline isn't narrowing at the intake; it's collapsing, because the low-leverage work juniors used to do is the work AI handles first and best. And I want to be really clear here: I have personally seen that this is not just a junior problem. Mid-level and senior engineers who stick with the way they've always worked are in this exact same boat.

13:56

Now it's time to turn to one of the most popular responses to the jobs debate: the solopreneur thesis, the idea that everyone becomes, in effect, a solo capitalist and, as a company of one, unlocks tremendous value. That sounds great, and I think it captures something real about the first class of developer and knowledge worker, but not the second. The ceiling for what a single talented person can build has absolutely gone through the roof. But it's a thesis that only 10 to 20% of the knowledge workforce is positioned to take advantage of today. You need entrepreneurial instincts, deep domain expertise, a stomach for risk, and the ability to ramp up on AI tools quickly. If that's you, I love it: the world is your oyster, and you have never had a better chance to build cool stuff. But for the other 80%, the future looks like smaller teams with higher expectations and compressed unit economics. It's not a revolution in autonomy for them, and it's not a revolution in autonomy for you if you are building with the same production model. It's just more pressure on what it takes to stay employed.

15:08

So what is the distinction between the people in that top 10 to 20%, for whom the world is their oyster and who can drive high value through a company or run their own, and the people who can't yet? I think it comes down to the economic output generated per unit of human judgment. That's the bifurcation in a sentence. And the gap between those two classes will widen as agent capability increases, because agents force-multiply excellent human specification and judgment. That is a learnable skill, by the way. I don't believe this is written in stone. I am describing a percentage divide I have observed in the real world, not something I believe is inevitable. You can learn specification and judgment. It's absolutely doable; I have exercises for it. But I don't want to kid you: if you're a leader, your teams need to do it, and individuals need to do it. The companies that can move from 10 to 20% of their workforce in this position to 30 or 40% are going to be much, much more competitive, because of the nonlinear value of learning judgment and specification as skills.

16:20

In the age of AI, software engineers are just the canary, and the coal mine is much bigger. Knowledge work (analysis, consulting, project management) runs on the same substrate AI is already transforming in software. It happens on computers. It produces digital outputs. It follows processes that, however loosely, can be described, formalized, and validated. Now, I know the standard objection is validation. Software has clear built-in quality signals: code compiles or it doesn't. Knowledge work is much vaguer. That objection doesn't hold in 2026. Two forces are converging to break that assumption.

17:00

First, a huge fraction of knowledge work exists because large organizations need to manage themselves: the reports, the slide decks, the status updates. This is the connective tissue of coordination, the nervous system a large company needs to function. When organizations get leaner, which is one of the things I've been talking about a lot, and AI is making them leaner across the board at big companies, that coordination work doesn't transform with AI. It just gets deleted. Brooks's law ends up working in reverse. Brooks's law is about how hard it is to coordinate large numbers of people: the number of communication channels grows quadratically with headcount. Cut the number of people and make your team leaner, and you get outsized gains in the ability to coordinate efficiently. The work was never valuable in itself. It was valuable because the organization was too big to function without it, and the organization was big because it needed a lot of production labor to sustain the value. Simplify the organization, make it leaner, and all of that coordination work can be deleted.
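The arithmetic behind that reversal is simple: pairwise communication channels among n people grow as n(n-1)/2, so headcount cuts collapse coordination load far faster than they collapse capacity.

```python
# Communication channels among n people: n * (n - 1) / 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (50, 10, 3):
    print(f"{n} people -> {channels(n)} channels")
# 50 -> 1225, 10 -> 45, 3 -> 3: a 5x headcount cut removes roughly 96%
# of the channels. That is Brooks's law running in reverse.
```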

18:02

Second, the knowledge work that does remain, the analysis, the strategy, the judgment calls, can be made more verifiable. Consider what's already happening in financial services. A portfolio strategy used to live in a deck and a set of quarterly conversations. Now it lives in a model with defined inputs, testable assumptions, and measurable outputs. The strategy has effectively become a specification. And once it's a spec, you can validate it against data, run scenarios against it, and measure whether the execution of that financial strategy matched your intent. Legal is following the same path. Contract review is becoming pattern matching against structured playbooks. Compliance is becoming continuous automated audits against codified rules. Marketing is becoming experimental design with a measurable conversion funnel. The mechanism is straightforward: take knowledge-work outputs that used to be evaluated by vibes, structure them as a set of testable claims or measurable specs, and suddenly they are subject to the same quality signals that make software verifiable.
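Here is a minimal sketch of what "the strategy has become a specification" can mean in code. The constraints and thresholds are invented for illustration, not a real investment policy.

```python
# A strategy as a spec: defined inputs, testable assumptions, measurable
# outputs. Every number here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class StrategySpec:
    max_position: float = 0.05      # input constraint: no holding above 5%
    assumed_drawdown: float = 0.15  # testable assumption the plan was approved under

    def violations(self, weights: dict[str, float], drawdown: float) -> list[str]:
        """Measure execution against intent; return every divergence."""
        issues = [f"{t} at {w:.1%} exceeds the {self.max_position:.0%} cap"
                  for t, w in weights.items() if w > self.max_position]
        if drawdown > self.assumed_drawdown:
            issues.append("observed drawdown broke the approved assumption")
        return issues

# Once it's a spec, checking the quarter is a function call, not a meeting.
print(StrategySpec().violations({"ACME": 0.08, "GLOBEX": 0.03}, drawdown=0.11))
```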

19:03

Now, I'm not saying every piece of knowledge work can be automated in exactly this way tomorrow. But that frontier moves forward faster every year, and the work that resists structuring tends to be exactly the high-judgment, high-context work that only the most capable people were doing anyway. So knowledge work is converging on software, not because consultants will all learn to code, but because the underlying cognitive task is the same: translating vague human intent into instructions precise enough that human or machine systems can execute them. The person specifying a product feature and the person specifying a business strategy are doing the same work at different levels of abstraction. As the tools for structuring, testing, and validating knowledge work improve, the distinction between those two is going to collapse very quickly, and with it the insulation that non-engineering knowledge workers might assume they have. Guys, we're all in the same boat as engineering now. It's not a different boat. We're all working with AI agents.

20:07

Now, obviously, if knowledge work is converging the way I'm describing, the practical question from a jobs perspective is: what do you do about it? The answer is obviously not "learn to code." That's the wrong advice, and it has been for a while. Engineers have spent 50 years developing disciplines around a problem that knowledge workers are only now running into, and we can learn from the engineering discipline how to be precise enough that a system can execute our intent.

20:32

One massive unlock for the rest of knowledge workers is simply learning some of the basics good engineers know. First, hit the right level of abstraction, and learn to spec your work the way engineers spec features. A product manager who writes "improve the onboarding flow" is operating at the wrong level of abstraction and producing the same category of failure as a developer who writes "just make it better" or "follow this prompt correctly." Engineers learned, painfully, to write good acceptance criteria: specific, testable conditions that define done. Guess what? We all need to do that as we start working with agents. This is becoming one of the single most transferable skills in business, and you should start practicing writing specs today. And by the way, if you're a leader listening to this, that goes for you too. Your strategy needs to be spec-able: you should be able to state its success criteria. I have seen a lot of terrible strategy board decks in my time, and I think this would generally improve them.

21:31

Second major principle: learn to work with compute, not just learn about compute. Don't just read about AI. A high-value AI worker, a high-value engineer who knows how to use tokens well, is not valuable because they know Python or JavaScript or Rust. They're valuable because they understand what AI can and cannot do, how to structure a task so an agent can get it done, and how to evaluate whether what the agent did was correct. Knowledge workers are going to need that same literacy. If you're a financial analyst, you should be running your models through AI and learning where they fail, which assumptions they miss, which edge cases they ignore. You should be testing contract-review agents against your own judgment. The goal is not a one-to-one replacement of your judgment. It's to understand the machine well enough to direct it, guide it, guardrail it, and catch it when it makes mistakes.
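Here's a hedged sketch of what testing an agent against your own judgment can look like: a tiny eval harness comparing the clauses a contract-review agent flagged to the ones you flagged yourself. The clause IDs are invented.

```python
# A tiny eval harness: compare the agent's flags to your own labels and
# learn its blind spots. Clause IDs are illustrative.
def evaluate(agent_flags: set[str], my_flags: set[str]) -> dict[str, object]:
    return {
        "recall": len(my_flags & agent_flags) / len(my_flags) if my_flags else 1.0,
        "missed": sorted(my_flags - agent_flags),    # risks it failed to catch
        "spurious": sorted(agent_flags - my_flags),  # noise you must wade through
    }

print(evaluate(agent_flags={"7.2", "9.1"}, my_flags={"7.2", "11.4"}))
# -> recall 0.5, missed ['11.4']: now you know where to guardrail it.
```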

22:25

make your outputs verifiable. I know

22:28

some people are running the other way

22:30

here. There are knowledge workers who

22:31

are deliberately sabotaging AI on their

22:34

teams because they don't feel like

22:35

they'll have jobs. That is a fault of

22:37

leadership. Leadership needs to give

22:39

people the support to lean in here

22:42

because you will not be able to automate

22:44

very quickly if you cannot figure out

22:46

how to make the dirty details of your

22:48

day-to-day work verifiable. Engineers

22:51

write tests. A function either returns

22:53

the right value or it doesn't. Knowledge

22:55

workers need to develop the equivalent

22:57

structured outputs with built-in

22:59

validation. You should have data sources

23:01

on your market analysis. A project plan

23:04

should include measurable milestones.

23:06

And funny enough, we've been trying to

23:08

say this for a while as knowledge

23:09

workers. All of the eye rolling around

23:11

OKRs is a little bit an early preview of

23:15

making your outputs more verifiable.

23:17

Except now we really have to do it.
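As a minimal sketch of a structured output with built-in validation, imagine a market analysis where every claim must carry a source, so an unverifiable draft fails the check instead of shipping. The field names are invented.

```python
# A deliverable with validation built in: every claim needs a source,
# the way a function needs to return the right value.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    source: str  # URL or dataset ID; empty means unverifiable

@dataclass
class MarketAnalysis:
    title: str
    claims: list[Claim] = field(default_factory=list)

    def unverified(self) -> list[str]:
        return [c.statement for c in self.claims if not c.source.strip()]

analysis = MarketAnalysis("Q3 segment review", [
    Claim("Segment A grew 12% YoY", "internal: revenue_2025.parquet"),
    Claim("Competitors are exiting the region", ""),  # fails the check
])
print(analysis.unverified())  # the weak claim is surfaced, not shipped
```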

23:19

Next, learn to think in systems, not documents. For almost everybody who isn't an engineer, the deliverable of work used to be a document of some sort. Now you need to think in terms of the larger system your work is driving. A deck requires a person to produce it every quarter. A system requires a person to specify it once and maintain it when conditions change. Knowledge workers who think in systems (what are the inputs? what are the rules? what triggers action? how do you know it's working?) are going to build things that compound, even outside of engineering. Knowledge workers who think in documents will just use AI to generate the same old stuff faster. We need to teach systems thinking as a core skill for every knowledge worker.
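A hedged sketch of the document-to-system shift: instead of producing a quarterly inventory deck, specify the inputs, the rule, the trigger, and the health check once. Everything named below is invented for illustration.

```python
# A "system" version of a recurring report: inputs, a rule, a trigger,
# and a way to know it's working. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ReorderSystem:
    reorder_point: int = 20  # the rule's threshold

    def actions(self, stock_levels: dict[str, int]) -> list[str]:
        """Inputs: current stock. Output: what the rule triggers."""
        return [f"reorder {sku}" for sku, qty in stock_levels.items()
                if qty < self.reorder_point]

    def healthy(self, hours_since_last_run: float) -> bool:
        """The 'how do you know it's working' check."""
        return hours_since_last_run < 24

system = ReorderSystem()
print(system.actions({"widget-a": 12, "widget-b": 90}))  # ['reorder widget-a']
print(system.healthy(hours_since_last_run=3.5))          # True
```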

24:07

Finally, audit your role for coordination overhead. If your honest assessment is that most of your work exists because your organization is complex enough, big enough, to require it (you align stakeholders, you translate between departments, you produce reports that synthesize information from lots of teams), you are really exposed in the age of AI. Not because you're bad at your job, but because the organizational complexity that justifies your job is exactly what AI makes unnecessary. The question to ask is this: if my company were half or a quarter of its current size, would my role exist? If the answer is no, the value you provide is likely tied to coordination, and coordination is the first casualty in leaner organizations. OpenAI is already making its internal systems so transparent that knowledge workers don't have to go query Slack messages or hunt for context from a meeting. They can hit the internal data system with an agent-driven search and get exactly what they need from 50 or 60 different stakeholders. That is where organizations are starting to move: you don't need a meeting to get coordinated. You run an agentic search and the data is in front of you.

25:28

So the move in that situation is not to panic; it's to migrate toward work that creates direct value. Look for ways you can ring the cash register. How can you build customer-facing, revenue-generating products? How can you think about your work in terms of driving the direction of the business, or producing the data that drives the direction of the business? There are lots of ways to do this that don't require being a product manager. Any complex business has operational arms that still have to exist; finance isn't going anywhere. Look for how you can be more directly value-producing in those areas.

26:02

None of this requires a computer science degree. All of it requires adopting an engineering mindset, and knowledge work, to be honest, has resisted that for decades. I have lost track of the number of conversations I've had over the years with marketers and customer-service folks who said, "Engineering is just too hard. I couldn't be that precise." I've got bad news: we all need to be that precise now. We all need to be testable. We all need to be falsifiable. We all need to understand our tools well enough to know when they're wrong.

26:32

from the details of our day-to-day jobs,

26:35

what does the larger productivity and

26:37

jobs picture look like? Where is this

26:40

conflict around jobs and AI playing out

26:43

in the real world? We are in the trough

26:45

of a J curve right now. Census Bureau

26:48

research on manufacturing has found AI

26:51

deployment initially reduces

26:53

productivity by an average of 1.3

26:55

percentage points. I bet you didn't

26:57

expect me to say that. With some firms

26:59

dropping as much as 60 points before

27:02

they start to recover. The METR study

27:04

that I shared about earlier this week

27:06

talked about the idea that there are

27:08

dark factories where AI agents not only

27:11

produce all the code but review all the

27:13

code. That same study found that

27:15

experienced developers were 19% slower

27:19

with AI tools despite believing they

27:21

were 24% faster. They just didn't

27:23

understand. This is the J curve of

27:25

technology adoption. Productivity dips

27:28

before it surges and we are in the dip.

27:31

What's interesting is that, because AI is moving so fast and influencing the economy so widely, we know this is a J curve and not a permanent degradation: we can literally see the companies that have figured this out and gotten to massive multiples. We don't need a hypothetical Midjourney or a hypothetical Cursor. The employees at those organizations really are that productive, and you can see it in the numbers.

27:56

So what comes after, for all the rest of us who don't work at Midjourney or Cursor? Given the pace of AI capability scaling (agents going from bug fixes to multi-hour sustained engineering in under a year; three-person teams shipping what 10- or 20-person teams shipped last year), my bet is that this entire cycle compresses. That has been the story of this cycle. The software J curve, the adoption cost you pay before you get fluent, is going to compress into something like 18 to 24 months even for the rest of the economy, even for non-AI-native companies, and early adopters will already be past the bottom. The companies that figure out spec-driven development and agent orchestration don't just get to be more efficient; they get to operate at speeds and productivity ratios that make traditional organizations look dead in the water. A 10 to 80x revenue-per-employee gap opens up. And one thing that matters here is that the J curve really is shaped like a J: once you're past the bottom, you accelerate really, really quickly, because agent gains start to multiply cleanly across your business.

29:12

So if we look at the broad arc of history, what historical analog actually makes sense here? The parallel that fits best is not the story of ATMs and bank tellers, and it's not calculators and mathematicians. It's the story of telephone operators in the 1920s. Those jobs did not disappear overnight. But the people who held them, predominantly women and working-class at the time, found themselves a decade later in lower-paying occupations or out of the workforce entirely. Overall employment grew. New categories of work emerged. But for the individuals in the crosshairs, that was cold comfort; it did not matter for those women. I think we're in a similar moment, but we have more tools to support each other, and it's incumbent upon leadership to do a better job than we did in the 1920s.

30:03

The economy is going to create more software than ever, more systems running on computers than ever, probably two or three orders of magnitude more than we have today. Computers will remain more central to human society than at any point in history. That part of the story is genuinely, structurally optimistic, because compute creates leverage and leverage creates abundance for us. But more jobs in the economy and your individual job are very different things. The bifurcation is already in the data. AI-native companies are exploding and picking up pieces of the economic pie that traditional companies are deserting. That is why you see the collapse in SaaS stocks over the past couple of weeks. The gap between engineers who can drive high-value tokens and those who can't is literally $285 billion, which is the amount Claude was able to wipe off traditional SaaS stocks by releasing a 200-line prompt for legal work. I did a whole video on that. The point is not an individual stock drop; whether it recovers is not my problem right now. The point is that we need a much more intentional conversation to ensure that the 70 or 80% of knowledge workers who are not pushing high-value tokens right now get the skills to do so.

31:16

How can we think about the distribution of our teams and see each person as someone who can level up in their agent fluency, in their ability to write specs and understand intent? Because that is the new skill that matters, and there is no reason we have to leave people behind. It absolutely is a skill issue, and it's a learnable skill. This transition is going to happen whether we prepare for it and support our teams or not. The only variables are which side of the bifurcation we end up on; whether we as company leaders lean in and support our teams through the transition; and whether we as individuals, trying our best to get through this AI transition, can learn to give clear intent, goals, and constraints in our work rather than doing the work itself. And that window is closing faster, because AI agent capability gains keep accelerating. The technology is not going to wait for organizations and individuals to catch up.

32:22

We have to lean in and help each other. If you are on a team and you understand what I'm saying, it is on you to help your buddies on the team understand it better. If you're a leader, it is on you to build systems that support everyone in your org. And if you are stuck, it is on you to take at least one step toward understanding what it means to give an agent a job and watch it do the work. It might be as simple as trying Claude in Excel and watching it create something; maybe that's the simplest way to start. I have some other exercises, at a range of scales, in the Substack.

33:00

But the larger point is that you need to believe there is hope at the end of the tunnel, and that the company you're at and the job you're doing are something you can pivot, if you treat it as tackling a larger problem and specifying where your agent needs to go to create value. That's on us to do. The agent capability is going to be there. It is on us to specify enough of what we want that we can create tremendous value with all of this compute capability. We need better strategies. We need to think bigger. It is actually rational now to think about boiling the ocean. As companies, as leaders, as product managers, we were always told: don't boil the ocean in your strategy. Well, if the cost of producing software is falling to zero, why not think big? Why not think courageously? Why not think about producing more value? I think that is a bold goal that can catalyze a lot of the transformational change I'm talking about. It can catalyze teams to work more leanly. It can catalyze individuals to stretch, grow, and define what agents do for them, so they can do more and lean further into the direct production of value. That is where we need to go. That is why the future of jobs is not about the production of code or the production of work. It is about the good judgment to specify where agents are going. Best of luck.

Interactive Summary

The video discusses the evolving landscape of software development and knowledge work in the age of AI. It highlights that while AI can generate code efficiently, the real challenge lies in accurately specifying what needs to be built. The cost of producing software is decreasing, shifting the bottleneck to the cost of specifying intent. The speaker contrasts two classes of workers: those who can precisely specify and orchestrate AI agents, commanding high value, and those whose work is being commoditized by AI. The video emphasizes the growing importance of skills like clear specification, system thinking, and making outputs verifiable, drawing parallels to historical technological shifts. It suggests that knowledge work, like software engineering, is converging towards a similar need for precision and clear intent, and that individuals and organizations must adapt to thrive.
