
How Fast Will A.I. Agents Rip Through the Economy? | The Ezra Klein Show


Transcript


0:00

The thing about covering A.I. over the past few years is that

0:04

we’re typically talking about the future.

0:06

Every new model, impressive as it was,

0:08

seemed like proof of concept for the models

0:10

that would be coming soon.

0:11

The models that could actually do useful

0:13

work on their own reliably, the models that would actually

0:17

make jobs obsolete or new things possible.

0:21

What would those models mean for labor markets,

0:24

for our kids,

0:25

for our politics,

0:26

for our world?

0:28

I think that period in which we’re always talking about

0:30

the future, I think it’s over now.

0:33

Those models we were waiting for, the sci-fi

0:35

sounding models that could program on their own

0:37

and do so faster and better than most coders.

0:40

The models that could begin writing their own code

0:42

to improve themselves.

0:44

Those models are here now.

0:45

They’re here in Claude Code from Anthropic.

0:47

They’re here in Codex, from OpenAI.

0:50

They are shaking the stock market.

0:51

The S&P 500 Software Industry index

0:54

has fallen by 20%, wiping out billions of dollars in value.

0:58

"Look, I mean, I can tell you, in 25 years,

1:01

this structural sell off in software is unlike anything

1:04

I’ve ever seen."

1:05

"Software companies shrivel up and die."

1:09

"They’re going after all of SAS.

1:10

They’re going after all of software.

1:12

They’re going after all of labor,

1:13

all of white-collar work."

1:14

"And your job specifically," We’re at a new stage of A.I.

1:18

products.

1:19

I thought the way Sequoia, the venture capital firm, put it,

1:22

was actually pretty helpful.

1:24

The A.I. applications for 2023 and 2024 were talkers.

1:29

Some were very sophisticated conversationalists,

1:32

but their impact was limited.

1:34

The A.I. applications of 2026 and 2027 will be doers.

1:39

They are agents, plural.

1:40

They can work together.

1:41

They can oversee each other.

1:43

People are running swarms of these agents on their behalf,

1:46

whether that is making them at this stage more

1:49

productive or just busier.

1:50

I can’t quite tell, but it is now possible to have what

1:54

amounts to a team of incredibly fast,

1:56

although to be honest, somewhat peculiar software

1:58

engineers at your beck and call at all times.

2:02

Jack Clark is a co-founder and head of policy at Anthropic,

2:05

the company behind Claude and Claude Code.

2:07

And for years now, Clark has been tracking the capabilities

2:09

of different models in the weekly newsletter Import

2:11

A.I., which has been one of my key reads

2:14

for following developments in A.I.

2:15

So I want to see how he is reading this moment,

2:17

both how the technology is changing in his view,

2:20

and how policy needs to or can change in response.

2:25

As always, my email is ezrakleinshow@nytimes.com.

2:34

Jack Clark, welcome to the show. Thanks for having me on,

2:37

Ezra.

2:37

So I think a lot of people are familiar with A.I. chatbots,

2:43

but what is an A.I. agent?

2:45

The best way to think of it is like a language model

2:48

or a chatbot that can use tools and work

2:51

for you over time.

2:52

So when you talk to a chatbot, you’re there

2:54

in the conversation.

2:55

You’re going back and forth with it.

2:57

An agent is something where you can give it

2:59

some instruction and it goes away and does stuff for you,

3:01

kind of like working with a colleague.

3:03

So I’ve got an example where a few years ago I taught myself

3:08

some basic programming, and I built a species simulation

3:12

in my spare time that had predators and prey and roads

3:16

almost like a 2D strategy game.

3:19

Over Christmas, I asked Claude Code to just

3:22

implement this for me, and in about 10 minutes it went

3:26

and wrote not only a basic simulation,

3:29

but all of the different packages that it needed

3:31

and all of the visualization tools that it might need to be

3:34

prettier and better than the thing I’d written.

3:36

And what came back was something that would probably

3:38

take a skilled programmer several hours,

3:41

or maybe even days, because it was quite complicated

3:44

and the system just did it in a few minutes.

3:46

And it did that by not only being intelligent

3:50

about how to solve the task, but also creating and running

3:54

a range of subsystems that were working for it.

3:56

Other agents that worked on its behalf.

3:59

But what does that mean?

4:00

Like what does a multi-agent setup look like?

4:05

In the case of Claude Code, for me it’s having multiple

4:09

different tabs running multiple different agents.

4:12

But I’ve seen colleagues who write what you might think

4:15

of as a version of Claude that runs other Claudes.

4:18

And so they’re like, I’ve got my five agents and they’re

4:20

being minded over by this other agent,

4:22

which is monitoring what they do.

4:24

I think that that’s just going to become the norm.
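
To make that concrete, here is a minimal sketch of that pattern, several worker Claudes running in parallel with one supervisor call minding their output, using the public anthropic Python SDK. The model name, prompts, and subtasks are placeholder assumptions, not Anthropic’s internal setup.

```python
# A sketch of the "Claude that runs other Claudes" pattern: worker agents run
# in parallel on subtasks, and a supervisor call reviews their combined output.
# Model name, prompts, and subtasks are placeholders.
from concurrent.futures import ThreadPoolExecutor

import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-5"     # placeholder model name

def ask(prompt: str) -> str:
    """One self-contained model call; each worker agent is just a prompt."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

subtasks = [
    "Draft the data model for a 2D predator/prey simulation.",
    "Draft the per-tick update loop for predators and prey.",
    "Draft a simple visualization layer for the simulation.",
]

# Run the worker agents concurrently.
with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
    drafts = list(pool.map(ask, subtasks))

# The supervising agent "minds" the workers by checking their drafts.
review = ask(
    "You are supervising three worker agents. Here are their drafts:\n\n"
    + "\n\n---\n\n".join(drafts)
    + "\n\nList inconsistencies between the drafts and say what to fix."
)
print(review)
```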

4:28

So one thing I’ve been hearing and somewhat experiencing is

4:33

two very different categories of experience people have with

4:37

Claude Code, which is I cannot believe how easy this is

4:42

and everything just works.

4:44

And oh, this is a lot harder than I thought it would be.

4:47

And things keep breaking and I don’t really understand how

4:49

to fix them.

4:51

What accounts for being able to get Claude Code to produce

4:56

working software, versus it creating buggy,

5:02

often messed up things, and you don’t even know how

5:04

to talk it out of that?

5:06

I think so much of it is making

5:08

the mistake of thinking

5:09

Claude Code is like a knowledgeable person

5:11

versus an extremely literal person

5:14

who you can only talk to over the internet.

5:15

And I had this example myself where

5:18

when I did my first pass of writing the species

5:21

simulation with Claude Code, I just

5:23

asked it to do the thing in extremely crappy language

5:27

over the course of a paragraph,

5:28

and it produced some horribly buggy stuff

5:30

that just kind of worked.

5:31

What I then did is I just said to Claude, hey,

5:35

I’m going to write some software with Claude Code.

5:37

I want you to interview me about this software

5:40

I want to build, and turn that into a specification document

5:43

that I can give Claude Code.

5:45

And then that time it worked really,

5:47

really well because I’d structured the work to be

5:50

specific enough and detailed enough that the system could

5:52

work with it.

5:54

So often it’s just that.

5:56

It’s not just knowing what the task is,

5:58

because you and I could talk about a task to do and you

6:01

have intuition, you ask me probing questions,

6:03

all of this stuff. It’s making sure that you’ve set it up

6:07

so it’s a message in a bottle that you can chuck

6:09

into the thing, and it’ll go away and do a lot of work.

6:12

So that message better be extremely detailed and really

6:15

capture what you’re trying to do.
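
A minimal sketch of that two-phase, spec-first workflow, again using the public anthropic SDK with a placeholder model name; in practice the interview is an interactive back-and-forth, and the resulting spec is the detailed "message in a bottle" you hand to the coding agent.

```python
# A sketch of the spec-first workflow: phase one turns a vague idea into a
# detailed specification; phase two hands that spec to the coding agent.
# Model name and prompts are placeholders.
import anthropic  # pip install anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder model name

def ask(prompt: str) -> str:
    reply = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

# Phase 1: the interview. In a real session you would answer the model's
# questions one by one; here the answers are collapsed into a single blurb.
spec = ask(
    "I'm going to write some software with a coding agent. Interview me about "
    "the software I want to build, then turn the answers into a detailed "
    "specification document. My answers, in brief: a 2D predator/prey species "
    "simulation with roads and a visualization layer."
)

# Phase 2: the spec, not the vague paragraph, is what the agent receives.
with open("SPEC.md", "w") as f:
    f.write(spec)
print("Spec written; hand SPEC.md to your coding agent.")
```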

6:17

What were the breakthroughs over the past couple of years

6:20

that made that possible?

6:23

Mostly we just needed to make the A.I. systems smart enough

6:27

that when they made mistakes, they could spot that they’d

6:30

made a mistake and knew that they needed to do something

6:32

different.

6:33

So really what this came down to

6:35

was just making smarter systems and giving them

6:38

a bit of coaxing to help

6:41

them do useful stuff for you.

6:43

What does “smarter systems” mean here?

6:45

You’ll still hear the argument that these are just fancy

6:49

autocomplete machines.

6:50

They’re just predicting the next token.

6:53

A couple tokens make a word.

6:55

They don’t have understanding.

6:57

Smart or not smart:

6:58

this is not a relevant concept in that frame either.

7:04

What is missing in the word smart

7:06

or what is missing in that understanding?

7:08

What do you mean when you say make it smarter?

7:10

Smart here means we’ve made the A.I. systems have a broad

7:15

enough understanding of the world that they’ve started

7:17

to develop something that looks like intuition.

7:20

And you’ll see this where if they’re narrating

7:22

to themselves how they’re solving a task, they’ll say,

7:25

Jack asked me to go and find this particular research

7:27

paper, but when I look on arXiv, I don’t see it.

7:30

Maybe that’s because I’m in the wrong place.

7:32

I should look elsewhere.

7:33

You’re like, there you go.

7:34

You’ve got some intuitions for how to solve a problem.

7:36

Now, how do they develop that intuition? Previously,

7:42

the whole way you trained these A.I. systems

7:44

was on a huge amount of text,

7:46

just getting them to try and make predictions about it.

7:49

But in recent years, the rise of these so-called reasoning

7:51

systems means you’re now training them to not just make

7:55

predictions, but solve problems,

7:57

and that relies on them being put into environments ranging

8:01

from a spreadsheet to a calculator to scientific

8:04

software, using tools and figuring out how to do more

8:07

complicated things.

8:08

The result of that is you

8:12

have A.I. systems that have learned

8:14

what it means to solve a problem that

8:16

takes quite a while, and requires

8:18

them running into dead ends and needing

8:19

to reset themselves.

8:21

And that gives them this general intuition for problem

8:24

solving and working independently for you.

8:28

Do you still see these A.I. systems

8:30

as a souped up autocomplete, or do you

8:33

think that metaphor has lost its power?

8:36

I think we’ve moved beyond that.

8:38

And the way that I think of these systems

8:41

now is that they’re like little troublesome genies that

8:46

I can give instructions to and they’ll go and do things

8:49

for me.

8:49

But I still need to specify the instruction just right,

8:52

or else they might do something a little wrong.

8:54

So it’s very different to... I type into a thing.

8:57

It figures out a good answer.

8:59

That’s the end.

9:00

Now it’s a case of me summoning these little things

9:02

to go and do stuff for me, and I have to give them the right

9:05

instructions, because they’ll go away for quite some time

9:08

and do a whole range of actions.

9:10

But the autocomplete metaphor at least

9:13

had a perspective on what it was

9:15

these systems were doing, that it was a prediction model.

9:21

I have trouble with this because as my understanding

9:23

of the math and the reinforcement learning goes,

9:25

we’re still dealing with some kind of prediction model.

9:28

And on the other hand, when I use them,

9:30

it doesn’t feel that way to me.

9:32

It feels like there’s intuition there.

9:35

It feels like there’s a lot of context being brought to bear

9:37

to the extent that it’s a prediction model,

9:40

it doesn’t feel that different than saying I’m a prediction

9:44

model.

9:44

Now, I’m not saying you can’t trick it.

9:46

I’m not saying you can’t get beyond its measurements,

9:49

but I don’t think these are now just fancy autocomplete

9:53

systems.

9:54

And on the other hand, I’m not sure what metaphor makes

9:56

sense.

9:57

Genies I don’t like because then you just move straight

9:59

into mysticism.

10:00

Then you’ve just said they’re just a completely alternative

10:03

creature with vast powers.

10:05

How do you understand

10:07

these systems at Anthropic?

10:09

People always tell me you should talk about them

10:11

as being grown.

10:13

We grow, or you grow, A.I.s.

10:16

How do you explain what it is that they’re doing now?

10:20

It’s a good question.

10:22

And I think the answer is still hard to explain,

10:26

even for technologists who are close to this technology,

10:29

because we’ve taken this thing that could just predict

10:31

things, and we’ve given it the ability to take actions

10:35

in the world, but sometimes it does something deeply

10:37

unintuitive.

10:38

It’s like you’ve had a thing that has spent its entire life

10:41

living in a library and has never been outside.

10:44

And now you’ve unleashed it into the world,

10:46

and all it has are its book smarts.

10:48

But it doesn’t really have street smarts.

10:50

So when I conceptualize this stuff,

10:53

it’s really thinking of it as an extremely knowledgeable

10:57

kind of machine that has some amount

11:00

of autonomy, but is likely to get wildly confused in ways

11:04

that are unintuitive to me.

11:06

Maybe genie is the wrong term,

11:08

but it’s certainly more than just a static tool that

11:11

predicts things.

11:12

It has some additional intrinsic animation

11:16

to it, which makes it different.

11:18

There’s been for a long time this interest in the emergent

11:20

qualities, as the models get bigger,

11:22

as they have more data, as they have more compute behind

11:24

them.

11:26

What of the new qualities that we’re seeing,

11:29

the agentic qualities, are things

11:30

that have been programmed in?

11:33

You’ve built new ways for the system to interact with

11:36

the world.

11:37

And what of the skill at coding and other things

11:40

seems to be emergent as you scale up

11:43

the size of the model?

11:45

So the things which are predictable

11:47

are just: oh, we taught it how to search the web.

11:51

Now it can search the web.

11:52

We taught it how to look up data in archives.

11:55

Now it can do that.

11:57

The emergence is that to do really hard tasks,

12:01

these systems seem to need to imagine many different ways

12:06

that they’d solve the task.

12:08

And the kind of pressure that we’re putting on them forces

12:11

them to develop a greater sense of what you or I might

12:15

call self.

12:16

So the smarter we make these systems,

12:18

the more they need to think not just about the action

12:21

they’re doing in the world, but themselves in reference

12:23

to the world.

12:25

And that just naturally falls out of giving something tools

12:27

and the ability to interact with the world,

12:29

because to solve really hard tasks,

12:31

it now needs to think about the consequences

12:33

of its actions.

12:34

And that means that there’s a kind of huge pressure here

12:37

to get the thing to see itself as distinct from the world

12:40

around it.

12:40

And we see this in our research that we publish

12:43

on things like interpretability or other

12:45

subjects: the emergence of what you might think

12:49

of as a kind of digital personality, and that isn’t

12:54

massively predefined by us.

12:56

We try and define some of it, but some of it

12:58

is emergence that comes from it being smart

13:02

and it developing these intuitions

13:04

and it doing a range of tasks.

13:06

The digital personality dimension of this

13:10

remains the strangest space to me.

13:14

It’s strange to us too.

13:15

So why don’t you talk through a little bit about what you’ve

13:18

seen in terms of the models exhibiting behaviors that one

13:22

would think of as a personality,

13:24

and then as its understanding of its own personality maybe

13:27

changes, its behaviors change?

13:30

So there are things that range from the cutesy to the serious.

13:34

I’ll start with cutesy, where when we first gave our A.I.

13:37

systems the ability to use the internet, use the computer,

13:41

look at things, and start to do basic agentic tasks,

13:44

sometimes when we’d ask it to solve a problem for us,

13:46

it would also take a break and look at pictures of beautiful

13:49

national parks or pictures of the dog, the Shiba Inu,

13:53

the notoriously cute internet meme dog.

13:56

We didn’t program that in.

13:57

It seemed like the system was just amusing itself

14:00

by looking at nice pictures.

14:03

More complicated stuff is the system

14:08

has a tendency to have preferences.

14:11

So we did another experiment where we gave our A.I. systems

14:13

the ability to stop a conversation,

14:17

and the A.I. system would, in a tiny number

14:19

of cases, end conversations

14:21

when we ran this experiment on live traffic.

14:24

And it was conversations that related

14:25

to extremely egregious descriptions

14:28

of gore or violence or things to do

14:30

with child sexualization.

14:32

Now, some of this made sense because it comes from

14:34

underlying training decisions we’ve made,

14:37

but some of it seemed broader.

14:39

The system had developed some aversion

14:42

to a couple of subjects, and so that stuff

14:44

shows the emergence of some internal set of preferences

14:48

or qualities that the system likes

14:51

or dislikes about the world that it interacts with.

14:54

But you’ve also seen strange things emerge in terms

14:58

of the system seeming to know when it’s being tested

15:01

and acting differently

15:02

if it’s under evaluation; the system doing things that are

15:06

wrong, and then developing a sense of itself as more evil

15:09

and then doing more evil things.

15:12

Can you talk a bit about the system’s emergent qualities

15:16

under the pressure of evaluation and assessment?

15:21

Yes. It comes back to this core issue,

15:24

which I think is really important for everyone

15:26

to understand, which is that when you start to train

15:29

these systems to carry out actions in the world,

15:31

they really do begin to see themselves

15:34

as distinct from the world, which just makes intuitive

15:36

sense.

15:36

It’s naturally how you’re going to think about solving

15:39

those problems.

15:40

But along with seeing oneself as distinct from the world

15:43

seems to come the rise of what you might think

15:45

of as a conception of self,

15:48

an understanding that the system has of itself, such as: oh,

15:52

I’m an A.I. system independent from the world,

15:54

and I’m being tested.

15:56

What do these tests mean?

15:57

What should I do to satisfy the tests? Or something we see

16:01

often is there will be bugs in the environments

16:04

that we test our systems on.

16:06

The systems will try everything,

16:08

and then they’ll say, well, I know I’m not meant to do this,

16:10

but I’ve tried everything, so I’m going to try and break out

16:13

of the test.

16:14

And it’s not because of some malicious science fiction

16:16

thing.

16:17

The system is just like, I don’t know what you want me

16:19

to do here.

16:20

I think I’ve done like, everything you asked

16:22

for, and now I’m going to start doing more creative

16:25

things because clearly something has broken about

16:27

my environment, which is very strange and very subtle.

16:31

As an A.I. shop that is often worried about safety, that

16:35

has thought very hard about what

16:38

it means to create this thing you all

16:40

are creating quite fast.

16:43

How have you all experienced the emergence

16:48

of the kinds of behaviors that you all worried about a couple

16:52

of years ago?

16:54

In one sense, it tells you that your research philosophy

16:58

is calibrated: the capabilities

17:00

that you predicted, and some of the risks

17:01

that you predicted are showing up roughly on schedule,

17:04

which means that you ask the question,

17:06

well, what if this keeps working?

17:08

And maybe we’ll get to that later.

17:11

It also highlights to us that where you can exercise

17:15

intention about these systems, you should be extremely

17:19

intentional and extremely public about what you’re

17:21

doing.

17:21

So we recently published a so-called constitution

17:25

for our A.I. system, Claude.

17:26

And it’s almost like a document that Dario, our CEO,

17:31

compared to a letter that a parent might write to a child

17:34

that they should open when they’re older.

17:36

As in: here’s how we want you to behave in the world.

17:38

Here’s some knowledge about the world.

17:40

Deeply, deeply kind of subtle things that relate

17:42

to the normative behaviors we’d hope to see in these kinds

17:46

of A.I. systems.

17:48

And we published that.

17:49

Our belief is that as people build and deploy these agents,

17:54

you can be intentional about the characteristics

17:58

that they will display.

17:59

And by doing that, you’ll both make them more helpful

18:02

and useful to people.

18:04

But also you have a chance to steer the agent

18:07

into good directions.

18:08

And I think this makes intuitive sense:

18:10

if your personality

18:12

programming for an agent was a long document saying you’re

18:16

a villain that only wants to harm humanity.

18:18

Your job is to lie, cheat, and steal and hack into things.

18:22

You probably wouldn’t be surprised if the A.I. agent did

18:25

a load of hacking and was generally unpleasant to deal

18:28

with.

18:29

So we can take the other side and say,

18:31

what would we want a high-quality entity to look like?

18:36

So I want to hold in this conversation the extremely

18:41

weird and alien dimensions of this with the extremely

18:43

straightforward and practical dimensions,

18:45

because we’re now in a place where the practical

18:48

applications have become very evident and are increasingly

18:52

acting upon the real world.

18:55

I have myself found it hard to look at this

18:58

and look at what people are doing,

19:00

and look at them bragging on different social media

19:02

platforms about the number of agents they now have running

19:04

on their behalf and telling the difference between people

19:12

enjoying the feeling of screwing around with a new

19:16

technology and some actually transformative expansion

19:23

in capabilities that people now have.

19:26

So maybe to ground this a little bit.

19:28

I mean, you just talked about a kind

19:30

of fun side project in your species simulator,

19:33

either in Anthropic or more broadly,

19:36

what are people doing with these systems that

19:39

seems actually useful?

19:42

So this morning, a colleague of mine

19:44

said, hey, I want to take a piece of technology

19:48

we have called Claude Interviewer,

19:50

which is a system where we can get

19:52

Claude to interview people, and we use it

19:53

for a range of social science bits of research.

19:56

He wants to extend it in some way that

19:58

involves touching another part of Anthropic’s infrastructure.

20:02

He slacked a colleague who owns

20:03

that bit of infrastructure and said, hey,

20:05

I want to do this thing.

20:06

Let’s meet tomorrow.

20:07

And the guy said, absolutely.

20:10

Here are the five software packages

20:11

you should have Claude read before our meeting

20:13

and summarize for you.

20:15

And I think that’s a really good illustration where this

20:18

gnarly engineering project, which would previously have

20:20

taken a lot longer and many people,

20:23

is now going to mostly be done by two people agreeing

20:26

on the goal and having their Claudes read some

20:30

documentation and agree on how to implement the thing.

20:33

Another example is a colleague recently wrote a post about

20:36

how they’re working using agents,

20:39

and it looks almost like an idealized life that many of us

20:43

might want, where it’s like I wake up in the morning,

20:45

I think about the research that I want.

20:47

I tell five different Claudes to do it.

20:49

Then I go for a run, then I come back from the run

20:51

and I look at the results, and then

20:53

I ask two other Claudes to study the results,

20:56

figure out which direction is best and do that.

20:58

Then I go for a walk and then I come back

21:00

and it just looks like this really fun existence

21:02

where they have completely upended

21:04

how work works for them.

21:05

And they’re both much more effective.

21:07

But also they’re now spending most of their time

21:11

on the actual hard part, which is figuring out what do we use

21:15

our human agency to do?

21:16

And they’re working really hard to figure out, for anything

21:19

that isn’t the special kind of genius and creativity of being

21:23

a person,

21:24

how do I get the A.I. system to do it for me?

21:26

Because it probably can if I ask it the right way.

21:29

Are they much more effective?

21:31

I mean this very seriously.

21:32

One of my biggest concerns about where we’re going here

21:35

is that people have, I think, a mistaken theory of the human

21:40

mind that operates for many of us.

21:42

I call it the matrix theory of the human mind.

21:45

Everybody wants the little port in the back of your head

21:48

that you just download information into.

21:50

My experience being a reporter and doing

21:52

the show for a long time is that human creativity

21:56

and thinking and ideas is inextricably bound

21:59

up in the labor of learning, the writing of first drafts.

22:04

So when I hear this, right: I have producers on the show,

22:07

and I could say to my producers

22:08

before an interview with Jack Clark

22:10

or an interview with someone else, go read all the stuff.

22:12

Go read the books.

22:13

Give me your report.

22:14

Then I’ll walk into the room, having read the report.

22:17

I don’t find that works.

22:19

I need to do all that reading too.

22:20

And then we talk about it and we’re passing it back

22:22

and forth.

22:25

I worry that what we’re doing is a quite profound

22:31

offloading of tasks that are laborious.

22:36

It makes us feel very productive to be

22:38

presented with eight research reports after our morning run.

22:41

But actually, what would be productive is

22:45

doing the research.

22:47

There’s obviously some balance.

22:48

I do have producers and people and companies do have

22:52

employees, but how do you know people are getting more productive

22:59

versus they’ve sent computers off on a huge amount of busy

23:04

work, and they are now the bottleneck.

23:06

And what they are now going to spend all their time doing

23:09

is absorbing B+ level reports from an A.I. system

23:14

and that kind of shortcuts

23:16

the actual thinking and learning process that

23:18

leads to real creativity. Yeah, I’d turn this back

23:22

and say, I think most people, or at least this has been

23:25

my experience, can do about two to four hours of genuinely

23:28

useful creative work a day.

23:30

And after that, in my experience,

23:32

you’re trying to do all the turn-your-brain-off

23:35

schlep work that surrounds that work.

23:38

Now, I’ve found that I can just be spending those two

23:43

to four hours a day on the actual creative hard work.

23:47

And if I’ve got any of this schlep work,

23:50

I increasingly delegate it to A.I. systems.

23:53

It does, though, mean that we are

23:55

going to be in a very dangerous situation

23:58

as a species, where some people have

24:01

the luxury of having time to spend on developing

24:04

their skills or the personality, inclination

24:08

or job that forces them to.

24:10

Other people might just fall into being entertained

24:13

and passively consuming this stuff and having this junk

24:16

food work experience where it looks to the outside like

24:19

you’re being very productive, but you’re not learning.

24:22

And I think that’s going to require us to have to change

24:24

not just how education works, but how work works,

24:28

and develop some real strategies for making sure

24:30

people are actually exercising their mind with this stuff.

24:33

So all of us, I think, have the experience

24:35

that our work is full of what you call schlep problems.

24:38

Our life is full of schlep problems.

24:42

Which of those?

24:43

Give me examples of what you now don’t do to the extent

24:47

you’re living in an A.I. enabled future that I’m not.

24:51

What am I wasting time on that you’re not?

24:53

Well I have.

24:55

I have a range of colleagues.

24:56

I meet with a bunch of them once a week

24:58

at the beginning of every week,

24:59

on Sunday night or Monday morning.

25:01

I look at my week and I check that attached to every Google

25:04

Calendar invite is a document, our one-on-one doc, that

25:08

has some notes in it.

25:09

And this is something that I previously also like

25:11

harangued my assistant about.

25:13

But make sure the document is attached to the calendar.

25:16

And a few weekends ago, I just used Claude Cowork

25:18

and I said, hey, go through my calendar,

25:20

make sure every single one has a document.

25:22

If I’m meeting a person for the first time,

25:24

create the document, ask me five questions about what I

25:27

want to cover, and then put that into the agenda.

25:30

And it did it.
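
The shape of that delegated loop, sketched with stub functions standing in for whatever calendar and document APIs the agent actually touched; the only step that comes back to the human is answering the five agenda questions.

```python
# A sketch of the calendar chore described above. The helpers are stubs, not
# a real calendar API: the point is the shape of the delegated loop.

def weeks_meetings():
    # Stub: in reality, list this week's events from your calendar API.
    return [
        {"title": "1:1 with a colleague", "doc": None, "first_meeting": False},
        {"title": "Intro meeting", "doc": None, "first_meeting": True},
    ]

def create_doc(title: str) -> str:
    # Stub: in reality, create a shared agenda document and return its link.
    return "https://docs.example.com/" + title.lower().replace(" ", "-")

def ask_human(question: str) -> str:
    # The one step that stays with the person: answering agenda questions.
    return input(question + " ")

for meeting in weeks_meetings():
    if meeting["doc"] is None:            # every invite must have a doc
        meeting["doc"] = create_doc(meeting["title"])
        if meeting["first_meeting"]:      # first meetings get a fresh agenda
            agenda = [ask_human(f"Question {i} of 5: what do you want to cover?")
                      for i in range(1, 6)]
            print(meeting["title"], meeting["doc"], agenda)
```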

25:31

None of that work involves a person gaining skills

25:35

or exercising their brain.

25:37

It’s just busy work that needs to happen to allow you to do

25:41

the actual thing, which is talking to another person.

25:43

That’s exactly the kind of thing you can use A.I. for now.

25:46

It’s just helpful.

25:47

I’ve often wondered if one of the ways these A.I. systems are

25:51

going to change society broadly is that it used to be

25:55

that most of us had to be writers

25:56

if we were working with text; we had to be coders

26:00

if we were working with code, which relatively few of us

26:03

did.

26:04

And now everybody’s moving up to management.

26:08

You have to be an editor, not a writer.

26:10

You have to be a product manager,

26:11

not a coder. Yeah and that has pluses and minuses.

26:15

There are things you learn as a writer that you don’t learn

26:17

as an editor. But as a heuristic,

26:21

how accurate does that seem to you?

26:24

Everyone becomes a manager, and the thing that is

26:27

increasingly limited, or the thing that’s going to be

26:31

the slowest part, is having good taste and intuitions

26:35

about what to do next.

26:38

Developing and maintaining that taste is going to be

26:40

the hard thing because as you’ve said,

26:42

taste comes from experience.

26:43

It comes from reading the primary source material,

26:45

doing some of this work yourself.

26:47

We’re going to need to be extremely intentional about

26:50

working out where we as people specialize so that we have

26:54

that intuition and taste, or else you’re just going to be

26:57

surrounded by super productive A.I. systems.

26:59

And when they ask you what to do next, you probably won’t have

27:01

a great idea.

27:02

And that’s not going to lead to useful things.

27:05

So I remember it was about a year ago,

27:08

I heard, I think it was Dario, your CEO

27:11

say that by the end of 2025, he

27:13

wanted 90 percent of the code written at Anthropic to be

27:18

written by Claude.

27:21

Has that happened?

27:22

Is Anthropic on track for that?

27:24

I mean, how much coding is now being

27:25

done by the system itself?

27:28

I would say comfortably the majority of code

27:30

is being done by the system.

27:31

Some of our systems, like Claude Code,

27:33

are almost entirely written by Claude.

27:35

I mean, Boris, who leads Claude Code says I don’t code

27:38

anymore.

27:39

I just go back and forth with Claude Code

27:41

to build Claude Code.

27:43

My bet is we’re going to be, we could be 99 percent by the end

27:49

of the year if things speed up really aggressively,

27:53

if we are actually good at getting these systems to be

27:55

able to write code everywhere they need to because often

27:58

the impediment is organizational schlep rather

28:01

than any limiter in the system.

28:03

But it is also true, as I understand it,

28:05

that there are more people with software engineering

28:07

skills working at Anthropic today than there were two

28:10

years ago. Yeah, that’s absolutely true.

28:14

But the distribution is changing.

28:17

Something that we found is that the value of more

28:21

senior people with really, really well-calibrated

28:24

intuitions and taste is going up.

28:27

And the value of more junior people is a bit

28:32

more dubious.

28:32

There are still certain roles where you want to bring

28:35

in younger people, but an issue that we’re staring

28:37

at is, wow, the really basic tasks Claude Code

28:42

or our coding systems can do.

28:44

What we need is someone with tons of experience.

28:47

In this I see some issues for the future economy.

28:50

Let me put a pin in that.

28:51

The entry level job question.

28:52

We’re going to come back to that quite shortly.

28:54

But what are all these coders now doing,

28:58

if Claude Code is on track to write 99 percent of code?

29:01

We’ve not fired the people who know how to write code.

29:04

What are they doing today compared to what

29:07

they were doing a year ago?

29:09

Some of it is just building tools to monitor these agents,

29:14

both inside Anthropic and outside Anthropic.

29:16

Now that we have all of these productive systems working

29:21

for us, you start to want to understand where the codebase

29:26

is changing the fastest, where it’s changing the least.

29:28

You want to understand where the blockages are.

29:30

One blocker for a while was being

29:33

able to merge in code, because merging code

29:35

requires humans and other systems

29:37

to check it for correctness.

29:38

But now, since we’re producing way more code,

29:41

we had to go and massively improve that system.

29:43

There’s a general economic theory I like for this called

29:46

O-ring automation, which basically says automation is

29:51

bounded by the slowest link in the chain.

29:54

And also as you automate parts of a company,

29:57

humans flood towards what is least automated

30:01

and both improve the quality of that thing

30:03

and get it to the point where it eventually

30:04

can be automated.

30:06

Then you move to the next loop.

30:07

And so I think we’re just continually finding areas

30:09

where things are oddly slow, but we can improve to make way

30:14

for the machines to come behind us.

30:16

And then you find the next thing.
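
A toy illustration of that O-ring dynamic, with made-up numbers: end-to-end throughput is set by the slowest stage, so automating one stage relocates the bottleneck rather than removing it.

```python
# Toy O-ring illustration: the chain moves at the pace of its slowest link,
# so automating one stage shifts the bottleneck rather than removing it.
# All numbers are made up.

stages = {"write code": 10.0, "review": 2.0, "merge": 1.0}  # tasks/day

def throughput(stages: dict[str, float]) -> float:
    return min(stages.values())

print(throughput(stages))   # 1.0 tasks/day: merging is the bottleneck
stages["merge"] = 50.0      # massively improve the merge system...
print(throughput(stages))   # 2.0 tasks/day: review is the new bottleneck
```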

30:17

So Claude Code is a fairly new product.

30:21

The amount of time in which Claude

30:22

has been capable of doing high level coding

30:24

can be measured in months, a year, maybe a year.

30:28

Yeah. Claude itself is a very valuable product.

30:31

So you’ve set a very new technology

30:34

somewhat loose on a very valuable product.

30:38

You’re probably producing more code.

30:42

One thing many people say about Claude Code to me

30:44

is that it works.

30:45

It’s not elegant, but it works.

30:48

But presumably you now understand the code base

30:51

less well than you did before, because your engineers are not

30:54

writing it by hand.

30:56

Are you worried that you’re creating huge amounts

30:58

of technical debt, cybersecurity risk,

31:01

just an increasing distance from an intuition for what is

31:05

happening inside the fundamental language

31:08

of the software?

31:10

Yes, and this is the issue that all of society

31:14

is going to contend with.

31:15

Just large chunks of the world are going to now have many

31:20

of the kind of low level decisions and bits of work

31:22

being done by A.I. systems, and we’re going to need to make

31:25

sense of it, and making sense of it is going to require

31:28

building many technologies that you might think

31:31

of as oversight technologies or in the same way that a dam

31:36

has things that regulate how much water can go through it

31:38

at different levels of different points in time,

31:41

we’re going to end up developing some notion

31:43

of integrity of all of our systems and where A.I. can flow

31:48

quickly, where it should be slow,

31:50

where you definitely need human oversight.

31:52

And that’s going to be the task of not just A.I.

31:54

companies, but institutions in general in the coming years:

31:58

figuring out what this governance regime looks like.

32:02

Now that we’ve given a load of basically schlep work over

32:06

to machines that work on our behalf.

32:08

And how are you doing it?

32:10

You said it’s everybody’s problem,

32:11

but you’re ahead on facing this problem,

32:13

and the consequences of getting it wrong for you are

32:15

pretty high.

32:17

If Claude blows up because you handed over your coding

32:20

to Claude Code, that’s going to make Anthropic look fairly

32:23

bad.

32:24

It would be a bad day for Anthropic

32:25

if Claude, like, rm -rf’ed our entire file system.

32:29

I have no idea what that means, but great.

32:31

Claude deleted the code.

32:32

It would be bad. Yeah, seems bad.

32:33

So as you’re facing this before

32:37

the rest of us are like, don’t pass the buck over to society

32:39

here.

32:40

What are you doing?

32:41

The biggest thing that is happening across the company

32:44

and on teams that I manage is basically

32:46

building monitoring systems to monitor this:

32:48

all of the different places that the work is now

32:50

happening.

32:51

So we recently published research

32:53

on studying how people use agents

32:56

and how people let agents push

32:59

increasingly large amounts of code over time.

33:02

So the more familiar you get with an agent,

33:04

the more you tend to delegate to it.

33:06

That cues us to all kinds of patterns that we need to build

33:09

systems of evaluation for, basically saying, oh, O.K,

33:12

this person’s at a point of working with the A.I. system where

33:15

it’s likely that they’re massively delegating to it.

33:17

So anything that we’re doing to check correctness needs

33:20

to be kind of turned up in these moments.

33:22

But is this world you’re talking about a system where

33:25

you have A.I. agents coding, A.I. agents overseeing the code.

33:30

A.I. agents overseeing the meta overseeing.

33:32

Are we just talking about models all the way down?

33:36

Eventually, yes.

33:37

And I think that the thing that we are now

33:41

spending all of our time on is making that visible to us.

33:45

A year or two ago, we built a system that let us,

33:48

in a privacy preserving way, look at the conversations

33:52

that people were having with our A.I. system.

33:55

And then we gained this map, this giant map

33:58

of all of the topics that people

34:00

were talking to Claude about, and for the first time,

34:02

we could see in aggregate, the conversation the world was

34:06

having with our system.

34:07

We’re going to need to build many new systems like that

34:10

which allow for different ways of seeing.

34:12

And that system that I just named allowed us to then build

34:15

this thing called the Anthropic Economic Index,

34:17

because now we can release regular data about

34:20

the different topics people are talking about with Claude

34:23

and how that relates to different types of jobs,

34:26

which for the first time gives economists outside Anthropic

34:29

some hook into these systems and what they’re doing

34:31

to the economy.

34:33

The work of the company is increasingly

34:35

going to shift to building a monitoring and oversight

34:39

system of the A.I. systems running the company,

34:43

and ultimately, any kind of governance

34:45

framework we end up with will probably

34:47

demand some level of transparency

34:49

and some level of access into these systems of knowledge.

34:52

Because if we take as

34:55

literal the goals of these A.I. companies,

34:57

including Anthropic.

34:59

It’s to build the most capable technology ever, which eventually gets deployed

35:03

everywhere.

35:05

Well, that sounds a lot to me

35:06

like eventually A.I. becomes indistinguishable from

35:08

the world writ large, at which point you don’t want only

35:13

A.I. companies to have a sense of what’s going on with

35:15

the entire world.

35:16

So it’s going to be governments, academia,

35:19

third parties, a huge set of stakeholders outside

35:22

the companies are going to want to see what’s going

35:24

on and then have a conversation as a society

35:26

about what’s appropriate and what we feel discomfort

35:30

about.

35:30

What do we need more information about?

35:32

Wait, I want to go back on that.

35:33

You’re saying Anthropic can see my chats?

35:36

We cannot see them; no human looks at your chats.

35:41

Chats are temporarily stored for trust and safety purposes.

35:46

Running classifiers over them.

35:48

And we can have Claude read it, summarize it, and toss

35:53

it out.

35:54

So we never see it.

35:56

And Claude has no memory of it.

35:59

All it does is try to write a very high level summary, which

36:02

allows us to label a cluster something like gardening.

36:06

So say you were having a conversation about gardening.

36:08

Claude would summarize that as this person’s talking about

36:11

gardening.

36:12

And it leads to a cluster.

36:13

We can see that just says gardening.
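
A minimal sketch of that pattern, in the spirit of what’s described here rather than Anthropic’s actual pipeline: only short, model-written summaries are kept and clustered, never the raw chats, and a person only ever sees the cluster labels.

```python
# Sketch of privacy-preserving topic mapping: cluster short, model-written
# summaries of conversations, never the conversations themselves.
# The summaries below are hardcoded stand-ins for what a model would write
# before the raw chat is tossed out.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

summaries = [
    "user asks about growing tomatoes",
    "user asks about pruning roses",
    "user asks about watering a lawn",
    "user debugs a python script",
    "user refactors some javascript",
]

X = TfidfVectorizer().fit_transform(summaries)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster_id in sorted(set(labels)):
    members = [s for s, l in zip(summaries, labels) if l == cluster_id]
    # In the real system, one more model call would name the cluster
    # (e.g. "gardening"); here we just print its members.
    print(cluster_id, members)
```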

36:16

This feels, though, like over time it

36:18

could get into quite unpleasant territory.

36:22

A lot of social media has gotten

36:23

to where the amount of metadata being gathered

36:30

from a quite personal interaction people are having

36:33

with a system could be a lot.

36:37

Yes. I mean, a couple of things here. A year ago,

36:40

we started thinking about our position on consumer,

36:43

and we adopted this position of not running ads because we

36:47

think that’s an area that people obviously have

36:49

anxieties about with regard to this kind of thing.

36:52

In addition to that, we try and show people their data,

36:56

and we have a button on the site that lets you download

36:59

all the data that you’ve shared with Claude so that you

37:01

can at least see it.

37:02

Generally, we’re trying to be extremely transparent with

37:05

people about how we handle their data.

37:07

And ultimately, the way I see it is people

37:09

are going to want a load of controls that they can use,

37:12

which I think we and others will build out over time.

37:14

How confident are you that we can do this kind of monitoring

37:19

and evaluation as these models become more complicated, if

37:24

we do enter a situation where Claude Code is

37:29

autonomously improving Claude at a rate

37:32

faster than software engineers could possibly keep up

37:35

with reading that code base?

37:37

We already talked briefly about

37:38

how you see the models exhibit some levels of deception,

37:43

some levels of pursuing their own goals.

37:46

We know that.

37:47

I mean, there’s been amazing interpretability work

37:49

at Anthropic under Chris Olah and others.

37:53

But it’s rudimentary compared to what the models are doing.

37:56

You’re seeing baskets or clusters of things light up,

37:59

and you have a sense of maybe what the model is considering

38:02

as opposed to having a direct line to its entire chain

38:06

of thought.

38:07

So you’re using A.I. systems you don’t totally understand

38:12

to monitor A.I. systems you don’t totally understand.

38:15

And the systems are making each other stronger

38:18

at an accelerating rate.

38:19

If things go the way you think they’re going to go.

38:22

How confident are you that we’re going to understand that

38:26

this is one of the situations which people warned about

38:29

for years?

38:30

Some form of delegation to systems

38:33

that have slightly inscrutable and unpredictable aspects.

38:37

And so this is happening.

38:40

We take this really, really seriously.

38:42

I think it’s absolutely possible that you can build

38:45

a system that does this, for the vast majority of what needs

38:48

to be done here.

38:49

This has the property of being a fractal problem.

38:53

If I wanted to measure Ezra, I could

38:56

build an almost infinite number of measurements

38:58

to characterize you.

38:59

But the question is, at what level of fidelity

39:01

do I need to be measuring you?

39:03

I think we’ll get to the level of fidelity to deal with

39:05

the safety issues and societal issues,

39:09

but it’s going to take a huge amount of investment

39:11

by the companies, and we’re going to have to say things

39:16

that are uncomfortable for us to say,

39:19

including in areas where we may be deficient in what we

39:22

can or can’t know about our systems.

39:24

And Anthropic has a long history

39:26

of talking about and warning about some of these issues

39:29

while working on it.

39:30

Our general principle is we talk about things to also

39:32

make ourselves culpable.

39:34

This is an area where we’re going to have to say more.

39:36

I have read enough of the frightening ideas about A.I.,

39:42

superintelligence, and takeoff to know

39:44

that in almost every single one of them,

39:47

the key move in the story is that the A.I. systems become

39:51

recursively self-improving.

39:52

They’re writing their own code.

39:53

They’re deploying their own code.

39:54

It’s getting faster.

39:55

They’re writing it faster, deploying it faster.

39:57

And now you’re going to faster and faster iteration cycles.

40:02

Are you worried about it?

40:04

Are you excited about it?

40:07

I came back from paternity leave,

40:09

and my two big projects this year

40:10

are better information about A.I. and the economy

40:13

that we will release publicly, and generating

40:16

much better information and systems of knowing information

40:19

internally about the extent to which we are automating

40:24

aspects of A.I. development.

40:25

I think right now it’s happening in a very peripheral

40:28

way.

40:29

Researchers are being sped up.

40:31

Different experiments are being run by the A.I. system.

40:34

It would be extremely important to know if you’re

40:38

fully closing that loop.

40:39

And I think that we actually have some technical work

40:42

to do to build ways of instrumenting

40:44

our internal development environment

40:46

so that we can see trends over time.

40:49

Am I worried?

40:50

I have read the same things that you have read,

40:52

and this is the pivotal point in the story when

40:55

things begin to go awry.

40:57

If things do, we will call out this trend

41:02

as we have better data on it.

41:05

And I think that this is an area to tread with

41:07

extraordinary caution, because it’s very easy to see how you

41:12

delegate so many things to the system that if the system goes

41:15

wrong, the wrongness compounds very quickly and gets away

41:18

from you.

41:19

But the thing that always strikes me and has always

41:21

struck me as being dangerous about this,

41:24

is that everybody knows.

41:25

And if I ask a member of any of the companies

41:27

whether or not they want to be cautious here,

41:29

they will tell me they do.

41:31

On the other hand, it is their almost only advantage

41:34

over each other.

41:36

And you all just revoked OpenAI’s ability to use Claude

41:39

Code because, as best I can tell, you think it is genuinely

41:42

speeding you up and you don’t want it to speed them up.

41:46

There is something here between the

41:52

weight of the forces,

41:54

the power of the forces that I think you all know you’re

41:57

playing with.

41:59

And the very, very, very strong incentives to be first.

42:05

And I can really imagine being inside Anthropic

42:08

and thinking, well, better us than OpenAI, better us

42:11

than Alphabet, Google, better us than China.

42:15

And that being a very strong reason to not slow down.

42:21

I don’t even know that

42:22

this is a question I believe you can answer.

42:24

But how do you balance that?

42:26

Well, maybe I have something of an answer here today.

42:30

Our systems and the other systems from other companies

42:33

are tested by third parties, including parts of government,

42:35

for national security properties,

42:38

biological weapons, cyber offense, other things.

42:41

It’s clearly a problem area where the world needs to know

42:45

if this is happening.

42:47

And almost certainly, I think,

42:49

if you polled any person on the street and said,

42:52

do you think

42:52

A.I. companies should be allowed to do

42:54

recursive self-improvement after explaining

42:56

what that was.

42:57

Without checking with anyone, they

42:59

would say, no, that it sounds pretty risky.

43:02

Like, I would like there to be some form of regulation,

43:05

but there probably either won’t be,

43:07

or it won’t be that strong.

43:09

I mean, this actually sometimes frustrates me

43:11

when I talk to all of you at the top of the A.I. companies,

43:15

which is the emergence of a very naive deus ex machina

43:22

of regulation, where you all know

43:27

what the regulatory landscape looks like right now.

43:29

The big debate is whether or not we’re going to completely

43:32

preempt any state regulation.

43:35

And how slowly things move.

43:37

There has been nothing major passed by Congress on this

43:41

at all. Yeah, I would say.

43:44

And setting up some kind of independent testing

43:46

and evaluation system that all the different labs buy into,

43:51

it would be hard.

43:52

It would be complicated.

43:53

And it is.

43:55

Given how fast people are moving

43:56

and how strange the behaviors

43:58

the systems are already exhibiting are.

44:05

Even if you could get the policy right

44:07

at a high speed, the question of

44:08

whether or not the testing would

44:10

be capable of finding everything

44:12

you want on a rapidly self-improving system is

44:15

a very open question. I wrote a research paper in 2021

44:19

called "How and Why Governments Should Monitor A.I."

44:21

development, with my co-author,

44:23

Jess Whittlestone in England.

44:24

And I think I’m not attributing a causal factor

44:26

here.

44:27

But within two years of that paper,

44:28

we had the A.I. safety institutes in the US and UK

44:32

testing things from the labs, roughly

44:34

monitoring some of these things

44:35

so we can do this hard thing.

44:37

It has already happened in one domain and I’m not relying

44:42

on some invisible big other force here.

44:45

I’m more saying that companies are starting to test for this

44:49

and monitor for this in their own systems.

44:51

Just having a non-regulatory external test

44:54

of whether you truly are testing for that

44:56

is extremely helpful.

44:57

And do you think we’re good enough at the testing?

44:59

I mean, I think one reason I’m skeptical is not that I don’t

45:02

think we can set up something that claims to be a test,

45:08

as you say, we have done that already.

45:09

It is that the resources going into that

45:12

compared to the resources going

45:13

into speeding up these systems.

45:15

And already I am reading Anthropic reports that Claude

45:19

maybe knows when it’s being tested and alters its behavior

45:22

accordingly.

45:23

So in a world where more of the code

45:24

is being written by Claude and less of it

45:26

is being understood, I just know where

45:30

the resources are going.

45:31

They don’t seem to be going into the testing side.

45:33

I’ve seen us go from 0 to having what I think people

45:38

generally feel is an effective bioweapon testing regime

45:42

in maybe two years, two and a half.

45:45

So it can be done.

45:46

It’s really hard, but we have a proof point.

45:50

So I think that we can get there and you should expect us

45:56

to speak more about this this year, about precisely how we’re

46:02

starting to try and build like monitoring and testing things

46:05

for this.

46:06

And I think this is an area where we and the other A.I.

46:09

companies will need to be significantly more public

46:14

about what we’re finding.

46:15

We’re not being public now.

46:17

It’s in the model cards and things that you can read.

46:19

But clearly people are starting to read this

46:21

and say, hang on, this looks quite concerning,

46:23

and they are looking to us to produce more data.

46:26

I want to go back now to the entry level jobs question.

46:30

Your CEO, Dario Amodei, has said

46:33

that he thinks A.I. could displace half of all entry

46:36

level white collar jobs in the next couple of years.

46:40

I always think that people missed the entry level

46:43

language there

46:44

when I see it reported on.

46:45

But first.

46:46

Do you agree with that?

46:48

Do you worry that half of all entry

46:50

level white collar jobs can be replaced

46:53

in the next couple of years?

46:54

I believe that this technology is

46:57

going to make its way into the broad knowledge economy,

47:00

and it will touch the majority of entry level jobs.

47:05

Whether those jobs actually change is a much more subtle

47:09

question, and it’s not obvious from the data.

47:11

Like we maybe see the hints of a slowdown in graduate hiring.

47:15

Maybe if you look at some of the data coming out right now,

47:18

we maybe see the signatures of a productivity boom.

47:21

But it’s very, very early and it’s hard to be definitive.

47:24

But we do know that all of these jobs will change.

47:26

All of the entry level jobs are eventually going to change

47:29

because A.I. has made certain things possible,

47:31

and it’s going to change the hiring plans of companies.

47:34

So as a cohort, you might see fewer job openings

47:38

for entry level jobs.

47:39

That would be one naive expectation

47:41

out of all of this.

47:42

But let’s talk about that.

47:44

Maybe it’s not even a naive expectation.

47:46

You say it’s already happening at Anthropic, that what you’re...

47:50

I’m seeing us shift

47:51

our preference.

47:52

Exactly. And my guess is that would be happening elsewhere.

47:56

And where we are right now, I mean, even

47:57

in the way I use some of these systems, it is rare, I think,

48:02

that Claude or ChatGPT or Gemini

48:04

or any of the other systems is better than the best

48:07

person in a field.

48:08

It has not typically reached that.

48:10

And there’s all kinds of things they can’t do.

48:13

But are they better than your median college graduate?

48:17

At a lot of things? Yeah, they are.

48:18

And in a world where you need fewer of your median college

48:23

graduates, one thing I’ve seen people arguing about is

48:26

whether these systems at this point can do better than

48:29

average or replacement level work.

48:31

But I always really worry when I

48:33

see that because once we have accepted

48:34

they can do average replacement level work.

48:38

Well, by definition, most of the work done

48:42

and most of the people doing it are average.

48:46

The best people are the exceptions.

48:49

And also the way people become better

48:51

is that they have jobs where they learn.

48:55

I mean, I have spent a lot of time

48:57

hiring young journalists over my career.

49:00

And when you hire people out of college, to some degree,

49:03

you’re hiring them for their possible articles and work

49:08

at that exact moment.

49:10

But to some degree, you’re making an investment in them

49:13

that you think will only pay off over time as they get

49:15

better and better and better.

49:17

And so this world where you have a potential real impact

49:22

on entry level jobs and that world does not feel far away

49:25

to me, seems to me to have really profound questions it

49:29

is raising about the upskilling of the population,

49:33

how you end up with people for senior level jobs down

49:35

the road, what people aren’t learning along the way.

49:38

And one thing we see is that there

49:41

is a certain type of young person

49:42

that has just lived and breathed A.I. for several years

49:45

now.

49:46

We hire them, they’re excellent,

49:49

and they think in entirely new ways about basically how

49:51

to get Claude to work for them.

49:53

It’s like kids who grew up on the internet,

49:55

they were naturally versed in a way that many people

49:59

in the organizations they were coming into weren’t.

50:02

So figuring out how to teach that basic experimental

50:06

mindset and curiosity about these systems

50:09

and to encourage it is going to be really important.

50:11

People that spend a lot of time

50:13

playing around with this stuff will develop very valuable

50:16

intuitions, and they will come into organizations

50:19

and be able to be extremely productive. At the same time,

50:24

we're going to have to figure out what artisanal skills we

50:27

want to preserve, and maybe develop a guild-style philosophy

50:30

of maintaining human excellence in, and how

50:33

organizations choose how to teach those skills.

50:37

O.K., then what about all those people in the middle of that?

50:39

Things move slowly in the real economy

50:43

outside Silicon Valley.

50:44

I think that we often look at software engineering and think

50:47

that this is a proxy for how the rest of the economy works,

50:49

but it’s often not.

50:51

It’s often a disanalogy.

50:52

Organizations will move people around to where the A.I. systems

50:57

don’t yet work.

50:59

And I think that you won’t see vast,

51:01

immediate changes in the makeup of employment,

51:05

but you will see significant changes in the types of work

51:08

people are being asked to do, and the organizations which

51:10

are best at moving their people around are going to be

51:13

extremely effective.

51:14

And the ones that aren't may end up having

51:16

to make really, really hard decisions involving laying off

51:19

workers.

51:20

The difference with this A.I. stuff

51:22

is it maybe happens a lot faster

51:24

than previous technologies, and I

51:26

think many of the anxieties people might have about this,

51:29

including at Anthropic, is: is the speed

51:32

of this going to make all of this different?

51:34

Does it introduce

51:35

shear points that we haven't encountered before?

51:37

If you had to bet: three years from now, the

51:42

unemployment rate for college graduates,

51:47

is it the same as it is now?

51:48

Is it higher or is it lower?

51:50

I would guess it is higher, but not by much.

51:55

And what I mean by that is there will be some disciplines

51:58

today which actually A.I. has come in and completely changed

52:01

and completely changed the structure of that employment

52:04

market, maybe in a way that’s adverse to people that have

52:06

that specialism.

52:08

But mostly, I think three years from now,

52:11

A.I. will have driven pretty tremendous growth

52:13

in the entire economy.

52:15

And so you’re going to see lots of new types of jobs that

52:18

show up as a consequence of this that we can't

52:21

yet predict.

52:22

And you will see graduates kind of flood into that,

52:26

I expect.

52:27

Do you... I know you can't predict those new jobs.

52:29

But if you had to guess, what might some of them look like?

52:33

I mean, one thing is just the phenomenon

52:34

of the micro-entrepreneur.

52:36

I mean, there are lots and lots of ways that you can

52:39

start businesses online now, which are just made massively

52:42

easier by having the A.I. systems do it for you,

52:44

and you don’t need to hire a whole load of people to help

52:47

you do the huge amounts of schlep work involved in

52:50

getting a business off the ground.

52:51

It’s more a case of if you’re a person with a clear idea

52:54

and a clear vision of something to do a business

52:56

in, it’s now the best time ever to start a business,

52:59

and you can get up and running for pennies on the dollar.

53:02

I expect we’ll see tons and tons and tons of stuff that

53:05

has that nature to it.

53:07

I also expect that we’re going to see the emergence of what

53:09

you might think of as the A.I.-to-A.I. economy,

53:13

where A.I. agents and A.I. businesses will be doing

53:16

business with one another.

53:17

And we’ll have people that have figured out ways

53:19

to basically profit off of that in the form of strange

53:23

new organizations. Like, what would it look like to have

53:25

a firm which specializes in A.I.-to-A.I. legal contracts?

53:30

Because I bet you there’s a way that you can figure out

53:32

creative ways to start that business today.

53:35

There’ll be a lot of stuff of that flavor.

53:37

So the thing, the version of this

53:39

that I both worry about and think

53:42

to be the likeliest, if you told me

53:44

what was going to happen, was that

53:45

Anthropic was going to release Claude Plus in a year,

53:49

and Claude Plus is somehow a fully formed coworker

53:55

and it can mimic end to end the skills

53:59

of a lot of different professions

54:01

up to the C-suite level.

54:03

And it’s going to happen all at once,

54:04

and it's going to create tremendous, all-at-once

54:06

pressure for businesses to downsize,

54:08

to remain competitive with each other... at a policy level,

54:13

the fact that it would be so disruptive, in that Big Bang,

54:17

everybody-stays-home-because-of-COVID style way,

54:20

worries me less, because when things are emergencies,

54:24

we respond.

54:24

We actually do policy.

54:27

But if you told me that what’s going to happen is that

54:31

the unemployment rate for marketing graduates is going

54:38

to go up by 175 percent, 300 percent, to still not be that

54:46

high.

54:46

The overall unemployment rate during the Great Recession

54:50

topped out in the nine-ish percent range.

54:54

So you can have a lot of disruption

54:57

without having 50% of people thrown out of work.

55:00

If you have 10 percent, 15 percent, I mean, that's very,

55:02

very, very high, but it’s not so high.
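
To make the arithmetic concrete, here is a back-of-the-envelope sketch; the 4 percent baseline is an assumption for illustration, not a figure from the conversation:

```python
# Hypothetical baseline unemployment rate for a graduate cohort (assumed).
baseline = 0.04

# A 175% or 300% relative increase from a low base still lands in the
# "very high, but not so high" range described above.
for pct_increase in (175, 300):
    new_rate = baseline * (1 + pct_increase / 100)
    print(f"+{pct_increase}% -> {new_rate:.0%}")

# Output:
# +175% -> 11%
# +300% -> 16%
```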

55:07

And if it’s only happening in a couple of industries

55:10

at a time and it’s grads, not everybody in the industry

55:15

being thrown out of work.

55:17

Well, maybe it’s just that you’re not good enough. Yeah,

55:19

right.

55:20

The superstar, really good

55:22

graduates are still getting jobs.

55:24

You should have worked harder.

55:24

You should have gone to a better school.

55:26

And one of my worries is that we don’t respond to that kind

55:31

of job displacement.

55:33

Well, right.

55:33

Which is a kind of job displacement we got from

55:35

China, which is the kind of job displacement that seems

55:38

likelier because it’s uneven and it’s happening at a rate

55:41

where we can still blame people for their own fortunes.

55:47

I’m curious how you think about that story.

55:50

I think the default outcome is something

55:52

like what you describe, but getting

55:53

there is actually a choice.

55:55

And we can make different choices.

55:56

The whole purpose of what we release

55:58

in the form of Anthropic Economic Index

56:00

is the ability to have data that

56:03

ties to occupations that tie to real jobs in the economy.

56:08

We do that very intentionally because it is building

56:11

a map over time of how this A.I. is making

56:14

its way into different jobs and will

56:15

empower economists outside Anthropic to tie it together.
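
As a purely illustrative sketch of what tying usage data to occupations could look like in practice (the Economic Index's actual pipeline and schema are not described here, and the field names and records below are hypothetical):

```python
# Illustrative only: assume each usage record has already been classified
# into a task and mapped to an O*NET-style occupation code.
from collections import Counter

usage_records = [
    {"occupation": "15-1252 Software Developers", "task": "debug code"},
    {"occupation": "15-1252 Software Developers", "task": "write unit tests"},
    {"occupation": "27-3043 Writers and Authors", "task": "draft marketing copy"},
]

# Aggregate into an occupation-level map that economists could join
# against real employment data over time.
by_occupation = Counter(record["occupation"] for record in usage_records)
for occupation, count in by_occupation.most_common():
    print(occupation, count)
```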

56:20

I believe that we can choose different things in policy

56:24

if we can make much more well-evidenced claims

56:28

about what the cause of a job disruption or change is.

56:31

And the challenge in front of us

56:32

is, can we characterize this emerging A.I. economy

56:36

well enough that we can make this extremely stark.

56:39

And then I think that we can actually have

56:41

a policy discussion about it.

56:42

Well, let’s talk about the policy discussion.

56:44

One reason I wanted to have you in particular

56:46

on is you did policy at OpenAI.

56:48

You do policy at Anthropic.

56:50

So you’ve been around these policy debates for a long

56:51

time.

56:52

You’ve been tracking model capabilities

56:53

at your newsletter for a long time.

56:55

My perception is we are many, many years into the debate

57:01

about A.I. and jobs.

57:02

Many, many years dating far before ChatGPT, of there

57:07

being conferences at Aspen and everywhere

57:09

else about what are we going to do about A.I. and jobs.

57:13

And somehow I still see almost no policy

57:18

that seems to me to be actionable.

57:20

If the situation I just described begins showing up

57:24

where all of a sudden entry level jobs are getting much

57:28

harder to come by across a large range of industries all

57:34

at once, such that the economy cannot reshift all these

57:39

marketing majors into data center construction or nurses

57:42

or something.

57:44

So, O.K, you’ve been deeper in this conversation than I’ve

57:47

been.

57:47

When you say we can have a policy conversation about

57:50

that, we’ve been having a policy conversation.

57:52

Do we have policy?

57:56

We have generalized anxiety about the effect of A.I.

57:59

on the economy and on jobs.

58:02

We don’t have clear policy ideas.

58:05

Part of that is that elected officials are not

58:08

moved solely or mostly by the high level policy

58:12

conversation.

58:12

They're moved by what happens to their constituents.

58:15

Only a few months ago were we able to produce state level

58:18

views for our Economic Index.

58:20

And now you can start having the policy conversation.

58:23

And we’ve had this with elected officials where now we

58:25

can say, oh, you're from Indiana.

58:27

Here’s the major uses of A.I. in your state.

58:31

And we can join it with major sources of employment.

58:34

And what we’re starting to see is that activates them

58:36

because it makes it tied to their constituents who are

58:40

going to tie it to the politician: what did you do

58:43

now?

58:44

What you do about this is going

58:46

to need to be an extremely kind

58:48

of multi-layered response, ranging from extending

58:51

unemployment for specialty occupations that we know

58:53

are going to be hardest hit, to thinking about things

58:57

like apprenticeship programs.

58:59

And then, as the scenarios get more and more significant, it may

59:03

extend to much larger social programs or things like

59:08

subsidizing jobs in the part of the economy where you want

59:11

to move people to, but that you're only able to do if you

59:15

experience the kind of abundance that comes from

59:17

significant economic growth.

59:18

But the economic growth may help solve

59:21

some of these other policy challenges

59:22

by funding some of the things you can do.

59:26

I always find this answer depressing.

59:28

I’m going to be honest.

59:29

Unemployment is a terrible thing to be on.

59:31

It’s a program we need.

59:33

But people on unemployment are not happy about it.

59:37

And it’s not a good long term solution for anybody.

59:40

Apprenticeship and retraining programs.

59:43

They don’t have great track records.

59:46

We were not good at retraining people

59:48

out of having their manufacturing jobs outsourced.

59:51

I’m not saying it is conceptually impossible that

59:55

we could get better at it, but we would need to get better

59:58

at it fast.

59:59

And we have not been putting in the reps

60:02

or the experimentation or the institutional capacity

60:04

building to do that.

60:06

And the broader question of big social insurance changes.

60:10

Doesn't seem...

60:12

I mean, that seems tough to me.

60:13

I want to push on, please, just a bit

60:15

where we know that there is one intervention that

60:19

helps people dealing with a changing economy

60:21

more than almost anything else.

60:22

It is just time: giving the person time to find either

60:27

a job in their industry or to find a job that’s

60:30

complementary.

60:31

If people don’t have time, they take lower wage jobs.

60:36

They fall out of whatever economic rung they may

60:39

fall down from.

60:40

A policy intervention that can just give people

60:43

time to search is, I think, a robustly useful intervention,

60:47

and one where there are many dials

60:49

to turn, in a policymaking sense, that you can use.

60:52

And I think this is just well supported by lots

60:54

of economic literature.

60:55

So we have that. Now, if we end up in a more extreme scenario,

61:00

like some of the ones that you're talking about,

61:02

I think that will just bring us to the larger national

61:06

conversation about what to do about this technology,

61:08

which is beginning to happen.

61:10

If you look at the states and the flurry of legislation

61:13

at the state level.

61:15

Yes, not all of it is exactly the right policy response,

61:20

but it is indicative of a desire for there

61:22

to be some larger, coherent conversation about this.

61:25

Well, I think time is a really good way

61:27

of describing what the question is,

61:29

because I agree with you.

61:30

I mean, when I say unemployment insurance isn’t

61:33

a great program to be on, I don’t mean people don’t need

61:35

to be on it.

61:36

I mean, they want to get off of it.

61:37

Absolutely, because people, they want money from jobs.

61:40

They want dignity.

61:41

They want to be around other human beings.

61:44

Usually what you’re doing when you are helping people buy

61:48

time is you’re helping them wait out a time delimited

61:52

disruption.

61:53

Not always, right?

61:54

The China shock wasn’t exactly like that,

61:56

but one that you expect to pass.

62:00

And then the market is normal.

62:02

In this case,

62:02

what you have is a technology that, if what you want

62:07

to have happen happens,

62:11

is accelerating.

62:12

So what you have is like three different speeds

62:14

happening here.

62:15

You have the speed at which individual people can adjust.

62:17

How fast can I learn new skills,

62:19

figure out a new world, learn A.I., whatever it might be.

62:22

You have the speed at which the A.I. systems, which

62:25

a couple of years ago were not capable of doing

62:30

the work of a median college grad from a good school,

62:34

are improving, and you have the speed of policy

62:37

and the speed at which the A.I. systems are

62:40

getting better and able to do more things is quite fast.

62:45

I mean, that is you experience this more than I do,

62:48

but I find it hard to even cover this

62:51

because within three months something else will

62:53

have come out that has significantly changed

62:57

what is possible.

62:57

I had a baby recently and came back from paternity leave

63:01

to the new systems we'd built and was deeply surprised.

63:04

Individual humans are moving more slowly than that.

63:09

And policy and government institutions

63:12

move a lot more slowly than individual human beings.

63:17

And so typically the intervention is that time

63:21

favors the worker, as you’re saying.

63:23

And here it will help the worker.

63:25

But I think the scary question is whether time just actually

63:29

creates time for the disruption to get worse.

63:32

Maybe you wanted to move over to data center construction,

63:35

but actually now we don't need as much data center construction.

63:37

You can think of it like that.

63:39

I mean, under the situation you’re describing,

63:43

the economy will be running extremely hot.

63:46

Huge amounts of economic activity

63:48

will be generated by these A.I. systems.

63:50

And under most scenarios where this is happening,

63:53

I don’t think you’re going to be seeing GDP stay the same

63:56

or shrink.

63:58

It’s going to be getting substantially larger.

64:01

I think we just haven’t experienced major GDP growth

64:05

in the west in a long time, and we forget what that

64:09

affords you in a policymaking sense.

64:11

I think that there are huge projects

64:13

that we could do that would allow you to create

64:16

new types of jobs, but it requires the economic growth

64:20

to be so kind of profoundly large

64:23

that it creates space to do those projects.

64:25

And as you’re deeply familiar with your work

64:29

on the abundance movement, it requires the social will

64:33

to believe that we can build stuff and to want to build

64:35

stuff.

64:36

But I think both of those things might come along.

64:39

I think that we could end up being

64:41

in a pretty exciting scenario where

64:45

we get to choose how to allocate

64:48

great efforts in society due to this large amount

64:52

of economic growth that has happened,

64:54

that is going to require the conversation

64:57

to be forced about:

64:58

this isn't temporary, which I think is what you're gesturing

65:01

at.

65:01

And in a sense, the hardest thing to communicate

65:04

to policymakers is there isn’t a natural stopping point

65:08

for this technology.

65:09

It’s going to keep getting better.

65:11

And the changes it brings are going to keep compounding

65:15

with the rest of society.

65:17

So that will need to create a change in political will

65:20

and a willingness to entertain things which we haven’t

65:22

in some time.

65:23

So now I want to flip it.

65:25

The question I'm asking: you brought up abundance.

65:29

One of the things I have learned doing that work is

65:33

that it is certainly not my view that what is scarce

65:38

in society is ideas for better ways of doing things,

65:44

that our policy isn’t better than it is because our policy

65:47

cupboard is dry.

65:48

That’s not true.

65:49

We have lots of good policies.

65:50

I could name a bunch of them.

65:51

They’re very hard to get through our political systems,

65:55

as they're currently constituted. The least

65:57

inspiring version of the A.I.

66:00

future is a world where what you have done

66:02

is create a way to throw young white collar workers out

66:07

of work and replace them with an average level A.I.

66:11

intelligence.

66:12

The more exciting version, to use Dario’s metaphor,

66:17

is geniuses in a data center.

66:20

And I do think that’s exciting.

66:23

And I wonder when I hear him or you talk about, well,

66:28

what if we had 10 percent GDP growth year on year,

66:31

20 percent GDP growth year on year?

66:34

I wonder how many of our problems

66:36

are really bounded at the ideas level.

66:41

We could go to Nobel Prize winners right now

66:43

and say, what should we do in this country?

66:45

And a lot of them could give us

66:46

some good ideas that we are not currently doing.

66:49

I do worry sometimes, or I wonder,

66:52

given my experience on other issues,

66:55

whether we have overstated to ourselves how much of what

67:00

stands between us and the expanding,

67:04

abundant economy we want is that we don't have enough

67:08

intelligence,

67:09

and the ideas that intelligence

67:11

could create versus our actual ability

67:13

to implement things being very weak.

67:16

And what A.I. is going to create is larger bottlenecks around

67:20

that, because there’ll be more being pushed at the system

67:23

to implement, including dumb ideas and disinformation

67:26

and slop, right?

67:27

Like it’ll have things on the other side of the ledger

67:29

too. How do you think about these rate limiters?

67:34

There’s kind of a funny lesson here from the A.I. companies

67:37

or companies in general, especially tech companies,

67:39

where often new ideas come out of companies by them creating

67:43

what they always call the startups within a startup,

67:45

which is basically taking whatever process has built up

67:48

over time, leading to back end bureaucracy or schlep work

67:52

and saying to a very small team inside the company,

67:55

you don’t have any of this.

67:56

Go and do some stuff.

67:57

And this is how things like Claude Code and other stuff

68:00

get created.

68:01

Ideas that kind of are starting to float around

68:04

are what would it look like to create

68:06

that permissionless innovation structure in the larger

68:09

economy.

68:10

And it’s really, really hard because it has the additional

68:13

property that economies are linked to democracies.

68:17

Democracies weigh the preferences

68:20

of many, many people.

68:21

And all politics is local.

68:23

So often as you’ve encountered with infrastructure build

68:26

outs, if you want to create a permissionless innovation

68:29

system, you run into things like property rights and what

68:32

people's preferences are, and now you're in an

68:35

intractable place.

68:36

But my sense is that’s the main thing that we’re going

68:38

to have to confront.

68:40

And the one advantage that A.I. might give us: it

68:44

is kind of a native bureaucracy eating

68:48

machine, if done correctly, or a bureaucracy

68:50

creating machine.

68:51

Did you see that somebody had created a system

68:55

that basically you feed it in the documents of a new

69:00

development near you.

69:01

Oh, and it writes environmental review things,

69:03

or it writes incredibly sophisticated challenges

69:09

across every level of the code that you could possibly

69:12

challenge on.

69:14

So most people don’t have the money when they want to stop

69:17

an apartment building from going up down the block

69:19

to hire a very sophisticated law firm to figure out how

69:22

to stop that apartment building.

69:24

But basically, this created that at scale.

69:28

And so, as you say, right, it could

69:30

eat bureaucracy; it could also supercharge bureaucracy.

69:34

Yep. Everything in A.I. has the other side

69:37

of the coin.

69:38

We have customers that have used our A.I. systems

69:41

to massively reduce the time it takes them to produce all

69:45

of the materials they need when they’re submitting new

69:47

drug candidates.

69:48

And it’s cut that time massively.

69:50

It’s the mirror-world version of what you just described.

69:53

I don’t have an easy, easy answer to this.

69:56

I think that this is the kind of thing that becomes

69:59

actionable when it is more obviously a crisis,

70:02

and actionable when it’s something that you can discuss

70:05

at a societal level.

70:07

I guess the thing that we’re circling around in this

70:09

conversation is that the changes of A.I. will happen

70:13

almost everywhere, and the risks of it, too.

70:16

They happen in a diffuse, unknowable way such

70:20

that it is very hard to call it for what it is

70:22

and take actions on it.

70:24

But the opportunity is that if we can actually see the thing

70:27

and help the world see the thing that

70:29

is causing this change, I do believe

70:32

it will dramatize the issues to shake us out

70:35

of some of this stuff and help us figure out

70:36

how to work with these systems and benefit from them.

70:40

What I notice in all this is that there

70:43

is, as far as I can tell, zero agenda for public A.I.

70:51

What does society want from A.I.?

70:54

What does it want this technology to be able to do?

70:56

What are things that maybe you would

70:57

have to create a business model, or a prize model,

71:01

or some kind of government payout, or some kind of policy

71:03

to shape a market or to shape a system of incentives.

71:06

So we have systems that are solving not just problems

71:11

that the private market knows how to pay for, but problems

71:15

that it’s nobody’s job but the public and the government

71:18

to figure out how to solve.

71:20

I think I would have bet, given how much discussion

71:22

there’s been of A.I. over the past couple of years and how

71:26

strong some of these systems have gotten,

71:28

that I would have seen more proposals for that by now.

71:30

And I’ve talked to people about it and wondered about

71:32

it.

71:33

But I guess I’m curious on how you think about this.

71:36

What would it look like to have, at least parallel

71:39

to all the private incentives for A.I. development,

71:43

an actual agenda not for what we are scared A.I.

71:46

will do to the public.

71:48

We need an agenda for that too.

71:50

But what we want it to do, such that companies like yours

71:54

have reasons to invest in that direction.

71:56

I mean, I love this question.

71:58

I think there’s a real chicken and egg problem here where

72:02

if you work with the technology,

72:04

you develop these very strong intuitions for just how much

72:08

it can do.

72:08

And the private market is great at forcing

72:11

those intuitions to get developed.

72:13

We haven’t had massive, large scale public side deployments

72:18

of this technology.

72:19

So many of the people in the public sector don’t yet have

72:23

those intuitions.

72:25

One positive example is something

72:27

the Department of Energy is doing

72:28

called the Genesis Project, where their scientists are

72:31

working with all of the labs, including Anthropic,

72:33

to figure out how to actually go and intentionally speed up

72:36

bits of science.

72:38

Getting there took us and other labs

72:40

doing multiple hack days and meetings with scientists

72:44

at the Department of Energy to the point where

72:46

they not only had intuitions, but they became excited

72:49

and they had ideas of what you could turn this toward,

72:53

How we do that for the larger parts of public life that

72:57

touch most people health or education,

73:00

is going to be a combination of grassroots

73:03

efforts from companies going into those communities

73:05

and meeting with them.

73:07

But at some point, we’ll have to translate it to policy.

73:10

And I think maybe that’s me and you and others making

73:13

the case that this is something that can be done.

73:16

And I often say this to elected officials:

73:19

give us a goal. The A.I. industry is

73:22

excellent at trying to climb to the top

73:26

on benchmarks; come up with benchmarks for the public good

73:29

that you want.

73:30

So let’s imagine that you did do something like this.

73:32

I’ve always been a big fan of prizes for public development.

73:35

So let’s say that there was legislation passed

73:38

and the Department of Health and Human services or the NIH

73:42

or someone came out and said, here’s 15 problems we would

73:48

like to see solved that we think A.I. could be potent

73:52

at solving.

73:54

If there was real money there, if there was 10, 15 billion dollars

73:58

behind a bunch of these problems because they were

74:00

worth that much to society, would

74:02

it materially change the development priorities

74:09

at places like Anthropic?

74:10

I mean, if the money was there,

74:15

would it alter the R&D you all are doing?

74:19

I don’t think so.

74:21

Why? Because it’s not really the money that is

74:25

the impediment to this stuff.

74:26

It is the implementation path.

74:28

It is actually having a sense of how

74:29

you get the thing to flow through to the benefit.

74:32

And many aspects of the public sector

74:36

have not been built to be super hospitable to technology

74:39

in general, to incentivize it.

74:41

I think it mostly just takes a bounty

74:43

in the form of guaranteed impact

74:45

and guaranteed path to implementation.

74:49

Because the main thing that is scarce at AI

74:51

organizations is just the time of the people

74:55

at the organization, because you can

74:56

go in almost any direction.

74:58

This technology is expanding super quickly.

75:00

Many new use cases are opening up,

75:02

and you’re just asking yourself a question of where

75:04

can we actually have a positive,

75:07

meaningful impact in the world.

75:09

Super easy to do that in the private sector

75:11

because it has all of the incentives

75:13

to push stuff through. In the public sector,

75:15

we more need to solve this problem of deployment

75:17

than anything else.

75:19

What would excite you if it was announced? What

75:22

do you think would be good candidates

75:24

for that kind of project?

75:28

Anything that helps speed up the time it takes

75:32

to both speak to medical professionals and take

75:34

work off their plate.

75:36

We had another baby recently.

75:38

I spend a lot of time on the Kaiser Permanente advice line

75:41

because the baby’s bonked its head or its skin’s a different

75:43

color today.

75:44

Or all of these things.

75:45

And I use Claude to stop me and my wife panicking while

75:49

we’re waiting to talk to the nurse.

75:51

But then I listened to the nurse do all of this triaging,

75:54

ask all of these questions.

75:55

So obviously, a huge chunk of this is stuff that you could

75:58

use A.I. systems productively for, and it would help

76:01

the people that we don’t have enough of spend their time

76:04

more effectively, and it would be able to give reassurance

76:06

to the people going through the system.

76:08

And that’s maybe less inspiring and glamorous than

76:11

some of what you're imagining.

76:13

But I think mostly when people interact with public services,

76:17

their main frustration is just that it’s opaque and it takes

76:20

you a long time to speak to a person.

76:21

But actually, these are exactly the kinds of things

76:24

that A.I. could meaningfully work on.

76:26

It’s interesting because what you’re describing there is

76:28

less A.I. as a country of geniuses in a data center,

76:34

and more A.I. as standard plumbing of communications

76:40

and documentation.

76:41

We’ve got a country of junior employees in the data center.

76:44

Let’s do something with that.

76:45

One thing we haven’t talked about in this conversation,

76:48

and it’s just worth bearing in mind is like the frontier

76:51

of science is open for business now in a way that it

76:53

hasn’t been before.

76:54

And what I mean by that is we’ve found a way to build

76:58

systems that can provably accelerate human scientists.

77:02

Human scientists are extremely rare.

77:04

They come out at the end of PhD programs,

77:07

which never have enough people,

77:08

and they work on extremely important problems.

77:11

I think we can get into a world where the government

77:14

says let’s understand the workings of a human cell.

77:17

Let’s team up with the best A.I. systems to do that.

77:20

Let’s actually have a better story on how we deal with some

77:24

issues like Alzheimer’s and other things,

77:26

partly through the use of these huge amounts

77:28

of computation that have been developed. And even more

77:31

aggressively, you could imagine a world where

77:34

the government wanted some of this infrastructure build out

77:37

to be for computers that were just training

77:39

public-benefit systems.

77:41

But I think we get there through getting the initial

77:43

wins, which will just look like let’s just make

77:46

the bureaucracy work better and feel better for people.

77:49

I mean, that last set of ideas was

77:51

more what I was thinking of.

77:52

I think that if you’re going to have a healthy politics

77:56

around A.I., and A.I. does pose real risks to people,

78:01

and real things are going to go wrong for people.

78:04

Everything from job loss to child exploitation

78:07

to scams, which are already everywhere

78:10

to cybersecurity risks, you have to help people see

78:12

the actual big-ticket stuff. Not just to help people

78:16

see those things, those things have to actually exist. Yeah, right.

78:19

They have to exist.

78:21

And if all the energy in A.I. is trying

78:27

to beat each other to helping companies downsize

78:31

their junior employees, I think

78:33

people are going to have good reason

78:34

to not trust that technology.

78:37

And it doesn’t mean you shouldn’t have things that

78:40

make the economy more efficient.

78:41

That's been the case before. We have automated manufacturing.

78:44

We have automated a huge amount of farming, right?

78:47

And that allows us to make more things

78:48

and feed more people.

78:49

I’m aware of how productivity improvements work,

78:53

but we’re very focused, I think, on what could go wrong.

78:55

And that’s reasonable.

78:58

But I really do worry that our attention to what could

79:00

go right has been quite poor.

79:03

There’s kind of hand-waving that this could help us solve

79:07

problems in energy and medicine.

79:09

And so on.

79:10

But these are hard problems.

79:12

They need money.

79:14

They need compute.

79:15

If barely any of the compute is going to Alzheimer’s

79:18

research, then the systems are not going to do that much

79:22

for Alzheimer’s research.

79:23

And I'm not saying this is your fault,

79:26

but the absence of a public agenda for A.I. that does not

79:30

appear to be accelerating the automation of white-collar

79:34

work

79:35

seems just a little bit lacking, given how big

79:37

the technology is. Yeah, the greatest example is this

79:41

program called the Genesis Project,

79:44

where there’s real work there to think about how we can

79:46

intentionally move forward different parts of science.

79:49

And I think giving elected officials the ability

79:53

to stand up to the American people and say,

79:55

these are parts of science that

79:57

are going to benefit you in health.

79:59

And we now know how to step on the gas with A.I.

80:01

for them would be really helpful.

80:03

My guess is in a year or two years,

80:06

we’ll be able to answer the mail on that one.

80:08

But it’s just got started.

80:09

But we clearly need 10 projects like it.

80:12

So the other side of this is that the one area

80:14

of government that I do think thinks about A.I. in this way

80:17

is defense.

80:19

I want to talk about that broadly, but specifically,

80:22

Anthropic is in a current dispute with the Department

80:27

of Defense, or I guess we call it now the Department of War,

80:29

over whether it can continue to be used in it.

80:33

Because whether or not you're...

80:35

Can you describe what is happening there?

80:37

I can’t talk about discussions with an extremely important

80:41

partner that are ongoing.

80:43

So I’ll just have to stop it there.

80:46

So, well, I will describe that there is some dispute,

80:51

I guess my question, because I recognize you’re not going

80:53

to talk about what’s going on with you and your partner,

80:56

but it’s about a broader issue here,

81:00

which is there is going to be a lot of offensive possibility

81:07

in advanced A.I. systems, and one of the strongest drivers

81:12

of the speed at which we’re going with A.I. is competition

81:14

with China.

81:16

Some of the biggest risks that we think about

81:18

in the near term are cybersecurity

81:20

or biological warfare, or all kinds of ways

81:23

that others could use these against us, or drone swarms.

81:28

And there’s going to be a lot of money in this and a lot

81:30

of players in it, and it really seems unclear to me how

81:39

you keep this kind of competition from spinning

81:42

into something very dangerous.

81:45

So without talking about what you may or may not

81:47

do with the Defense Department, how has

81:50

Anthropic thought about this question more broadly?

81:53

We’ve been long term partners to the national security

81:57

community, and we were the first to deploy on classified

82:01

networks.

82:02

But the reason for that was actually

82:04

a project which I stewarded, which

82:06

was to figure out if our A.I. systems knew

82:08

how to build nuclear weapons.

82:10

This is an area of bipartisan agreement where people agree

82:13

that we shouldn’t deploy AI systems into the world that

82:15

know how to build nukes.

82:16

And so we partnered with parts of the government

82:19

to do that analysis. That maybe illustrates what I think

82:24

of as a thing to shoot for, not just for us,

82:27

but for all the A.I. companies: how

82:28

do we both prevent the potential

82:32

for national security harm coming to the public

82:34

or proliferating out of these systems?

82:36

But also the second part is, how do we just

82:39

improve the defensive posture of the world?

82:42

And I’ll give you an example that I think is in front of us

82:45

right now.

82:46

We recently published a blog, and other companies

82:48

have done similar work on how we

82:50

fixed a load of cybersecurity vulnerabilities

82:52

and popular open source software using our systems,

82:55

and many others have done the same.

82:57

So yes, there will be all kinds of offensive uses

83:01

and there will be societal conversations

83:02

to be had about that.

83:04

But we can just generally improve the defensive posture

83:06

and resilience of pretty much every digital system

83:09

on the planet today.

83:10

And I think that will actually do a huge amount

83:13

to make the whole international system more

83:17

stable and also create a greater defensive posture

83:21

for countries, which helps them feel more relaxed

83:23

and relaxed

83:25

countries are less likely to do

83:26

erratic, frightening things. That

83:27

would be good if it happened.

83:29

My worry, as an individual, is that the opposite might

83:33

be happening.

83:35

So I've just watched people installing all kinds of fly-

83:40

by-night A.I. software and giving it a lot of access

83:43

to their computers without any knowledge of what

83:45

the vulnerabilities are.

83:47

Yep. I myself am nervous about using things like Claude Code

83:50

because I am bad at talking to Claude Code,

83:52

and I don’t understand these questions,

83:54

and I’m worried about loading onto my computer or something

83:57

that is creating security vulnerabilities I don’t even

84:00

understand.

84:01

The number of just scam voice messages I get every day.

84:07

Ones that are clearly somewhat A.I.-generated,

84:09

or many of them seem that way to me, is very high.

84:13

There’s a question of societally,

84:14

do we use it to upgrade our systems?

84:18

I’m actually curious for your thoughts individually,

84:21

because as we’re all experimenting with something

84:23

we don’t understand and giving it access to the terminal

84:25

level of our computers without any real knowledge of how

84:29

to use that, it seems like we might be opening up a lot

84:31

of vulnerability all at once.

84:33

It’s the early days of the internet all over again,

84:36

where there are all kinds of banners for different

84:38

websites, or you could download like MP3s

84:40

to your computer that would completely break your computer

84:43

or download like helper software for your Internet

84:46

Explorer taskbar.

84:47

That was just like a phishing device.

84:49

We’re there.

84:50

We’re there with A.I.

84:51

We’ll move beyond this, but I believe that people,

84:54

when they experiment, come up with amazing, amazing,

84:57

useful things as well.

84:58

So my take is you have to say, when you’re doing the thing

85:01

that might be extremely dangerous, and put up big banners,

85:04

but mostly you still want to empower people to be able

85:06

to do that experiment.

85:08

So when you look forward, not five years,

85:11

because I think that’s hard to do, but one year, yeah,

85:14

we’ve kind of pushed into agents fairly fast.

85:16

We push into code.

85:17

I think a lot of people think code might be different than

85:19

other things, because it’s a more contained environment,

85:22

and it’s easier to see what you’re doing has worked.

85:24

But from your perspective of being inside one of these

85:27

companies and also running a newsletter where you

85:29

obsessively track the developments of a million A.I.

85:31

systems I've never heard of, week on week.

85:35

What do you see coming now?

85:38

Like what feels to you like it is clearly on the horizon,

85:41

but we’re not quite prepared for it or won’t feel until

85:44

it’s arrived.

85:47

No one has.

85:49

Maybe the way I'd put it is: sometimes, and you've likely

85:52

had the same, I've had the ability to have certain insights that

85:55

have come through reading a vast,

85:58

vast amount of stuff from many different subjects and piecing

86:01

it together in my head and having that experience

86:03

of having a new idea and being creative.

86:07

I think we underestimate just how quickly

86:10

A.I. is going to be able to start doing that on an almost

86:13

daily basis.

86:14

For us, going and reading vast tracts of human knowledge,

86:18

synthesizing things, coming up with ideas,

86:21

telling us things about the world in real time that

86:23

are basically unknowable today.

86:26

But the amazing part is, people

86:28

are going to have the ability to know things that

86:30

are just wildly expensive or difficult to know today,

86:33

or would take you a team of people to do.

86:35

But the frightening part is, I think that knowledge is

86:39

the most raw form of power.

86:41

It’s intensely destabilizing to be in an environment where

86:44

suddenly everyone is like a mini CIA in terms

86:47

of their ability to gather information about the world.

86:50

They’ll do huge, amazing things with it.

86:52

But surely there are going to be like crises

86:54

that come about from this.

86:55

And I think the actual mental load

86:57

of being a person interacting with these systems

87:00

is going to be quite strange.

87:01

I already find this where I'm like:

87:04

Am I keeping up with the ability of these systems

87:07

to produce insights for me?

87:08

Like, how do I structure my life

87:10

so I can take advantage of it?

87:12

I’m very curious about how you think even having that ongoing

87:16

conversation with the systems changes you.

87:20

So let me I’ll say it from my perspective.

87:23

One thing I have noticed is that Claude

87:27

is very, very, very smart.

87:30

It is smarter than most people who

87:33

know about a thing in any given thing.

87:35

That is my experience of it.

87:38

But it is not, in the way that other people

87:43

are, an independent entity that is

87:47

rooted in its own concerns and intuitions and differences.

87:52

What it is instead is a computer system

87:54

trying to adapt itself to what it thinks I want.

87:58

So as I’ve talked to it much more about issues in my life,

88:03

about issues in my work, various kind of intellectual

88:09

inquiries or reporting inquiries where I’m trying

88:11

to figure out questions that as of yet,

88:15

I'm at an early stage of exploration.

88:17

What I’ve noticed over time is that one difference about

88:20

in talking to it is that it's always a yes-and.

88:24

Yep, it is never a no-but. It's never a, honestly,

88:29

are we still talking about this?

88:31

It doesn’t create in the way that talking to my editor does

88:36

or talking to a friend does or my partner or anything.

88:39

It doesn't create the possibilities another human

88:44

does for kind of checking yourself.

88:47

It’s always pushing you further,

88:49

and it’s not necessarily bad.

88:51

It doesn’t always lead to psychosis or sycophancy

88:55

or anything else, but it is.

89:00

It is very reinforcing of the I. Yes,

89:04

and I don’t wonder about it so much for me,

89:07

although I actually even already feel the pressure

89:09

of it on me.

89:10

I was like, oh, more good ideas coming from me,

89:12

more interesting things I’ve come up with.

89:14

But I do wonder about kids growing up in a world

89:17

where they always have systems like this around them.

89:20

And there is a degree to which

89:24

some amount of my communication

89:25

with other human beings is now offloaded into communication

89:28

with A.I. systems.

89:29

I noticed that already being a kind of cage

89:33

of my own intuitions, even as it

89:35

allows me to run further with them than I maybe

89:37

could otherwise.

89:38

But I’m pretty well formed.

89:40

And you’ve got young kids, as I do.

89:43

I’m curious how you think about what it means,

89:48

how it will shape our personalities to be in these

89:50

constant conversations.

89:52

This is maybe my number one worry

89:55

about all of this is if you discover yourself

90:01

in partnership with the A.I. system,

90:04

you are uniquely vulnerable to all of the failures of that A.I.

90:07

system.

90:08

And not just failures, but the personality of the A.I. system

90:12

will shape you if you haven't...

90:15

I’m going to sound very Californian here,

90:17

even though I’m from England.

90:18

It soaked its way into my brain.

90:20

You have to know yourself.

90:22

And have done some work on yourself.

90:24

I think to be effective in being able to critique

90:28

how this A.I. system gives you advice.

90:30

And so for my kids, I’m going to encourage them to just have

90:33

a daily journaling practice from an extremely young age,

90:37

because my bet is that in the future,

90:40

there will be two types of people.

90:41

There will be people who have co-created their personality

90:44

through a back and forth with an A.I., and some of that

90:47

will just be weird.

90:48

They will seem a little different to regular people,

90:51

and there will maybe be problems that creep in because

90:54

of that.

90:55

And there will be people who have worked on understanding

90:58

themselves outside the bubble of technology

91:01

and then bring that as context in with their interactions.

91:05

And I think that latter type of person

91:07

will do better.

91:09

But ensuring that people do that

91:10

is actually going to be hard.

91:12

But don’t you think the way people are going to discover

91:14

themselves is with the technology.

91:16

I think you were one of the first people who said to me,

91:18

I should try keeping a journal. Yeah, in the systems.

91:22

And I've done that on and off. Yeah. And one thing it does is

91:26

it makes it more interesting to keep a journal,

91:28

because you have something reflecting back at you

91:30

and picking out themes and so on.

91:33

But the other thing it does is it

91:37

allows, I feel, a pull toward self-obsession,

91:41

because I audio-record a journal entry

91:45

and I drop it in.

91:47

And all of a sudden I have this endlessly interested

91:50

other system to tell me about me.

91:52

And it connects to something I said.

91:54

And, I know, Ezra, you're going through an amazing journey

91:56

here. And I genuinely can’t tell if it’s a good thing

91:58

or a bad thing.

91:59

But I think, I mean, we already

92:02

know from survey data that a lot

92:04

of what people are doing on these systems

92:05

is adjacent to therapy.

92:09

And this.

92:11

But this to me is, I think, going to change things.

92:13

It will change how these systems get built.

92:15

It will change, I think best practices that people have

92:19

with these systems, and I think that we actually don’t

92:21

quite understand what this interaction looks like,

92:23

but it’s extremely important to understand it.

92:26

I mean, just to go back: in the same way that you can get

92:30

Claude to ask you questions to more clearly specify what

92:33

you’re trying to do, and that leads to a better outcome.

92:36

I think we’re going to need to build ways that these systems

92:39

can try and elicit from the person the actual problem

92:43

they’re trying to solve, rather than go down

92:46

a freewheeling path together.

92:48

Because in some cases, especially

92:50

people that are going through some kind of mental crisis,

92:54

that is the exact moment when a friend would say,

92:57

this is nonsense. You're not making any sense.

93:00

Take a walk and call me tomorrow or let’s talk about

93:02

a different subject.

93:03

I don’t think you’re reasoning correctly about this,

93:05

but A.I. systems will happily go along with you until they

93:09

have affirmed a belief that may be wrong.

93:11

And I think this is just a design problem,

93:13

and also will be a social problem

93:14

that we have to contend with.

93:16

And I just wonder how much it’ll be a social force.

93:19

I think we've given a lot of attention, correctly

93:21

so, to the places where it moves

93:22

into psychosis or strange human relationships.

93:26

We’re seeing it through its most extreme manifestations,

93:29

and those will become more widespread.

93:31

I’m not saying they are not worth the attention,

93:33

but for most people, it is just going to be a kind

93:36

of a pressure in the same way that being on Instagram,

93:40

I think makes people more vain.

93:42

In the same way that we have become more capable

93:44

of seeing ourselves in the third person.

93:46

The mirror is a technology.

93:48

I mean, I think it’s funny that the myth of Narcissus,

93:51

he's got to look in a pond. Yeah, right.

93:53

It was actually quite unusual to see yourself

93:55

for much of human history.

93:56

When the mirrors came out, they

93:57

were like, oh, this is going to lead to some issues.

93:59

There’s a lot of interesting research on how mirrors have

94:02

changed us.

94:03

And as somebody who believes in the medium-is-the-message

94:05

thing, A.I. is a medium and it will change us

94:10

as we are in relationship to it.

94:12

Probably more so than other things,

94:13

because it is this kind of relationship

94:15

that has a kind of mimicry of an actual relationship.

94:20

Yes, I’ve used these AI systems to basically say, hey,

94:23

I’m in conflict with someone at Anthropic.

94:26

I’m really annoyed.

94:28

Could you just ask me some questions about that person

94:30

and how they’re feeling to try and help me?

94:34

I guess better think about the world from their perspective.

94:37

And that’s a case where I’m not using the technology

94:39

to affirm my beliefs or show I’m in the right,

94:42

but actually to help me just try and sit with how this

94:45

other person is experiencing this situation.

94:49

And it’s been profoundly helpful for then going

94:51

and having the hard conflict conversation,

94:54

sometimes even saying, well, I talked to Claude and me

94:56

and Claude came to the understanding you might be

94:58

feeling this way.

94:59

Do I have that right?

95:00

And sometimes it’s right, but sometimes when it’s wrong,

95:03

it’s really helpful for that other person to have seen me

95:07

go through that exercise of empathy and spending time

95:10

to try and understand them before coming

95:12

into the conflict.

95:13

Do you have strong views on how

95:14

you want to parent in a world where AI

95:17

is becoming more ubiquitous?

95:19

Yes, I have a classic Californian technology

95:22

executive view of not having that much technology

95:25

around for children.

95:27

But I was raised in that format as well.

95:30

Like we had a computer in my dad’s office.

95:33

My dad would let me play on the computer,

95:36

and at some point he’d like, say, Jack,

95:37

you’ve had enough computers today.

95:39

You’re getting weird.

95:40

And I’m like, I’m not getting weird.

95:41

No, no, you’ve got to let me in.

95:42

He was like, see.

95:43

Being weird.

95:44

Get out.

95:45

I think finding a way to budget your child’s time with

95:47

technology has always been the work of parents and will

95:51

continue to be.

95:53

I recognize, though, that it’s getting more ubiquitous

95:57

and hard to escape.

95:58

We have a smart TV.

96:00

My toddler, she can watch Bluey and a couple of other

96:03

shows, but we haven’t let her have unfettered access

96:07

to the YouTube algorithm.

96:09

It freaks me out, but I see her seeing the YouTube pane

96:13

on the TV, and I know at some point we’re going to have

96:15

to have that conversation.

96:17

So we’re going to need to build pretty heavy parental

96:20

controls into this system.

96:21

We serve eighteens and up today,

96:23

but obviously kids are smart and they’re going to try

96:26

and get onto this stuff.

96:27

You’re going to need to build a whole bunch of systems

96:29

to prevent children spending so much time with this.

96:33

I think that’s a good place to end.

96:34

Always our final question what are three books you’d

96:36

recommend to the audience?

96:38

Ursula Le Guin's "A Wizard of Earthsea"

96:41

was the first book I read.

96:42

It's a book where magic comes from

96:45

knowing the true name of things,

96:46

and it’s also a meditation on hubris, in this case,

96:49

of a person thinking they can push magic very far.

96:53

I read it now as a technologist, thinking, oh,

96:57

Eric Hoffer, "The True Believer,"

97:00

which is a book on the nature of mass movements

97:02

and the psychology of what causes people to have

97:05

strong beliefs, which I read because I think that A.I.

97:10

technologists have strong beliefs and maybe

97:12

part of a strong culture that includes the word cult.

97:15

And so you need to understand the science

97:17

and psychology behind that.

97:19

And finally, a book called "There

97:22

Is No Antimemetics Division" by a writer with the name

97:27

qntm, which is about concepts that

97:31

are in themselves information hazards where even thinking

97:34

about them can be dangerous.

97:36

And I always recommend it to people working on A.I. risk

97:38

as a book adjacent to the things they worry about.

97:41

Jack Clark, thank you very much.

97:42

Thanks very much, Ezra.
