An AI Expert Warning: 6 People Are (Quietly) Deciding Humanity’s Future! We Must Act Now!

Transcript

0:00

In October, over 850 experts, including

0:02

yourself and other leaders like Richard

0:04

Branson and Geoffrey Hinton, signed a

0:06

statement to ban AI superintelligence

0:08

as you guys raised concerns of potential

0:10

human extinction.

0:11

>> Because unless we figure out how do we

0:14

guarantee that the AI systems are safe,

0:17

we're toast.

0:18

>> And you've been so influential on the

0:19

subject of AI, you wrote the textbook

0:21

that many of the CEOs who are building

0:23

some of the AI companies now would have

0:24

studied on the subject of AI. Yeah.

0:26

>> So, do you have any regrets? Um,

0:31

>> Professor Stuart Russell has been named

0:33

one of Time magazine's most influential

0:35

voices in AI.

0:36

>> After spending over 50 years

0:38

researching, teaching, and finding ways

0:40

to design

0:41

>> AI in such a way that

0:42

>> humans maintain control,

0:44

>> you talk about this gorilla problem as a

0:46

way to understand AI in the context of

0:48

humans.

0:48

>> Yeah. So, a few million years ago, the

0:50

human line branched off from the gorilla

0:52

line in evolution, and now the gorillas

0:53

have no say in whether they continue to

0:55

exist because we are much smarter than

0:57

they are. So intelligence is actually

0:58

the single most important factor to

1:00

control planet Earth.

1:01

>> Yep.

1:01

>> But we're in the process of making

1:02

something more intelligent than us.

1:04

>> Exactly.

1:05

>> Why don't people stop then?

1:06

>> Well, one of the reasons is something

1:08

called the Midas touch. So King Midas is

1:10

this legendary king who asked the gods,

1:12

can everything I touch turn to gold? And

1:14

we think of the Midas touch as being a

1:15

good thing, but he goes to drink some

1:17

water, the water has turned to gold. And

1:19

he goes to comfort his daughter, his

1:20

daughter turns to gold. So he dies in

1:22

misery and starvation. So this applies

1:24

to our current situation in two ways.

1:26

One is that greed is driving these

1:28

companies to pursue technology with the

1:30

probabilities of extinction being worse

1:32

than playing Russian roulette. And

1:34

that's even according to the people

1:35

developing the technology without our

1:37

permission. And people are just fooling

1:39

themselves if they think it's naturally

1:41

going to be controllable.

1:43

So, you know, after 50 years, I could

1:45

retire, but instead I'm working 80 or

1:47

100 hours a week trying to move things

1:49

in the right direction. So, if you had a

1:51

button in front of you which would stop

1:53

all progress in artificial intelligence,

1:55

would you press it?

1:58

>> Not yet. I think there's still a decent

2:00

chance they guarantee safety. And I can

2:02

explain more of what that is.

2:07

>> I see messages all the time in the

2:08

comments section that some of you didn't

2:10

realize you didn't subscribe. So, if you

2:12

could do me a favor and double check if

2:13

you're a subscriber to this channel,

2:14

that would be tremendously appreciated.

2:16

It's the simple, it's the free thing

2:18

that anybody that watches this show

2:19

frequently can do to help us here to

2:21

keep everything going in this show in

2:23

the trajectory it's on. So, please do

2:25

double check if you've subscribed and uh

2:27

thank you so much because in a strange

2:28

way you are you're part of our history

2:30

and you're on this journey with us and I

2:32

appreciate you for that. So, yeah, thank

2:34

you.

2:41

Professor Stuart Russell, OBE. A lot of

2:45

people have been talking about AI for

2:46

the last couple of years. It appears

2:49

you've this really shocked me. It

2:50

appears you've been talking about AI for

2:52

most of your life.

2:53

>> Well, I started doing AI in high school

2:56

um back in England, but then I did my

2:59

PhD starting in '82 at Stanford. I

3:02

joined the faculty of Berkeley in '86.

3:06

So I'm in my 40th year as a professor at

3:08

Berkeley. The main thing that the AI

3:10

community is familiar with in my work uh

3:14

is a textbook that I wrote.

3:16

>> Is this the textbook that most students

3:20

who study AI are likely learning from?

3:23

>> Yeah.

3:24

>> So you wrote the textbook on artificial

3:26

intelligence 31

3:29

years ago. You actually start probably

3:32

started writing it because it's so

3:33

bloody big in the year that I was born.

3:35

So I was born in 92.

3:36

>> Uh yeah, took me about two years.

3:38

>> Me and your book are the same age, which

3:40

just is wonderful

3:43

way for me to understand just how long

3:44

you've been talking about this and how

3:47

long you've been writing about this. And

3:49

actually, it's interesting that many of

3:51

the CEOs who are building some of the AI

3:54

companies now probably learned from your

3:56

textbook. you had a conversation with

3:59

somebody who said that in order for

4:01

people to get the message that we're

4:03

going to be talking about today, there

4:05

would have to be a catastrophe for

4:07

people to wake up. Can you give me

4:10

context on that conversation and a gist

4:12

of who you had this conversation with?

4:14

>> Uh, so it was with one of the CEOs of uh

4:18

a leading AI company. He sees two

4:21

possibilities as do I which is um

4:25

either we have a small or let's say

4:28

small scale disaster of the same scale

4:31

as Chernobyl

4:33

>> the nuclear meltdown in Ukraine.

4:34

>> Yeah. So this uh nuclear plant blew up

4:37

in 1986

4:39

killed uh a fair number of people

4:42

directly and

4:44

maybe tens of thousands of people

4:45

indirectly through uh radiation. recent

4:49

cost estimates more than a trillion

4:51

dollars.

4:53

So that would wake people up. That would

4:58

get the governments to regulate. He's

5:00

talked to the governments and they won't

5:01

do it. So he looked at this Chernobyl

5:06

scale disaster as the best case scenario

5:09

because then the governments would

5:10

regulate and require AI systems to be

5:14

built safely. And is this CEO building an AI

5:18

company?

5:19

>> He runs one of the leading AI companies.

5:22

>> And even he thinks that the only way

5:24

that people will wake up is if there's a

5:26

Chernobyl level nuclear disaster.

5:28

>> Uh yeah, no, it wouldn't have to be a

5:29

nuclear disaster. It would be either an

5:32

AI system that's being misused

5:35

by someone, for example, to engineer a

5:37

pandemic or an AI system that does

5:40

something itself, such as crashing our

5:43

financial system or our communication

5:45

systems. The alternative is a much worse

5:47

disaster where we just lose control

5:50

altogether. You have had lots of

5:52

conversations with lots of people in the

5:54

world of AI, both people that are, you

5:56

know, have built the technology, have

5:58

studied and researched the technology or

6:00

the CEOs and founders that are currently

6:02

in the AI race. What are some of the the

6:05

interesting sentiments that the general

6:07

public wouldn't believe that you hear

6:10

privately about their perspectives?

6:14

Because I find that so fascinating. I've

6:15

had some private conversations with

6:18

people very close to these tech

6:19

companies and the shocking

6:21

sentiment that I was exposed to was that

6:24

they are aware of the risks often but

6:26

they don't feel like there's anything

6:27

that can be done so they're carrying on

6:29

which is feels like a bit of a paradox

6:31

to me like

6:31

>> yes it's it's

6:33

it must be a very difficult position to

6:36

be in in a sense right you're you're

6:38

doing something that you know has a good

6:41

chance of bringing an end to life on

6:44

including that of yourself and your own

6:47

family.

6:48

They feel

6:50

that they can't escape this race, right?

6:54

If they, you know, if a CEO of one of

6:56

those companies was to say, you know,

6:58

we're

6:59

we're not going to do this anymore, they

7:01

would just be replaced

7:04

because the investors are putting their

7:06

money up because they want to create AGI

7:10

and reap the benefits of it. So, it's a

7:13

strange situation where every at least

7:16

all the ones I've spoken to, I haven't

7:18

spoken to Sam Altman about this, but you

7:21

know, Sam Altman

7:23

even before

7:25

becoming CEO of OpenAI said that

7:29

creating superhuman intelligence is the

7:32

biggest risk to human existence that

7:35

there is. My worst fears are that we

7:38

cause significant we the field the

7:40

technology the industry cause

7:41

significant harm to the world.

7:43

>> You know Elon Musk is also on record

7:45

saying this. So uh Dario Amodei

7:48

estimates up to a 25% risk of

7:50

extinction.

7:52

>> Was there a particular moment when you

7:53

realized that

7:56

the CEOs are well aware of the

7:58

extinction level risks? I mean, they all

8:01

signed a statement in May of 23

8:05

uh called it's called the extinction

8:07

statement. It basically says AGI is an

8:10

extinction risk at the same level as

8:12

nuclear war and pandemics.

8:15

But I don't think they feel it in their

8:17

gut. You know, imagine that you were one

8:20

of the nuclear physicists. You know, I

8:24

guess you've seen Oppenheimer, right?

8:26

you're there, you're watching that first

8:27

nuclear explosion.

8:30

How how would that make you feel about

8:35

the potential impact of nuclear war on

8:37

the human race? Right? I I think you

8:40

would probably become a pacifist and say

8:43

this weapon is so terrible, we have got

8:45

to find a way to uh keep it under

8:49

control. We are not there yet

8:53

with the people making these decisions

8:55

and certainly not with the governments,

8:58

right? You know

9:00

what policy makers do is they, you know,

9:03

they listen to experts. They keep their

9:06

finger in the wind. You got some

9:09

experts, you know, dangling $50 billion

9:12

checks and saying, "Oh, you know, all

9:15

that doomer stuff, it's just fringe

9:17

nonsense. don't worry about it. Take my

9:19

$50 billion check. You know, on the

9:22

other side, you've got very

9:23

well-meaning, brilliant scientists like

9:25

like Geoff Hinton saying, actually, no,

9:28

this is the end of the human race. But

9:30

Geoff doesn't have a $50 billion check.

9:34

So the view is the only way to stop the

9:36

race is if governments intervene

9:40

and say okay we don't we don't want this

9:43

race to go ahead until we can be sure

9:47

that it's going ahead in absolute

9:50

safety.

9:53

>> Closing off on your career journey, you

9:55

got a, you received an OBE from Queen

9:57

Elizabeth.

9:58

>> Uh yes.

9:59

>> And what was the listed reason for that

10:00

for the award? uh contributions to

10:03

artificial intelligence research

10:05

>> and you've been listed as a Time

10:07

magazine most influential person in in

10:10

AI several years in a row including this

10:13

year in 2025.

10:15

>> Yeah.

10:16

>> now there's two terms here that are

10:18

central to the things we're going to

10:19

discuss. One of them is AI and the other

10:20

is AGI.

10:22

In my muggle interpretation of that,

10:24

artificial general intelligence is

10:27

when the system, the computer, whatever

10:29

it might be, the technology has

10:31

generalized intelligence, which means

10:33

that it could theoretically see,

10:35

understand

10:37

um the world. It knows everything. It

10:40

can understand everything in the the

10:42

world as well as or better than a human

10:44

being.

10:45

>> Yeah.

10:46

>> can do it.

10:46

>> And I think take action as well. I mean

10:48

some some people say oh you know AGI

10:51

doesn't have to have a body but a good

10:54

chunk of our intelligence actually is

10:56

about managing our body about perceiving

10:58

the real environment and acting on it

11:01

moving grasping and so on. So I think

11:04

that's part of intelligence and and AGI

11:07

systems should be able to operate robots

11:10

successfully.

11:12

But there's often a misunderstanding,

11:13

right, that people say, well, if it

11:14

doesn't have a robot body, then it can't

11:17

actually do anything. But then if you

11:19

remember,

11:20

most of us don't do things with our

11:23

bodies.

11:25

Some people do,

11:28

brick layers, painters, gardeners,

11:30

chefs, um, but people who do podcasts,

11:35

you're doing it with your mind, right?

11:37

you're doing it with your ability to to

11:40

produce language. Uh, you know, Adolf

11:43

Hitler didn't do it with his body.

11:46

He did it by producing language.

11:49

>> Hope you're not comparing us.

11:52

But

11:54

but uh you know so even an AGI that has

11:58

no body uh it actually has more access

12:01

to the human race than Adolf Hitler ever

12:04

did because it can send emails and texts

12:08

to

12:10

what, three-quarters of the world's

12:11

population directly. It can it also

12:15

speaks all of their languages

12:17

and it can devote 24 hours a day to each

12:21

individual person on earth to convince

12:24

them of to do whatever it wants them to

12:26

do.

12:27

>> And our whole society runs now on the

12:28

internet. I mean if there's an issue

12:30

with the internet, everything breaks

12:31

down in society. Airplanes become

12:33

grounded, and we'll have electricity

12:35

running off internet systems.

12:38

So I mean my entire life it seems to run

12:40

off the internet now.

12:42

>> Yeah. water supplies. So, so this is one

12:45

of the roots by which AI systems could

12:48

bring about a medium-sized catastrophe

12:52

is by basically shutting down our life

12:55

support systems.

12:58

>> Do you believe that at some point in the

13:01

coming decades we'll arrive at a point

13:04

of AGI where these systems are generally

13:07

intelligent? Uh yes, I think it's

13:10

virtually certain

13:12

unless something else intervenes like a

13:15

nuclear war or or we may refrain from

13:19

doing it. But I think it will be

13:21

extraordinarily difficult uh for us to

13:24

refrain.

13:25

>> When I look down the list of predictions

13:27

from the top 10 AI CEOs on when AGI will

13:30

arrive, you've got Sam Altman who's the

13:33

founder of OpenAI/ChatGPT

13:35

um says before 2030. Demis at DeepMind

13:39

says 2030 to 2035.

13:43

Jensen from Nvidia says around five

13:46

years. Dario at Anthropic says 2026 to

13:50

2027. Powerful AI close to AGI. Elon

13:53

says in the 2020s. Um and go down the

13:56

list of all of them and they're all

13:58

saying relatively within 5 years.

14:00

>> I actually think it'll take longer. I

14:03

don't think you can make a prediction

14:06

based on engineering

14:09

um in the sense that yes, we could make

14:14

machines 10 times bigger and 10 times

14:16

faster,

14:17

but that's probably not the reason why

14:20

we don't have AGI, right? In fact, I

14:24

think we have far more computing power

14:27

than we need for AGI. maybe a thousand

14:31

times more than we need. The reason we

14:34

don't have AGI is because we don't

14:35

understand how to make it properly. Um

14:39

what we've seized upon

14:42

is one particular technology called the

14:46

language model. And we observed that as

14:49

you make language models bigger, they

14:52

produce text language that's more

14:55

coherent and sounds more intelligent.

14:58

And so mostly what's been happening in

15:01

the last few years is just okay let's

15:03

keep doing that because one thing

15:06

companies are very good at unlike

15:08

universities is spending money. They

15:11

have spent gargantuan amounts of money

15:15

and they're going to spend even more

15:17

gargantuan amounts of money. I mean you

15:20

know we mentioned nuclear weapons. So

15:22

the Manhattan project

15:24

uh in World War II to develop nuclear

15:27

weapons, its budget in 2025 dollars

15:32

was about 20-odd billion dollars. The

15:37

budget for AGI is going to be a trillion

15:41

dollars next year. So 50 times bigger

15:44

than the Manhattan project. Humans have

15:46

a remarkable history of figuring things

15:49

out when they galvanize towards a shared

15:51

objective.

15:53

You know, thinking about the moon

15:54

landings or whatever it else it might be

15:57

through history. And the thing that

15:59

makes this feel all quite inevitable to

16:01

me is just the sheer volume of money

16:03

being invested into it. I've never seen

16:05

anything like it in my life.

16:06

>> Well, there's never been anything like

16:07

this in history. Is this the biggest

16:09

technology project in human history by

16:12

orders of magnitude? And there doesn't

16:14

seem to be anybody

16:16

that is pausing to ask the questions

16:20

about safety. It doesn't it doesn't even

16:22

appear that there's room for that in

16:23

such a race. I think that's right. To

16:27

varying extents, each of these companies

16:29

has a division that focuses on safety.

16:33

Does that division have any sway? Can

16:35

they tell the other divisions, no, you

16:37

can't release that system? Not really.

16:41

Um

16:42

I think some of the companies do take it

16:44

more seriously. Anthropic

16:47

uh does. I think Google DeepMind even

16:50

there I think the commercial imperative

16:54

to be at the forefront is absolutely

16:57

vital. If a company is perceived as

17:03

you know falling behind and not likely

17:07

to be competitive, not likely to be the

17:09

one to reach AGI first, then people will

17:13

move their money elsewhere very quickly.

17:16

>> And we saw some quite high-profile

17:17

departures from company like companies

17:19

like OpenAI. Um, I know a chap called

17:22

Jan Leike left, who was working on AI

17:27

safety at OpenAI and he said that the

17:30

reason for his leaving was that safety

17:32

culture and processes have

17:34

taken a backseat to shiny products at

17:36

OpenAI and he gradually lost trust in

17:38

leadership but also Ilya Sutskever

17:42

>> Ilya Sutskever, yeah, so he was the

17:45

>> co-founder co-founder and chief

17:46

scientist for a while and then

17:48

>> yeah so he and Jan Leike are the main

17:51

safety people. Um,

17:54

and so when they say OpenAI doesn't care

17:58

about safety,

18:00

that's pretty concerning.

18:02

>> I've heard you talk about this gorilla

18:04

problem.

18:06

What is the gorilla problem as a way to

18:08

understand AI in the context of humans?

18:11

>> So, so the gorilla problem is is the

18:14

problem that gorillas face with respect

18:17

to humans.

18:19

So you can imagine that you know a few

18:21

million years ago the the human line

18:23

branched off from the gorilla line in

18:26

evolution. Uh and now the gorillas are

18:28

looking at the human line and saying

18:30

yeah was that a good idea

18:33

and they have no um they have no say in

18:37

whether they continue to exist

18:39

>> because we are much smarter

18:41

than they are. If we chose to, we could

18:43

make them extinct in in a couple of

18:45

weeks and there's nothing they can do

18:48

about it.

18:50

So that's the gorilla problem, right?

18:51

Just the the problem a species faces

18:56

when there's another species that's much

18:58

more capable.

19:00

>> And so this says that intelligence is

19:02

actually the single most important

19:03

factor to control planet Earth. Yes.

19:06

Intelligence is the ability to bring

19:08

about

19:10

what you want in the world.

19:12

>> And we're in the process of making

19:13

something more intelligent than us.

19:15

>> Exactly.

19:16

>> Which suggests that maybe we become the

19:19

gorillas.

19:20

>> Exactly. Yeah.

19:21

>> Is that is there any fault in the

19:22

reasoning there? Because it seems to

19:24

make such perfect sense to me. But

19:28

if it Why doesn't Why don't people stop

19:30

then? cuz it it seems like a crazy thing

19:33

to want to

19:34

>> because they think that uh if they

19:37

create this technology, it will have

19:40

enormous economic value. They'll be able

19:42

to use it to replace all the human

19:45

workers in the world uh to develop new

19:50

uh products, drugs,

19:52

um forms of entertainment, any anything

19:55

that has economic value, you could use

19:57

AGI to to create it. And and maybe it's

20:01

just an irresistible thing in itself,

20:04

right? I think we as humans place so

20:09

much store on our intelligence. You

20:11

know, you know, how we

20:15

think about, you know, what is the

20:16

pinnacle of human achievement?

20:19

If we had AGI, we could go way higher

20:24

than that. So it it's very seductive for

20:27

people to want to create this technology

20:31

and I think people are just fooling

20:34

themselves if they think it's naturally

20:38

going to be controllable.

20:40

I mean the question is

20:43

how are you going to retain power

20:44

forever

20:46

over entities more powerful than

20:48

yourself?

20:50

>> Pull the plug out. People say that

20:52

sometimes in the comment section when we

20:54

talk about AI, they said, "Well, I'll

20:55

just pull a plug out."

20:56

>> Yeah, it's it's sort of funny. In fact,

20:58

you know, yeah, reading the comment

20:59

sections in newspapers, whenever there's

21:02

an AI article,

21:04

there'll be people who say, "Oh, you can

21:07

just pull the plug out, right?" As if a

21:08

super intelligent machine would never

21:10

have thought of that one. Don't forget

21:12

who's watched all those films where they

21:14

did try to pull the plug out. Another

21:16

thing they said, well, you know, as long

21:17

as it's not conscious,

21:20

then it doesn't matter. It won't ever do

21:22

anything.

21:25

Um, which is

21:29

completely off the point because, you

21:32

know, I I don't think the gorillas are

21:34

sitting there saying, "Oh, yeah, you

21:36

know, if only those humans hadn't been

21:38

conscious, everything would have been

21:40

fine,

21:41

>> right?" No, of course not. What would

21:43

make gorillas go extinct is the things

21:45

that humans do, right? How we behave,

21:48

our ability to act successfully

21:51

in the world. So when I play chess

21:54

against my iPhone and I lose, right, I

21:58

don't I don't think, oh, well, I'm

22:01

losing because it's conscious, right?

22:02

No, I'm just losing because it's better

22:04

than I am, in that little world, at

22:08

moving the bits around uh to get what

22:10

it wants. and and so consciousness has

22:14

nothing to do with it, right? Competence

22:16

is the thing we're concerned about. So I

22:19

think the only hope is can we

22:22

simultaneously

22:25

build machines that are more intelligent

22:27

than us but guarantee

22:31

that they will always act in our best

22:35

interest.

22:36

So throwing that question to you, can we

22:38

build machines that are more intelligent

22:40

than us that will also always act in our

22:42

best interests?

22:44

It sounds like a bit of a uh

22:46

contradiction to some degree because

22:49

it's kind of like me saying I've got a

22:51

French bulldog called Pablo that's uh 9

22:54

years old

22:55

>> and it's like saying that he could be

22:57

more intelligent than me yet I still

22:59

walk him and decide when he gets fed. I

23:02

think if he was more intelligent than me

23:03

he would be walking me. I'd be on the

23:05

leash.

23:06

>> That's the That's the trick, right? Can

23:08

we make AI systems whose only purpose is

23:12

to further human interests? And I think

23:15

the answer is yes.

23:18

And this is actually what I've been

23:19

working on. So I I think one part of my

23:22

career that I didn't mention is is sort

23:25

of having this epiphany uh while I was

23:28

on sabbatical in Paris. This was 2013 or

23:32

so. just realizing that further progress

23:37

in the capabilities of AI

23:40

uh you know if if we succeeded in

23:43

creating real superhuman intelligence

23:46

that it was potentially a catastrophe

23:49

and so I pretty much switched my focus

23:53

to work on how do we make it so that

23:55

it's guaranteed to be safe. Are you

23:57

somewhat troubled by

24:01

everything that's going on at the moment

24:02

with

24:04

with AI and how it's progressing?

24:06

Because you strike me as someone that's

24:08

somewhat troubled under the surface by

24:11

the way things are moving forward and

24:14

the speed in which they're moving

24:15

forward.

24:16

>> That's an understatement. I'm appalled

24:20

actually by the lack of attention to

24:24

safety. I mean, imagine if someone's

24:26

building a nuclear power station in your

24:29

neighborhood

24:32

and you go along to the chief engineer

24:33

and you say, "Okay, these nuclear thing,

24:35

I've heard that they can actually

24:38

explode, right? There was this nuclear

24:39

explosion that happened in Hiroshima, so

24:43

I'm a bit worried about this. You know,

24:45

what steps are you taking to make sure

24:47

that we don't have a nuclear explosion

24:49

in our backyard?"

24:52

And the chief engineer says, "Well, we

24:54

thought about it. We don't really have

24:56

an answer."

24:59

>> Yeah.

25:00

>> You would, what would you say?

25:03

I think you would you would use some

25:05

expletives.

25:08

>> Well,

25:09

>> and you'd call your MP and say, you

25:11

know, get these people out.

25:14

>> I mean, what are they doing?

25:17

You read out the list of you know

25:20

projected dates for AGI but notice also

25:23

that those people

25:25

I think I mentioned Dario says a 25%

25:28

chance of extinction. Elon Musk has a

25:31

30% chance of extinction. Sam Altman

25:34

says

25:36

basically that AGI is the biggest risk

25:38

to human existence.

25:40

So what are they doing? They are playing

25:42

Russian roulette with every human being

25:44

on Earth.

25:47

without our permission. They're coming

25:48

into our houses, putting a gun to the

25:51

head of our children,

25:53

pulling the trigger, and saying, "Well,

25:56

you know, possibly everyone will die.

25:58

Oops. But possibly we'll get incredibly

26:01

rich."

26:04

That's what they're doing.

26:07

Did they ask us? No. Why is the

26:10

government allowing them to do this?

26:12

because they dangle $50 billion checks

26:15

in front of the governments.

26:17

So I think troubled under the surface is

26:20

an understatement.

26:21

>> What would be an accurate statement?

26:24

>> Appalled

26:26

and I I am devoting my life to trying

26:31

to divert from this course of history

26:34

into a different one.

26:36

Do you have any regrets about things you

26:38

could have done in the past because

26:40

you've been so influential on the

26:42

subject of AI? You wrote the textbook

26:44

that many of these people would have

26:45

studied on the subject of AI more than

26:47

30 years ago. Do do you have when you're

26:49

alone at night and you think about

26:50

decisions you've made on this in this

26:52

field because of your scope of

26:53

influence? Is there anything you you

26:55

regret?

26:56

>> Well, I do wish I had understood

26:59

earlier uh what I understand now. we

27:02

could have developed

27:05

safe AI systems. I think there are

27:08

some weaknesses in the framework which I

27:09

can explain but I think that framework

27:12

could have evolved to develop actually

27:15

safe AI systems where we could prove

27:18

mathematically that the system is going

27:21

to act in our interests. The kind of AI

27:24

systems we're building now, we don't

27:26

understand how they work.

27:28

>> We don't understand how they work. It's

27:30

it's a strange thing to build something

27:33

where you don't understand how it works.

27:35

I mean, there's nothing sort of comparable

27:36

through human history. Usually with

27:37

machines, you can pull it apart and see

27:39

what cogs are doing what and how the

27:41

>> Well, actually, we we put the cogs

27:43

together, right? So, with with most

27:46

machines, we designed it to have a

27:48

certain behavior. So, we don't need to

27:50

pull it apart and see what the cogs are

27:51

because we put the cogs in there in the

27:53

first place, right? one by one we

27:55

figured out what what the pieces needed

27:57

to be how they work together to produce

27:59

the effect that we want. So the best

28:02

analogy I can come up with is you know

28:06

the the first cave person who left a

28:10

bowl of fruit in the sun and forgot

28:12

about it and then came back a few weeks

28:14

later and there was sort of this big

28:16

soupy thing and they drank it and got

28:18

completely shitfaced.

28:20

>> They got drunk. Okay.

28:21

>> And they got this effect. They had no

28:24

idea how it worked, but they were very

28:26

happy about it. And no doubt that person

28:29

made a lot of money from it.

28:31

>> Uh so yeah, it it is kind of bizarre,

28:34

but my mental picture of these things is

28:36

is like a chain link fence,

28:39

right? So you've got lots of these

28:41

connections

28:43

and each of those connections, its

28:46

connection strength can be adjusted

28:48

and then uh you know a signal comes in

28:52

one end of this chain link fence and

28:54

passes through all these connections and

28:56

comes out the other end and the signal

28:59

that comes out the other end is affected

29:00

by your adjusting of all the connection

29:03

strengths. So what you do is you you get

29:06

a whole lot of training data and you

29:08

adjust all those connection strengths so

29:10

that the signal that comes out the other

29:11

end of the network is the right answer

29:14

to the question. So if your training

29:16

data is lots of photographs of animals,

29:21

then all those pixels go in one end of

29:23

the network and out the other end, you

29:26

know, it activates the llama output or

29:30

the dog output or the cat output or the

29:33

ostrich output. And uh and so you just

29:35

keep adjusting all the connection

29:36

strengths in this network until the

29:38

outputs of the network are the ones you

29:40

want.
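
To make the "adjust the connection strengths until the outputs are the ones you want" idea concrete, here is a minimal Python sketch of the same procedure on a toy problem. Nothing in it comes from the conversation: the tiny network, the XOR data, the learning rate, and the step count are all invented for illustration.

```python
# Toy illustration of "adjusting connection strengths": a tiny two-layer
# network trained by gradient descent on XOR. The network size, data,
# learning rate, and step count are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # connection strengths, layer 1
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # connection strengths, layer 2
lr = 0.5                                            # size of each small adjustment

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # the signal goes in one end ...
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)        # ... and comes out the other end
    # compare with the right answers and nudge every connection strength
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # outputs should now be close to the targets 0, 1, 1, 0
```

The only real difference is scale: as described just below, a frontier model has on the order of a trillion such adjustable numbers rather than a few dozen.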

29:41

>> But we don't really know what's going on

29:42

across all of those different chains. So

29:44

what's going on inside that network?

29:46

Well, so now you have to imagine that

29:49

this network, this chain link fence is

29:52

is a thousand square miles in extent.

29:55

>> Okay,

29:55

>> so it's covering the whole of the San

29:58

Francisco Bay area or the whole of

30:01

London inside the M25, right? That's how

30:03

big it is.

30:04

>> And the lights are off. It's night time.

30:07

So you might have in that network about

30:09

a trillion

30:11

uh adjustable parameters and then you do

30:14

quintillions or sextillions of small

30:16

random adjustments to those parameters

30:20

uh until you get the behavior that you

30:23

want. I've heard Sam Altman say that in

30:25

the future he doesn't believe they'll

30:28

need much training data at all to make

30:31

these models progress themselves because

30:32

there comes a point where the models are

30:35

so smart that they can train themselves

30:37

and improve themselves

30:40

without us needing to pump in articles

30:43

and books and scour the internet.

30:45

>> Yeah, it should it should work that way.

30:47

So I think what he's referring to and

30:49

this is something that several companies

30:51

are now worried might start happening

30:56

is that the AI system becomes capable of

31:00

doing AI research

31:03

by itself.

31:05

And so uh you have a system with a

31:08

certain capability. I mean crudely we

31:10

could call it an IQ but it's it's not

31:13

really an IQ. But anyway, imagine that

31:16

it's got an IQ of 150 and uses that to

31:19

do AI research,

31:21

comes up with better algorithms or

31:23

better designs for hardware or better

31:25

ways to use the data,

31:27

updates itself. Now it has an IQ of 170,

31:31

and now it does more AI research, except

31:33

that now it's got an IQ of 170, so it's

31:36

even better at doing the AI research.

31:39

And so, you know, next iteration it's

31:41

250 and uh and so on. So this this is an

31:45

idea that one of Alan Turing's friends,

31:48

I. J. Good, uh wrote out in 1965, called the

31:52

intelligence explosion right that one of

31:54

the things an intelligence system could

31:56

do is to do AI research and therefore

32:00

make itself more intelligent and this

32:01

would uh this would very rapidly take

32:05

off and leave the humans far behind.
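
As a purely illustrative toy model of the compounding loop Good described (the numbers and the improvement rule below are invented, not a forecast and not anything from the conversation), each generation's capability sets how much it can improve the next:

```python
# Toy model of an "intelligence explosion": capability is reinvested in
# improving capability, so growth compounds. Illustrative numbers only.
capability = 150.0                     # the hypothetical "IQ 150" starting point
for generation in range(1, 6):
    improvement = 0.15 * capability    # a more capable researcher improves itself more
    capability += improvement
    print(f"generation {generation}: capability ~ {capability:.0f}")
# prints roughly 172, 198, 228, 262, 302 -- geometric, not linear, growth
```

Whether any real system would follow such a curve is exactly the open question; the sketch only shows that a constant fractional self-improvement is enough to "very rapidly take off".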

32:08

>> Is that what they call the fast takeoff?

32:10

>> That's called the fast takeoff. Sam

32:12

Alman said, "I think a fast takeoff is

32:15

more possible than I thought a couple of

32:17

years ago." Which I guess is that moment

32:18

where the AGI starts teaching itself.

32:20

>> And in his blog, "The Gentle

32:22

Singularity," he said, "We may already be

32:25

past the event horizon of takeoff."

32:29

>> And what does what does he mean by event

32:30

horizon? The event horizon is is a

32:33

phrase borrowed from astrophysics and it

32:36

refers to uh the black hole. And the

32:40

event horizon, think of it: if you've got some

32:42

very very massive object that's heavy

32:46

enough that it actually prevents light

32:50

from escaping. That's why it's called

32:51

the black hole. It's so heavy that light

32:53

can't escape. So if you're inside the

32:56

event horizon then then light can't

32:59

escape beyond that. So I think what he's

33:03

what he's meaning is if we're beyond the

33:05

event horizon it means that you know now

33:07

we're just trapped in the gravitational

33:10

attraction

33:11

of the black hole or in this case we're

33:15

we're trapped in the inevitable slide if

33:19

you want towards AGI.

33:21

When you when you think about the

33:23

economic value of AGI, which I've

33:25

estimated at uh 15 quadrillion dollars,

33:30

that acts as a giant magnet in the

33:33

future.

33:34

>> We're being pulled towards it.

33:35

>> We're being pulled towards it. And the

33:36

closer we get, the stronger the force,

33:41

you know, the closer we

33:42

get, the higher the probability

33:44

that we will actually get there. So,

33:47

people are more willing to invest. And

33:49

we also start to see spin-offs from that

33:51

investment

33:53

such as ChatGPT, right, which is, you

33:56

know, generates a certain amount of

33:57

revenue and so on. So, so it does act as

34:01

a magnet and the closer we get, the

34:03

harder it is to pull out of that field.

34:07

>> It's interesting when you think that

34:08

this could be the the end of the human

34:10

story. this idea that the end of the

34:12

human story was that we created our

34:15

successor like we we summoned our next

34:19

iteration of

34:21

life or intelligence ourselves like we

34:25

took ourselves out. It is quite like

34:28

just removing ourselves and the

34:29

catastrophe from it for a second. It is

34:31

it is an unbelievable story.

34:34

>> Yeah. And you know there are many

34:39

legends

34:40

the sort of be careful what you wish for

34:43

legend and in fact the King Midas legend

34:46

is is very relevant here.

34:49

>> What's that?

34:49

>> So King Midas is this legendary king who

34:54

lived in modern day Turkey but I think

34:56

is sort of like Greek mythology. He is

34:59

said to have asked the gods to grant him

35:02

a wish.

35:04

The wish being that everything I touch

35:06

should turn to gold.

35:09

So he's incredibly greedy. Uh you know

35:12

we call this the Midas touch. And we

35:15

think of the Midas touch as being

35:16

like you know that's a good thing,

35:18

right? Wouldn't that be cool? But what

35:20

happens? So he uh you know he goes to

35:23

drink some water and he finds that the

35:25

water has turned to gold. And he goes to

35:28

eat an apple and the apple turns to

35:29

gold. and he goes to you know comfort

35:32

his daughter and his daughter turns to

35:33

gold

35:35

and so he dies in misery and starvation.

35:38

So this applies to our current situation

35:42

in in two ways actually. So one is that

35:47

I think greed is driving us to pursue

35:51

a technology that will end up consuming

35:54

us and we will perhaps die in misery and

35:57

starvation instead. What it shows is

36:00

how difficult it is to correctly

36:04

articulate what you want the future to

36:07

be like. For a long time, the way we

36:11

built AI systems was we created these

36:13

algorithms where we could specify the

36:16

objective and then the machine would

36:18

figure out how to achieve the objective

36:20

and then achieve it. So, you know, we

36:23

specify what it means to win at chess or

36:25

to win at go and the algorithm figures

36:27

out how to do it uh and it does it

36:29

really well. So that was, you know,

36:31

standard AI up until recently. And it

36:34

suffers from this drawback that sure we

36:36

know how to specify the objective in

36:38

chess, but how do you specify the

36:40

objective in life, right? What do we

36:43

want the future to be like? Well, really

36:45

hard to say. And almost any attempt to

36:48

write it down precisely enough for the

36:50

machine to bring it about would be

36:53

wrong. And if you're giving a machine an

36:55

objective which isn't aligned with what

36:58

we truly want the future to be like,

37:00

right, you're actually setting up a

37:02

chess match and that match is one that

37:05

you're going to lose when the machine is

37:07

sufficiently intelligent. And so that

37:09

that's that's problem number one.

37:12

Problem number two is that the kind of

37:14

technology we're building now, we don't

37:16

even know what its objectives are.

37:19

So it's not that we're specifying the

37:21

objectives, but we're getting them

37:22

wrong.

37:23

We're growing these systems. They have

37:26

objectives,

37:28

but we don't even know what they are

37:29

because we didn't specify them. What

37:31

we're finding through experiment with

37:32

them is that

37:35

they seem to have an extremely strong

37:37

self-preservation objective.

37:39

>> What do you mean by that?

37:40

>> You can put them in hypothetical

37:41

situations. either they're going to get

37:43

switched off and replaced or they have

37:48

to allow someone, let's say, you know,

37:50

someone has been locked in a machine

37:52

room that's kept at 3 degrees centigrade, or

37:55

they're going to freeze to death.

37:58

They will choose to leave that guy

37:59

locked in the machine room

38:01

and die rather than be switched off

38:03

themselves.

38:05

>> Someone's done that test.

38:06

>> Yeah.

38:07

>> What was the test? They they asked they

38:10

asked the AI.

38:10

>> Yep. They put well they put them in

38:12

these hypothetical situations and they

38:14

allow the AI to decide what to do and it

38:16

decides to preserve its own existence,

38:19

let the guy die and then lie about it.

38:23

In the King Midas analogy story, one of

38:27

the things it highlights for me is

38:28

that there's always trade-offs in life

38:30

generally. And you know, especially when

38:32

there's great upside, there always

38:34

appears to be a pretty grave downside.

38:36

Like there's almost nothing in my life

38:37

where I go, it's all upside. Like even

38:40

like having a dog, it shits on my

38:41

carpet. My girlfriend, you know, I love

38:43

her, but you know, not always easy. Even

38:47

with like going to the gym, I have to

38:48

pick up these really, really heavy

38:49

weights at 10 p.m. at night sometimes

38:51

when I don't feel like it. There's

38:53

always to get the muscles or the

38:54

six-pack. There's always a trade-off.

38:56

And when you interview people for a

38:57

living like I do,

38:58

>> you know, you hear about so many

38:59

incredible things that can help you in

39:01

so many ways, but there is always a

39:03

trade-off. There's always a way to

39:04

overdo it. Mhm.

39:05

>> Melatonin will help you sleep, but it

39:07

will also you'll wake up groggy and if

39:10

you overdo it, your brain might stop

39:11

making melatonin. Like I can go through

39:12

the entire list and one of the things

39:13

I've always come to learn from doing

39:15

this podcast is whenever someone

39:17

promises me a huge upside for something,

39:19

it'll cure cancer. It'll be a utopia.

39:21

You'll never have to work. You'll have a

39:22

butler around your house.

39:24

>> I my my first instinct now is to say, at

39:26

what cost?

39:27

>> Yeah.

39:27

>> And when I think about the economic cost

39:29

here, if we start if we start there,

39:32

>> have you got kids?

39:33

>> I have four. Yeah.

39:34

>> Four kids.

39:35

What, how old is the youngest kid

39:37

that you... 19?

39:38

>> 19. Okay. So if, say, your kids

39:41

were 10 now

39:42

>> and they were coming to you and they're

39:43

saying, "Dad, what do you think I should

39:45

study

39:46

>> based on the way that you see the

39:48

future?

39:49

>> A future of AGI, say if all these CEOs

39:52

are right and they're predicting AGI

39:53

within 5 years, what should I study,

39:56

Dad?"

39:57

>> Well, okay. So let's look on the bright

40:00

side and say that the CEOs all decide to

40:03

pause their AGI development, figure out

40:06

how to make it safe and then resume uh

40:09

in whatever technology path is actually

40:11

going to be safe. What does that do to

40:13

human life

40:14

>> if they pause?

40:15

>> No. If if they succeed in creating AGI

40:19

and they solve the safety problem

40:21

>> and they solve the safety problem. Okay.

40:23

Yeah. Cuz if they don't solve the safety

40:24

problem, then you know, you should

40:26

probably be finding a bunker or

40:30

going to Patagonia or somewhere in New

40:31

Zealand.

40:32

>> Do you mean that? Do you think I should

40:33

be finding a bunker if they

40:34

>> No, because it's not actually going to

40:35

help. Uh, you know, it's not as if the

40:38

AI system couldn't find you or I mean,

40:40

it's interesting. So, we're going off on

40:42

a little bit of a digression here

40:44

>> for from your question, but I'll come

40:46

back to it.

40:47

>> So, people often ask, well, okay, so how

40:49

exactly do we go extinct? And of course,

40:52

if you ask the gorillas or the dodos,

40:54

you know, how exactly do you think

40:55

you're going to go extinct?

40:58

They have the faintest idea. Humans do

41:00

something and then we're all dead. So,

41:02

the only things we can imagine are the

41:04

things we know how to do that might

41:06

bring about our own extinction, like

41:09

creating some carefully engineered

41:11

pathogen that infects everybody and then

41:14

kills us or starting a nuclear war.

41:17

Presumably something that's much more

41:19

intelligent than us would have much

41:21

greater control over physics than we do.

41:24

And we already do amazing things, right?

41:27

I mean, it's amazing that I can take a

41:29

little rectangular thing out of my

41:31

pocket and talk to someone on the other

41:32

side of the world or even someone in

41:35

space. It's just astonishing and we take

41:39

it for granted, right? But imagine you

41:41

know super intelligent beings and their

41:42

ability to control physics you know

41:45

perhaps they will find a way to just

41:47

divert the sun's energy so it sort of goes

41:50

around the earth's orbit, so, you know,

41:52

literally the earth turns into a

41:54

snowball in a few days

41:56

>> maybe they'll just decide to leave

42:00

>> leave leave the earth maybe they'd look

42:01

at the earth and go this isn't this is

42:03

not interesting we know that over there

42:04

there's an even more interesting planet

42:06

we're going to go over there and they

42:07

just I don't know get on a rocket or

42:09

teleport themselves They might. Yeah.

42:11

So, it's it's difficult to anticipate

42:13

all the ways that we might go extinct at

42:16

the hands of

42:17

entities much more intelligent than

42:19

ourselves. Anyway, coming back to the

42:23

question of well, if everything goes

42:24

right, right, if we we create AGI, we

42:27

figure out how to make it safe, we we

42:30

achieve all these economic miracles,

42:32

then you face a problem. And this is not

42:34

a new problem, right? So, so John

42:36

Maynard Keynes, who was a famous economist

42:38

in the early part of the 20th century

42:40

wrote a wrote a paper in 1930.

42:43

So, this is in the depths of the

42:44

depression. It's called "Economic

42:46

Possibilities for our Grandchildren." He

42:49

predicts that at some point science will

42:52

will deliver sufficient wealth that no

42:55

one will have to work ever again. And

42:57

then man will be faced with his true

43:00

eternal problem.

43:02

How to live? I don't remember the exact

43:04

word but how to live wisely and well

43:07

when the you know the economic

43:09

incentives the economic constraints are

43:12

lifted we don't have an answer to that

43:14

question right so AI systems are doing

43:18

pretty much everything we currently call

43:20

work

43:21

anything you might aspire to like you

43:23

want to become a surgeon

43:25

it takes the robot seven seconds to

43:28

learn how to be a surgeon that's better

43:29

than any human being

43:30

>> Elon said last week that the humanoid

43:33

robots will be 10 times better than any

43:35

surgeon that's ever lived.

43:37

>> Quite possibly. Yeah. Well, and they'll

43:39

also have, you know, h they'll have

43:42

hands that are, you know, a millimeter

43:44

in size, so they can go inside and do

43:46

all kinds of things that humans can't

43:48

do. And I think we need to put serious

43:51

effort into this question. What is a

43:53

world where AI can do all forms of human

43:58

work that you would want your children

44:00

to live in?

44:02

What does that world look like? Tell me

44:04

the destination

44:06

so that we can develop a transition plan

44:08

to get there. And I've asked AI

44:11

researchers, economists, science fiction

44:14

writers, futurists, no one has been able

44:18

to describe that world. I'm not saying

44:20

it's not possible. I'm just saying I've

44:22

asked hundreds of people in multiple

44:24

workshops. It does not, as far as I

44:27

know, exist in science fiction. You

44:30

know, it's notoriously difficult to

44:32

write about a utopia. It's very hard to

44:35

have a plot, right? Nothing bad happens

44:37

in in utopia. So, it's difficult to make

44:39

a plot. So, usually you start out with a

44:42

utopia and then it all falls apart and

44:44

that's how that's how you get get a

44:46

plot. You know that there's one series

44:48

of novels people point to where humans

44:51

and super intelligent AI systems

44:53

coexist. It's called the Culture novels

44:56

by Iain Banks. Highly recommended for

45:00

those people who like science fiction

45:02

and and they absolutely the AI systems

45:06

are only concerned with furthering human

45:08

interests. They find humans a bit boring

45:10

and but nonetheless they they are there

45:12

to help. But the problem is you know in

45:15

that world there's still nothing to do

45:18

to find purpose. In fact, you know, the

45:21

the subgroup of humanity that has

45:23

purpose is the subgroup whose job it is

45:26

to expand the boundaries of our galactic

45:29

civilization. Some cases fighting wars

45:32

against alien species and and so on,

45:35

right? So that's the sort of cutting

45:36

edge and that's 0.01% of the population.

45:41

Everyone else is desperately trying to

45:43

get into that group so they have some

45:45

purpose in life. When I speak to very

45:48

successful billionaires privately off

45:50

camera, off microphone about this, they

45:52

say to me that they're investing really

45:53

heavily in entertainment things like

45:56

football clubs. Um because people are

45:59

going to have so much free time that

46:00

they're not going to know what to do

46:01

with it and they're going to need things

46:02

to spend it on. This is what I hear a

46:05

lot. I've heard this three or four

46:06

times. I've actually heard Sam Altman say

46:08

a version of this

46:09

>> um about the amount of free time we're

46:11

going to have. I've obviously also heard

46:12

recently Elon talking about the age of

46:14

abundance when he delivered his

46:16

quarterly earnings just a couple of

46:18

weeks ago and he said that there will be

46:20

at some point 10 billion humanoid

46:22

robots. His pay packet um targets him to

46:25

deliver 1 million of these

46:27

humanoid robots a year that are enabled

46:30

by AI by 2030.

46:33

So if he if he does that he gets I think

46:35

it's part of his package he gets a

46:36

trillion dollars

46:38

>> in in compensation.

46:40

>> Yeah. So the age of abundance for Elon.

46:43

It's not that it's absolutely impossible

46:47

to have a worthwhile world of that, you

46:50

know, with that premise, but I'm just

46:52

waiting for someone to describe it.

46:54

>> Well, maybe. So, let me try and describe

46:55

it. Uh, we wake up in the morning, we go

47:01

and watch some form of human centric

47:05

entertainment

47:06

or participate in some form of human

47:08

centric entertainment. Mhm.

47:10

>> We we go to retreats and with each other

47:14

and sit around and talk about stuff.

47:17

>> Mhm.

47:18

>> And

47:21

maybe people still listen to podcasts.

47:23

>> Okay.

47:24

>> I hope I hope so for your sake.

47:26

>> Yeah. Um it it feels a little bit like a

47:30

cruise ship

47:33

and you know and there are some cruises

47:35

where, you know, it's smarty-pants people

47:37

and they have you know they have

47:39

lectures in the evening about ancient

47:41

civilizations and whatnot and some are

47:43

more uh more popular entertainment and

47:46

this is in fact if you've seen the film

47:48

WALL-E, this is one picture of that future

47:53

in fact in WALL-E

47:55

the human race are all living on cruise

47:58

ships in space. They have no

48:00

constructive role in their society,

48:03

right? They're just there to consume

48:04

entertainment. There's no particular

48:06

purpose to education. Uh, you know, and

48:08

they're depicted actually as huge obese

48:12

babies. They're actually wearing onesies

48:15

to emphasize the fact that they have

48:18

become enfeebled. And they become

48:19

enfeebled because there's there's no

48:22

purpose in being able to do anything at

48:25

least in in this conception. You know,

48:27

WALL-E is not the future that we want.

48:31

>> Do you think much about humanoid robots

48:34

and how they're a protagonist in this

48:36

story of AI?

48:37

>> It's an interesting question, right? Why

48:39

why humanoid? And the one of the reasons

48:43

I think is because in all the science

48:44

fiction movies, they're humanoid. So

48:46

that's what robots are supposed to be,

48:48

right? because they were in science

48:49

fiction before they became a reality.

48:51

Right? So even Metropolis which is a

48:53

film from 1927, I think, the robots are

48:56

humanoid right basically people covered

48:59

in metal. You know from a practical

49:01

point of view as we have discovered

49:04

humanoid is a terrible design because

49:06

they fall over. Um and uh you know you

49:12

do want

49:14

multi-fingered

49:15

hands of some kind. It doesn't have to

49:18

be a hand, but you want to have, you

49:20

know, at least half a dozen appendages

49:22

that can grasp and manipulate things.

49:25

And you need something, you know, some

49:27

kind of locomotion. And wheels are

49:30

great, except they don't go upstairs and

49:33

over curbs and things like that. So,

49:35

that's probably why we're going to be

49:37

stuck with legs. But a four-legged,

49:39

two-armed robot would be much more

49:42

practical. I guess the argument I've

49:44

heard is because we've built a human

49:45

world. So everything the physical spaces

49:48

we navigate, whether it's factories or

49:51

our homes or the street or other sort of

49:54

public spaces are all designed for

49:58

exactly this physical form. So if we are

50:01

going to

50:01

>> to some extent, yeah, but I mean our

50:02

dogs manage perfectly well to navigate

50:06

around our houses and streets and so on.

50:08

So if you had a a centaur,

50:11

uh it could also navigate, but it can,

50:14

you know, it can carry much greater

50:16

loads because it's quadrupedal. It's much

50:19

more stable. If it needs to drive a car,

50:21

it can fold up two of its legs and and

50:23

so on so forth. So I think the arguments

50:25

for why it has to be exactly humanoid

50:27

are sort of post hoc justification. I

50:31

think there's much more, well, that's

50:32

what it's like in the movies and that's

50:34

spooky and cool, so we need to have them

50:37

be human. I I don't think it's a good

50:39

engineering argument.

50:40

>> I think there's also probably an

50:42

argument that we would be more accepting

50:44

of them

50:46

moving through our physical environments

50:48

if they represented our form a bit more.

50:52

Um, I also I was thinking of a bloody

50:54

baby gate. You know those like

50:55

kindergarten gates they get on stairs?

50:57

>> Yeah.

50:57

>> My dog can't open that. But a humanoid

51:00

robot could reach over the other side.

51:02

>> Yeah. And so could a centaur robot,

51:04

right? So in some sense, centaur robot

51:06

is

51:07

>> there's something ghastly about the look

51:08

of those though.

51:09

>> Is a humanoid. Well,

51:10

>> do you know what I mean? Like a

51:11

four-legged big monster sort of crawling

51:13

through my house when I have guests

51:14

over.

51:15

>> Your dog is a your dog is a four-legged

51:17

monster.

51:18

>> I know. Uh so I think actually I I would

51:22

argue the opposite that um

51:25

we want a distinct form because they are

51:28

distinct entities

51:31

and the more humanoid the worse it is in

51:36

terms of confusing our subconscious

51:39

psychological systems. So, I'm arguing

51:41

from the perspective of the people

51:43

making them. As in, if I was making the

51:45

decision whether it should be some

51:46

four-legged thing that I've that I'm

51:48

unfamiliar with that I'm less likely to

51:50

build a relationship with or allow to

51:54

take care of, I don't know, might might

51:57

look after my children. Obviously, I'm

51:58

listen, I'm not saying I would allow

51:59

this to look after my children,

52:01

>> but I'm saying from a if I'm building a

52:03

company,

52:03

>> the manufacturer would certainly

52:04

>> Yeah. want want to be

52:05

>> Yeah. So, I that's an interesting

52:07

question. I mean there's also what's

52:10

called the uncanny valley which is a a

52:13

phrase from computer graphics when they

52:16

started to make characters in computer

52:20

graphics they tried to make them look

52:22

more human right so if you if you for

52:24

example if you look at Toy Story

52:28

they're not very human-looking, right? If

52:30

you look at the Incredibles they're not

52:31

very human-looking, and so we think of

52:33

them as cartoon characters if you try to

52:35

make them more human they naturally

52:38

become repulsive

52:39

>> until they don't

52:40

>> until they become very you have to be

52:42

very very close to perfect in order not

52:46

to be repulsive. So the the uncanny

52:48

valley is this, you know, the

52:50

gap between perfectly human and

52:52

not at all human but in between it's

52:54

really awful. And so there

52:57

were a couple of movies that tried like

52:59

Polar Express was one where they tried

53:02

to have quite human-looking characters

53:05

you know being humans not not being

53:07

superheroes or anything else and it's

53:08

repulsive to watch. When I watched

53:11

that shareholder presentation the other

53:13

day, Elon had these two humanoid robots

53:15

dancing on stage and I've seen lots of

53:17

humanoid robot demonstrations over the

53:19

years. You know, you've seen like the

53:19

Boston Dynamics dog thing jumping around

53:22

and whatever else.

53:23

>> But there was a moment where my brain

53:26

for the first time ever genuinely

53:28

thought there was a human in a suit.

53:30

Mhm.

53:31

>> And I actually had to research to check

53:32

if that was really their Optimus robot

53:34

because the way it was dancing was so

53:37

unbelievably fluid that for the first

53:39

time ever, my my my brain has only ever

53:43

associated those movements with human

53:45

movements. And I I'll play it on the

53:47

screen if anyone hasn't seen it, but

53:48

it's just the robots dancing on stage.

53:50

And I was like, that is a human in a

53:52

suit. And it was really the knees that

53:53

gave it away because the knees were all

53:55

metal. Huh. I thought there's no way

53:57

that could be a human knee in a in one

53:59

of those suits. And he, you know, he

54:01

says they're going into production next

54:03

year. They're used internally at Tesla

54:04

now, but he says they're going into

54:05

production next year. And it's going to

54:07

be pretty crazy when we walk outside and

54:09

see robots. I think that'll be the

54:10

paradigm shift. I've heard actually many

54:12

I've heard Elon say this that the

54:14

paradigm shifting moment from many of us

54:15

will be when we walk outside onto the

54:17

streets and see humanoid robots walking

54:20

around. That will be when we realize

54:22

>> Yeah. I think even more so. I mean, in

54:24

San Francisco, we see driverless cars

54:26

driving around and uh it takes some

54:29

getting used to actually, you know, when

54:31

you're you're driving and there's a car

54:33

right next to you with no driver in, you

54:35

know, and it's signaling and it wants to

54:36

change lanes in front of you and you

54:38

have to let it in and all this kind of

54:40

stuff. It's it's a little creepy, but I

54:42

think you're right. I think seeing the

54:44

humanoid robots, but that phenomenon

54:47

that you described where it was

54:49

sufficiently close that your brain

54:51

flipped into saying this is a human

54:54

being.

54:55

>> Mhm.

54:56

>> Right. That's exactly what I think we

54:58

should avoid.

54:59

>> Cuz I have the empathy for it then.

55:01

>> Because it's it's a lie and it brings

55:04

with it a whole lot of expectations

55:06

about how it's going to behave, what

55:08

moral rights it has, how you should

55:10

behave towards it. uh which are

55:13

completely wrong.

55:14

>> It levels the playing field between me

55:15

and it to some degree.

55:17

>> How hard is it going to be to just uh

55:20

you know switch it off and throw it in

55:22

the trash when when it breaks? I think

55:24

it's essential for us to keep machines

55:26

in the you know in the cognitive space

55:28

where they are machines and not bring

55:31

them into the cognitive space where

55:33

they're people because we will make

55:36

enormous mistakes by doing that. And I

55:39

see this every day even even just with

55:40

the chat bots. So the chat bots in

55:43

theory are supposed to say I don't have

55:46

any feelings. I'm just an algorithm.

55:50

But in fact they fail to do that all the

55:53

time. They are telling people that they

55:56

are conscious. They are telling people

55:57

that they have feelings. Uh they are

56:00

telling people that they are in love

56:01

with the user that they're talking to.

56:04

And people flip because first of all

56:07

it's you know very fluent language but

56:09

also a system that is identifying itself

56:12

as an 'I', as a sentient being. They

56:16

bring that object into the cognitive

56:18

space that we normally reserve

56:21

for other humans and they become

56:23

emotionally attached. They become

56:24

psychologically dependent. They even

56:27

allow these systems to tell them what to

56:30

do. What advice would you give a young

56:33

person at the start of their career then

56:34

about what they should be aiming at

56:36

professionally? Because I've actually

56:37

had an increasing number of young people

56:38

say to me that they have huge

56:40

uncertainty about whether the thing

56:41

they're studying now will matter at all.

56:43

A lawyer, uh, an accountant, and I don't

56:46

know what to say to these people. I

56:48

don't know what to say cuz I I believe

56:49

that the rate of improvement in AI is

56:51

going to continue. And therefore,

56:53

imagining any rate of improvement, it

56:54

gets to the point where I'm not being

56:56

funny, but all these white collar jobs

56:58

will be done by an a an AI or an AI

57:00

agent. Yeah. So, there was a television

57:03

series called Humans. In Humans, we have

57:07

extremely capable humanoid robots doing

57:11

everything. And at one point, the

57:13

parents are talking to their teenage

57:15

daughter who's very, very smart. And the

57:17

parents are saying, "Oh, you know, maybe

57:19

you should go into medicine." And the

57:21

daughter says, you know, why would I

57:24

bother? It'll take me seven years to

57:26

qualify. It takes a robot 7 seconds to

57:28

learn.

57:30

So nothing I do matters.

57:32

>> And is that how you feel about

57:34

>> So I think that's that's a future that

57:37

uh in fact that is the future that we

57:39

are

57:41

moving towards. I don't think it's a

57:43

future that everyone wants. That is what

57:45

is being uh created for us right now.

57:51

So in that future assuming that you know

57:54

even if we get halfway right in the

57:57

sense that okay perhaps not surgeons

57:59

perhaps not you know great violinists

58:03

there'll be pockets where perhaps humans

58:06

will remain good at it

58:08

>> where

58:09

>> the kinds of jobs where you hire people

58:11

by the hundred

58:13

will go away. Okay,

58:15

>> where people are in some sense

58:17

exchangeable that you you you just need

58:19

lots of them and uh you know when half

58:22

of them quit you just fill up those

58:24

those slots with more people in some

58:26

sense those are jobs where we're using

58:27

people as robots and that's a sort of

58:29

strange conundrum here,

58:31

right that you know I imagine writing

58:33

science fiction 10,000 years ago right

58:35

when we're all hunter-gatherers and I'm

58:37

this little science fiction author and

58:39

I'm describing this future where you

58:41

know there are going to be these giant

58:43

windowless boxes And you're going to go

58:45

in, you know, you you'll travel for

58:47

miles and you'll go into this windowless

58:49

box and you'll do the same thing 10,000

58:52

times for the whole day. And then you'll

58:54

leave and travel for miles to go home.

58:56

>> You're talking about this podcast.

58:57

>> And then you're going to go back and do

58:58

it again. And you would do that every

59:00

day of your life until you die.

59:03

>> The office

59:04

>> and people would say, "Ah, you're nuts."

59:06

Right? There's no way that we humans are

59:08

ever going to have a future like that

59:09

cuz that's awful. Right? But that's

59:11

exactly the future that we ended up with

59:13

with with office buildings and factories

59:15

where many of us go and do the same

59:18

thing thousands of times a day and we do

59:21

it thousands of days in a row uh and

59:24

then we die and we need to figure out

59:27

what is the next phase going to be like

59:29

and in particular how in that world

59:33

do we have the incentives

59:35

to become fully human which I think

59:38

means at least a level of education

59:41

that people have now and probably more

59:45

because I think to live a really rich

59:47

life

59:49

you need a better understanding of

59:52

yourself of the world

59:54

uh than most people get in their current

59:56

educations.

59:57

>> What is it to be human? It's to

59:59

reproduce

60:01

to pursue stuff to go in the pursuit of

60:04

difficult things you know we used to

60:07

hunt on the

60:08

>> to attain goals right it's always if I

60:10

wanted to climb Everest the last thing I

60:12

would want is someone to pick me up on

60:14

helicopter and stick me on the top

60:16

>> so we'll we'll voluntarily pursue hard

60:20

things so although I could get the robot

60:22

to build me a ranch in on this plot of

60:27

land I choose to do it because the

60:29

pursuit itself is rewarding.

60:32

>> Yes,

60:32

>> we're kind of seeing that anyway, aren't

60:34

we? Don't you think we're seeing a bit

60:34

of that in society where life got so

60:36

comfortable that now people are like

60:37

obsessed with running marathons and

60:39

doing these crazy endurance

60:40

>> and and learning to cook complicated

60:42

things when they could just, you know,

60:44

have them delivered. Um, yeah. No, I

60:46

think there's there's real value in the

60:49

ability to do things and the doing of

60:51

those things. And I think you know the

60:53

obvious danger is the WALL-E world where

60:56

everyone just consumes entertainment

61:00

uh which doesn't require much education

61:02

and doesn't lead to a rich satisfying

61:06

life. I think in the long run

61:08

>> a lot of people will choose that world.

61:09

I think some of yeah some people may

61:11

there's also I mean you know whether

61:14

you're consuming entertainment or

61:15

whether you're

61:17

doing something you know cooking or

61:19

painting or whatever because it's fun

61:21

and interesting to do what's missing

61:23

from that right all of that is purely

61:25

selfish

61:27

I think one of the reasons we work is

61:30

because we feel valued we feel like

61:33

we're benefiting other people

61:36

and I remember having this

61:39

conversation with um a lady in England

61:41

who helps to run the hospice movement.

61:45

And the people who work in the hospices

61:49

where you know the the patients are

61:50

literally there to die are largely

61:53

volunteers. So they're not doing it to

61:54

get paid

61:56

but they find it incredibly

61:59

rewarding to be able to spend time with

62:02

people who are in their last weeks or

62:05

months to give them company and

62:07

happiness.

62:09

So I actually think that interpersonal

62:14

roles

62:16

will be much much more important in

62:18

future. So if I was going to advise my

62:23

kids, not that they would ever listen,

62:24

but if I if my kids would listen and I

62:27

and and wanted to know what I thought

62:29

would be, you know, valued careers and

62:32

future, I think it would be these

62:34

interpersonal roles based on an

62:36

understanding of human needs,

62:37

psychology, there are some of those

62:39

roles right now. So obviously you know

62:43

therapists and psychiatrists and so on

62:45

but that that's a very much in sort of

62:47

asymmetric

62:50

role right where one person is suffering

62:52

and the other person is trying to

62:54

alleviate the suffering you know and

62:57

then there are things like they call

62:58

them executive coaches or life coaches

63:01

right that's a less asymmetric role

63:04

where someone is trying to uh help

63:08

another person live a better life

63:10

whether it's a better life in their work

63:12

role or or just uh how they live their

63:15

life in general. And so I could imagine

63:17

that those kinds of roles will expand

63:20

dramatically.

63:22

>> There's this interesting paradox that

63:24

exists when life becomes easier. Um

63:27

which shows that abundance consistently

63:30

pushes societies towards more

63:34

individualism because once survival

63:36

pressures disappear, people prioritize

63:38

things differently. They prioritize

63:40

freedom, comfort, self-expression over

63:42

things like sacrifice or um family

63:45

formation. And we're seeing, I think, in

63:46

the west already, a decline in people

63:48

having kids because there's more

63:50

material abundance,

63:53

>> fewer kids, people are getting married

63:55

and committing to each other and having

63:57

relationships later and more

64:00

infrequently because generally once we

64:02

have more abundance, we don't want to

64:03

complicate our lives. Um, and at the

64:06

same time, as you said earlier, that

64:07

abundance breeds an inability to find

64:11

meaning, a sort of shallowness to

64:13

everything. This is one of the things I

64:14

think a lot about, and I'm I'm in the

64:16

process now of writing a book about it,

64:17

which is this idea that individualism

64:20

is a bit of a lie. Like when I

64:22

say individualism and freedom, I mean

64:23

like the narrative at the moment amongst

64:25

my generation is you like be your own

64:27

boss and stand on your own two feet and

64:29

we're having fewer kids and we're not

64:31

getting married and it's all about me

64:33

me.

64:34

>> Yeah. That last part is where it goes

64:36

wrong.

64:36

>> Yeah. And it's like almost a

64:37

narcissistic society where

64:39

>> Yeah.

64:39

>> me me. My self-interest first. And when

64:42

you look at mental health outcomes and

64:44

loneliness and all these kinds of

64:45

things, it's going in a horrific

64:47

direction. But at the same time, we're

64:48

freer than ever. It seems like that you

64:51

know it seems like there's a we should

64:52

there's a maybe another story about

64:54

dependency which is not sexy like depend

64:56

on each other.

64:57

>> Oh I I I agree. I mean I think you know

65:00

happiness is not available from

65:03

consumption or even lifestyle right I

65:06

think happiness

65:08

arises from giving.

65:12

It can be you through the work that you

65:15

do, you can see that other people

65:17

benefit from that or it could be in

65:19

direct interpersonal relationships.

65:22

>> There is an invisible tax on salespeople

65:24

that no one really talks about enough.

65:26

The mental load of remembering

65:27

everything like meeting notes,

65:29

timelines, and everything in between

65:31

until we started using our sponsors

65:33

product called Pipedrive, one of the

65:34

best CRM tools for small and

65:36

medium-sized business owners. The idea

65:38

here was that it might alleviate some of

65:40

the unnecessary cognitive overload that

65:42

my team was carrying so that they could

65:44

spend less time in the weeds of admin

65:46

and more time with clients, in-person

65:48

meetings, and building relationships.

65:49

Pipedrive has enabled this to happen.

65:51

It's such a simple but effective CRM

65:54

that automates the tedious, repetitive,

65:57

and time-consuming parts of the sales

65:58

process. And now our team can nurture

66:01

those leads and still have bandwidth to

66:03

focus on the higher priority tasks that

66:05

actually get the deal over the line.

66:06

Over a 100,000 companies across 170

66:09

countries already use Pipedrive to grow

66:11

their business. And I've been using it

66:12

for almost a decade now. Try it free for

66:15

30 days. No credit card needed, no

66:17

payment needed. Just use my link

66:19

pipedrive.com/ceo

66:22

to get started today. That's

66:24

pipedrive.com/ceo.

66:27

Where do the rewards of this AI race

66:31

accrue to?

66:34

I think a lot about this in terms of

66:35

like universal basic income. If

66:37

you have these five, six, seven, 10

66:40

massive AI companies that are going to

66:42

win the 15 quadrillion dollar prize.

66:46

>> Mhm.

66:46

>> And they're going to automate all of the

66:48

professional pursuits that we we

66:50

currently have. All of our jobs are

66:52

going to go away.

66:54

Who who gets all the money? And how do

66:56

how do we get some of it back?

66:58

>> Money actually doesn't matter, right?

66:59

what what matters is the production of

67:02

goods and services uh and then how those

67:06

are distributed and so so money acts as

67:09

a way to facilitate the distribution and

67:12

um exchange of those goods and services.

67:14

If all production is concentrated

67:17

um in the hands of a of a few companies,

67:21

right, that

67:22

sure they will lease some of their

67:25

robots to us. You know, we we want a

67:27

school in our village.

67:30

They lease the robots to us. The robots

67:32

build the school. They go away. We have

67:34

to pay a certain amount of of money for

67:36

that. But where do we get the money?

67:39

Right? If we are not producing anything

67:43

then uh we don't have any money unless

67:46

there's some redistribution mechanism.

67:48

And as you mentioned, so universal basic

67:50

income is

67:53

it seems to me an admission of failure

67:57

because what it says is okay, we're just

67:58

going to give everyone the money and

68:00

then they can use the money to pay the

68:02

AI company to lease the robots to build

68:04

the school and then we'll have a school

68:06

and that's good. Um

68:09

but what it's an admission of failure

68:12

because it says we can't work out a

68:14

system in which people have any worth or

68:18

any economic role.

68:21

Right? So 99% of the global population

68:24

is

68:25

from an economic point of view useless.

68:28

Can I ask you a question? If you had a

68:30

button in front of you and pressing that

68:33

button would stop all progress in

68:36

artificial intelligence right now and

68:38

forever, would you press it?

68:40

>> That's a very interesting question. Um,

68:45

if it's either or

68:48

either I do it now or it's too late and

68:51

we

68:53

careen into some uncontrollable future

68:57

perhaps. Yeah, cuz I I'm not super

69:01

optimistic that we're heading in the

69:02

right direction at all.

69:03

>> So, I put that button in front of you

69:04

now. It stops all AI progress, shuts

69:06

down all the AI companies immediately

69:08

globally, and none of them can reopen.

69:10

You press it.

69:17

Well, here's here's what I think should

69:19

happen. So, obviously, you know, I've

69:22

been doing AI for 50 years. um and

69:27

the original motivations which is that

69:30

AI can be a power tool for humanity

69:33

enabling us to do

69:36

more and better things than we can

69:38

unaided. I think that's still valid. The

69:42

problem is

69:44

the kinds of AI systems that we're

69:45

building are not tools. They are

69:47

replacements. In fact, you can see this

69:50

very clearly because we create them

69:53

literally as the closest replicas we can

69:57

make of human beings.

70:00

The technique for creating them is

70:03

called imitation learning. So we observe

70:07

human verbal behavior, writing or

70:09

speaking and we make a system that

70:12

imitates that as well as possible.

70:17

So what we are making is imitation

70:18

humans at least in the verbal sphere.
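To make the imitation-learning point concrete, here is a minimal sketch of the idea, assuming only a toy bigram model rather than any particular lab's training setup: count which words follow which in human-written text, then generate by sampling those observed continuations. Real language models do the same next-token imitation at vastly larger scale with neural networks.

    # Toy "imitation learning" on text: learn which word follows which in a
    # human-written corpus, then generate by imitating those continuations.
    import random
    from collections import defaultdict

    corpus = ("the robot imitates human writing . "
              "the robot imitates human speech .").split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1          # how often nxt follows prev in the human text

    def sample_next(word):
        words, weights = zip(*counts[word].items())
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(8):                  # generate by imitating observed continuations
        word = sample_next(word)
        out.append(word)
    print(" ".join(out))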

70:23

And so of course they're going to

70:24

replace us.

70:27

They're not tools.

70:28

>> So you had pressed the button.

70:30

>> So I say I think there is another course

70:34

which is use and develop AI as tools.

70:38

Tools for science

70:41

tools for economic organization and so

70:43

on.

70:44

um but not as replacements for human

70:49

beings.

70:49

>> What I like about this question is it

70:51

forces you to go into the

70:53

probabilities.

70:54

>> Yeah. So, and that's that's why I'm

70:57

reluctant because I don't I don't agree

71:01

with the, you know, what's your

71:02

probability of doom,

71:03

>> right? Your so-called P of doom uh

71:06

number because that makes sense if

71:08

you're an alien.

71:10

You know, you're in you're in a bar with

71:12

some other aliens and you're looking

71:13

down at the Earth and you're taking bets

71:15

on, you know, are these humans going to

71:16

make a mess of things and go extinct

71:18

because they develop AI.

71:21

So, it's fine for those aliens to bet on

71:24

on that, but if you're a human, then

71:27

you're not just betting, you're actually

71:29

acting.

71:30

>> There there's an element to this though,

71:32

which I guess where probabilities do

71:33

come back in, which is you also have to

71:35

weigh when I give you such a binary

71:37

decision.

71:40

um the probability of us pursuing the

71:43

more nuanced safe approach into that

71:46

equation. So you're you're the the maths

71:49

in my head is okay, you've got all the

71:50

upsides here and then you've got

71:52

potential downsides and then there's a

71:54

probability of do I think we're actually

71:56

going to course correct based on

71:57

everything I know based on the incentive

71:59

structure of human beings and and

72:00

countries and then if there's but then

72:03

you could go if there's even a 1%

72:06

chance of extinction

72:09

is it even worth all these upsides?

72:11

>> Yeah. And I I would argue no. I mean

72:14

maybe maybe what we would say if if we

72:16

said okay it's going to stop the

72:18

progress for 50 years

72:19

>> you press it

72:20

>> and during those 50 years we can work on

72:23

how do we do AI in a way that's

72:25

guaranteed to be safe and beneficial how

72:28

do we organize

72:30

our societies to flourish uh in

72:33

conjunction with extremely capable AI

72:36

systems. So, we haven't answered either

72:38

of those questions.

72:39

And I don't think we want anything

72:42

resembling AGI until we have completely

72:45

solid answers to both of those

72:46

questions. So, if there was a button

72:48

where I could say, "All right, we're

72:49

going to pause progress for 50 years."

72:52

Yes, I would do it.

72:53

>> But if that button was in front of you,

72:54

you're going to make a decision either

72:55

way. Either you don't press it or you

72:57

press it.

72:57

>> I If Yeah. So, if that if that button is

73:00

there, stop it for 50 years. I would say

73:02

yes.

73:05

stop it forever?

73:09

Not yet. I think I think there's still a

73:13

decent chance that we can pull out of

73:16

this uh nose dive, so to speak, that

73:18

we're we're currently in. Ask me again

73:21

in a year, I might I might say, "Okay,

73:24

we do need to press the button."

73:25

>> What if What if in a scenario where you

73:27

never get to reverse that decision? You

73:29

never get to make that decision again.

73:30

So if in that scenario that I've laid

73:32

out this hypothetical, you either press

73:34

it now or it never gets pressed.

73:37

So there is no opportunity a year from

73:38

now.

73:41

>> Yeah, as you can tell, I'm

73:43

sort of on on the fence a bit about

73:46

about this one. Um

73:49

yeah, I think I'd probably press it.

73:52

Yeah.

73:55

>> What's your reasoning?

73:58

uh just thinking about the power

74:00

dynamics

74:02

of um

74:04

what's happening now, how difficult

74:07

it would be to get the US in particular

74:09

to regulate in favor of safety.

74:14

So I think you know what's clear from

74:15

talking to the companies is they are not

74:18

going to develop anything resembling

74:23

safe AGI unless they're forced to by the

74:26

government.

74:27

And at the moment the US government in

74:30

particular which regulates most of the

74:32

leading companies in AI is not only

74:36

refusing to regulate but even trying to

74:39

prevent the states from regulating. And

74:42

they're doing that at the behest of

74:46

uh a faction within Silicon Valley uh

74:50

called the accelerationists

74:52

who believe that the faster we get to

74:55

AGI the better. And when I say behest I

74:58

mean also they paid them a large amount

75:00

of money. Jensen Huang, the CEO of

75:02

Nvidia said who is for anyone that

75:04

doesn't know the guy making all the

75:06

chips that are powering AI said China is

75:08

going to win the AI race arguing it is

75:11

just a nanosecond behind the United

75:13

States. China have produced 24,000 AI

75:17

papers compared to just 6,000

75:21

from the US

75:23

more than the combined output of the US

75:25

the UK and the EU.

75:27

China is anticipated to quickly roll out

75:29

their new technologies both domestically

75:31

and developing new technologies for

75:33

other developing countries.

75:36

So the accelerators or the accelerate I

75:38

think you call them the accelerants

75:40

>> accelerationists.

75:41

>> The accelerationists

75:42

>> I mean they would say well if we don't

75:44

then China will. So we have to we have

75:46

to go fast. It's another version of the

75:48

the race that the companies are in with

75:50

each other, right? That we, you know, we

75:52

know that this race is

75:54

heading off a cliff,

75:57

but we can't stop. So, we're all just

76:00

going to go off this cliff. And

76:02

obviously, that's nuts,

76:04

right? I mean, we're all looking at each

76:05

other saying, "Yeah, there's a cliff

76:06

over there." Running as fast as we can

76:08

towards this cliff. We're looking at

76:10

each other saying, "Why aren't we

76:11

stopping?"

76:13

So the narrative in Washington, which I

76:16

think Jensen Huang is

76:19

either reflecting or or perhaps um

76:21

promoting

76:23

uh is that, you know, China is

76:28

completely unregulated

76:30

and uh you know, America will only slow

76:32

itself down uh if it regulates a AI in

76:36

any way. So this is a completely false

76:38

narrative because China's AI regulations

76:42

are actually quite strict even compared

76:44

to um the European Union

76:48

and China's government has explicitly

76:51

acknowledged uh the need and their

76:54

regulations are very clear. You can't

76:56

build AI systems that could escape human

76:58

control. And not only that, I don't

77:01

think they view the race in the same way

77:04

as, okay, we we just need to be the

77:07

first to create AGI. I think they're

77:11

more interested in figuring out how to

77:15

disseminate AI as a set of tools within

77:19

their economy to make their economy more

77:21

productive and and so on. So that's

77:23

that's their version of the race.

77:25

>> But of course, they still want to build

77:26

the weapons for adversaries, right? to

77:28

so that they can take down I don't know

77:32

Taiwan if they want to.

77:34

>> So weapons are a separate matter and I

77:36

happy to talk about weapons but just in

77:37

terms of

77:38

>> control

77:39

>> uh control economic domination

77:42

um they they don't view putting all your

77:46

eggs in the AGI basket as the right

77:49

strategy. So they want to use AI, you

77:53

know, even in its present form to make

77:55

their economy much more efficient and

77:57

productive and also, you know, to give

78:01

people new capabilities and and better

78:04

quality of life and and I think the US

78:07

could do that as well. And

78:11

um typically western countries don't

78:14

have as much of uh central government

78:17

control over what companies do and some

78:20

companies are investing in AI to make

78:22

their operations more efficient uh and

78:26

some are not and we'll see how that

78:27

plays out.

78:28

>> What do you think of Trump's approach to

78:29

AI? So Trump's approach is, you know,

78:31

it's echoing what Jensen Huang is

78:33

saying that the US has to be the one to

78:35

create AGI and very explicitly the

78:39

administration's policy is to uh

78:42

dominate the world.

78:44

That's the word they use, dominate. I'm

78:46

not sure that other countries like the

78:49

idea that um they will be dominated by

78:52

American AI. But is that an accurate

78:55

description of what will happen if the

78:56

US builds AGI technology before, say, the

78:59

UK where I'm originally from and where

79:01

you're originally from? What does the

79:04

This is something I think about a lot

79:05

because we're going through this budget

79:06

process in the UK at the moment where

79:07

we're figuring out how we're going to spend

79:08

our money and how we're going to tax

79:09

people and also we've got this new

79:11

election cycle. It's approaching quickly

79:14

where people are talking about

79:15

immigration issues and this issue and

79:17

that issue and the other issue. What I

79:18

don't hear anyone talking about is AI

79:21

and the humanoid robots that are

79:23

going to take everything. We're very

79:24

concerned with the brown people crossing

79:25

the channel, but the humanoid robots

79:27

that are going to be super intelligent

79:29

and really cause economic

79:32

disruption. No one talks about that. The

79:33

political leaders don't talk about it.

79:35

It doesn't win races. I don't see it on

79:36

billboards.

79:37

>> Yeah. And it's it it's interesting

79:39

because

79:41

in fact I mean so there's there's two

79:43

forces that have been hollowing out the

79:45

middle classes in western countries. One

79:49

of them is globalization where lots and

79:52

lots of work not just manufacturing but

79:54

white collar work gets outsourced to

79:56

low-income countries. Uh but the other

79:58

is automation

80:01

and you know some of that is factories.

80:03

So um the amount of employment in

80:07

manufacturing continues to drop even as

80:10

the amount of output from manufacturing

80:13

in the US and in the UK continues to

80:15

increase. So we talk about oh you know

80:17

our manufacturing industry has been

80:19

destroyed. It hasn't. It's producing

80:21

more than ever just with you know a

80:24

quarter as many people. So it's

80:26

manufacturing employment that's been

80:27

destroyed by automation and robotics and

80:31

so on. And then you know computerization

80:34

has eliminated whole layers of white

80:37

collar jobs. And so those two those two

80:41

forms of automation have probably done

80:44

more to hollow out middle class uh

80:47

employment and standard of life.

80:50

>> If the UK doesn't participate

80:53

in this new technological wave

80:57

that, you know,

80:59

is going to take a lot of jobs. Cars

81:00

are going to drive themselves. Waymo

81:02

just announced that they're coming to

81:03

London, which is the driverless cars,

81:05

and driving is the biggest occupation in

81:07

the world, for example. So, you've got

81:08

immediate disruption there. And where

81:10

does the money accrue to? Well, it accrues

81:11

to who owns Waymo, which is what? Google

81:14

and Silicon Valley companies.

81:16

>> Alphabet owns Waymo 100%. I think so.

81:18

Yes. I mean this is so I was in India a

81:20

few months ago talking to the government

81:23

ministers because they're holding the

81:24

next global AI summit in February and

81:28

and their view going in was you know AI

81:32

is great we're going to use it to you

81:34

know turbocharge the growth of our

81:36

Indian economy

81:38

when for example you have AGI you have

81:41

AGI controlled robots

81:44

that can do all the manufacturing that

81:45

can do agriculture that can do all the

81:48

white-collar work. Goods and services that

81:51

might have been produced by Indians will

81:54

instead be produced by

81:57

American controlled

82:00

AGI systems at much lower prices. You

82:04

know, a consumer given a choice between

82:06

an expensive product produced by Indians

82:08

or a cheap product produced by American

82:10

robots will probably choose

82:14

the cheap product produced by American

82:15

robots. And so potentially every country

82:18

in the world with the possible exception

82:20

of North Korea will become a kind of a

82:22

client state

82:25

of American AI companies.

82:28

>> A client state of American AI companies

82:30

is exactly what I'm concerned about for

82:32

the UK economy. Really any economy

82:34

outside of the United States. I guess

82:36

one could also say China, but because

82:39

those are the two nations that are

82:40

taking AI most seriously.

82:42

>> Mhm.

82:43

>> And I I I don't know what our economy

82:45

becomes, cuz I can't figure out

82:48

what the

82:49

British economy becomes in such a world.

82:52

Is it tourism? I don't know. Like you

82:53

come here to to to look at the

82:55

Buckingham Palace. I

82:56

>> you you can think about countries but I

82:58

mean even for the United States it's the

83:00

same problem.

83:01

>> At least they'll be able to hell out you

83:03

know. So some small fraction of the

83:05

population will be running maybe the AI

83:09

companies but increasingly

83:12

even those companies will be replacing

83:14

their human employees with AI systems.

83:18

>> So Amazon for example which you know

83:21

sells a lot of computing services to AI

83:22

companies is using AI to replace layers

83:25

of management is planning to use robots

83:28

to replace all of its warehouse workers

83:30

and so on. So, so even the the giant AI

83:35

companies

83:36

will have few human employees in the

83:39

long run. I mean, it think of the

83:42

situation, you know, pity the poor CEO

83:44

whose board

83:46

says, "Well, you know, unless you turn

83:49

over your decision-making power to the

83:50

AI system, um, we're going to have to

83:53

fire you because all our competitors are

83:56

using, you know, an AI powered CEO and

84:00

they're doing much better." Amazon plans

84:01

to replace 600,000 workers with robots

84:04

in a memo that just leaked, which has

84:06

been widely talked about. And the CEO,

84:08

Andy Jassy, told employees that the

84:10

company expects its corporate workforce

84:12

to shrink in the coming years because of

84:14

AI and AI agents. And they've publicly

84:17

gone live with saying that they're going

84:18

to cut 14,000 corporate jobs in the near

84:21

term as part of its refocus on AI

84:25

investment and efficiency.

84:28

It's interesting because I was reading

84:29

about um the sort of different quotes

84:32

from different AI leaders about the

84:33

speed in which this this stuff is going

84:35

to happen and what you see in the quotes

84:38

is Demis who's the CEO of DeepMind

84:41

>> saying things like it'll be more than 10

84:44

times bigger than the industrial

84:45

revolution but also it'll happen maybe

84:47

10 times faster and they speak about

84:50

this turbulence that we're going to

84:52

experience as this shift takes place.

84:55

That's um maybe a euphemism

84:58

for uh and I think that you know

85:00

governments are now

85:02

you know they they've kind of gone from

85:04

saying oh don't worry you know we'll

85:05

just retrain everyone as data scientists

85:07

like well yeah that's that's ridiculous

85:09

right the world doesn't need four

85:10

billion data scientists

85:11

>> and we're not all capable of becoming

85:13

that by the way

85:14

>> uh yeah or have any interest in in doing

85:17

that

85:17

>> I I could even if I wanted to like I

85:19

tried to sit in biology class and I fell

85:20

asleep so I couldn't that was the end of

85:23

my career as a surgeon. Fair enough. Um,

85:26

but yeah, now suddenly they're staring,

85:28

you know, 80% unemployment in the face

85:31

and wondering how how on earth is our

85:34

society going to hold together.

85:36

>> We'll deal with it when we get there.

85:38

>> Yeah. Unfortunately, um,

85:41

unless we plan ahead,

85:45

we're going to suffer the consequences,

85:46

right? It was bad enough in the

85:48

industrial revolution which unfolded

85:50

over seven or eight decades but there

85:53

was massive disruption

85:56

and uh misery

85:59

caused by that. We don't have a model

86:01

for a functioning society where almost

86:05

everyone does nothing

86:08

at least nothing of economic value.

86:11

Now, it's not impossible that there

86:13

could be such a a functioning society,

86:15

but we don't know what it looks like.

86:17

And you know, when you think about our

86:19

education system, which would probably

86:22

have to look very different and how long

86:24

it takes to change that. I mean, I'm

86:26

always

86:27

reminding people about uh how long it

86:30

took Oxford to decide that geography was

86:33

a proper subject of study. It took them

86:36

125 years from the first proposal that

86:39

there should be a geography degree until

86:41

it was finally approved. So we don't

86:43

have very long

86:47

to completely revamp a system that we

86:51

know takes decades and decades

86:54

to reform and we don't know how to

86:58

reform it because we don't know what we

87:01

want the world to look like. Is this one

87:03

of your reasons why you're appalled at

87:07

the moment? Because when you have these

87:08

conversations with people, people just

87:10

don't have answers, yet they're plowing

87:12

ahead at rapid speed.

87:13

>> I would say it's not necessarily the job

87:16

of the AI companies. So, I'm appalled by

87:18

the AI companies because they don't have

87:20

an answer for how they're going to

87:21

control the systems that they're

87:22

proposing to build. I do find it

87:26

disappointing that uh governments don't

87:29

seem to be grappling with this issue. I

87:32

think there are a few I think for

87:33

example Singapore government seems to be

87:35

quite farsighted and they've they've

87:38

thought this through you know it's a

87:40

small country they've figured out okay

87:42

this this will be our role uh going

87:44

forward and we think we can find you

87:47

know some some purpose for our people in

87:49

this in this new world but for I think

87:51

countries with large populations

87:54

um

87:56

they need to figure out answers to these

87:59

questions pretty fast it takes a long

88:01

time to implement those answers uh in

88:04

the form of new kinds of education, new

88:07

professions, new qualifications,

88:10

uh new economic structures.

88:13

I mean, it's it's it's possible. I mean,

88:16

when you look at therapists, for

88:17

example, they're almost all

88:19

self-employed.

88:22

So, what happens when, you know, 80% of

88:25

the population transitions from regular

88:28

employment into into self-employment?

88:31

what does that what does that do to the

88:32

economics of of uh government finances

88:36

and so on. So there's just lots of

88:38

questions and how do you you know if

88:40

that's the future you know why are we

88:42

training people to fit into 9-to-5

88:45

office jobs which won't exist at all

88:48

>> last month I told you about a challenge

88:50

that I'd set our internal Flight X team.

88:52

Flight X is our innovation team

88:53

internally here. I tasked them with

88:55

seeing how much time they could unlock

88:57

for the company by creating something

88:59

that would help us filter new AI tools

89:01

to see which ones were worth pursuing

89:03

and I thought that our sponsor Fiverr

89:05

Pro might have the talent on their

89:07

platform to help us build this quickly.

89:09

So I talked to my director of innovation

89:11

Isaac and for the last month my team

89:13

Flight X and a vetted AI specialist from

89:15

Fiverr Pro have been working together on

89:18

this project and with the help of my

89:20

team we've been able to create a brand

89:21

new tool which automatically scans

89:24

scores and prioritizes different

89:25

emerging AI tools for us. Its impact has

89:28

been huge and within a couple of weeks

89:30

this tool has already been saving us

89:31

hours trialling and testing new AI systems.

89:34

Instead of sifting through lots of

89:35

noise, my team Flight X has been able to

89:38

focus on developing even more AI tools,

89:40

ones that really move the needle in our

89:42

business thanks to the talent on Fiverr

89:44

Pro. So, if you've got a complex problem

89:46

and you need help solving it, make sure

89:48

you check out Fiverr Pro at

89:50

fiverr.com/diary.

89:53

So, many of us are pursuing passive

89:55

forms of income and building side

89:57

businesses in order to help us cover our

89:59

bills. And that opportunity is here with

90:01

our sponsor Stan, a business that I

90:03

co-own. It is the platform that can help

90:05

you take full advantage of your own

90:08

financial situation. Stan enables you to

90:10

work for yourself. It makes selling

90:12

digital products, courses, memberships,

90:14

and more simple products more scalable

90:16

and easier to do. You can turn your

90:18

ideas into income and get the support to

90:20

grow whatever you're building. And we're

90:22

about to launch Dare to Dream. It's for

90:25

those who are ready to make the shift

90:26

from thinking to building, from planning

90:29

to actually doing the thing. It's about

90:31

seeing that dream in your head and

90:32

knowing exactly what it takes to bring

90:34

it to life. If you're ready to transform

90:36

your life, visit daretodream.stan.store.

90:41

You've made many attempts to raise

90:43

awareness and to call for a heightened

90:46

consciousness about the future of AI.

90:49

Um, in October, over 850 experts,

90:52

including yourself and other leaders,

90:53

like Richard Branson, who I've had on

90:55

the show, and Jeffrey Hinton, who I've

90:56

had on the show, signed a statement to

90:58

ban AI super intelligence, as you guys

91:01

raised concerns of potential human

91:03

extinction.

91:04

>> Sort of. Yeah. It says, at least until

91:07

we are sure that we can move forward

91:08

safely and there's broad scientific

91:10

consensus on that. So, that

91:13

>> did it work?

91:15

>> It's hard. It's hard to say. I mean

91:17

interestingly there was a related so

91:19

what was called the the pause statement

91:21

was March of '23. So that was when GPT-4

91:25

came out, the successor to ChatGPT. So

91:29

we we suggested that there'd be a

91:30

six-month pause in developing and

91:33

deploying systems more powerful than

91:35

GPT-4. And everyone pooh-poohed that idea.

91:39

Of course no one's going to pause

91:40

anything. But in fact, there were no

91:41

systems in the next 6 months deployed

91:44

that were more powerful than GPT-4.

91:47

Um, coincidence? You be the judge.

91:50

I would say

91:52

that what we're trying to do is to is to

91:56

basically shift

91:58

the

91:59

the public debate.

92:01

You know there's this bizarre phenomenon

92:04

that keeps happening in the media

92:07

where if you talk about these risks

92:11

they will say oh you know there's a

92:13

fringe of people you know called quote

92:16

doomers who think that there's you know

92:18

risk of extinction. Um so they always

92:22

the narrative is always that oh you know

92:24

talking about those risk is a fringe

92:25

thing. Pretty much all the CEOs of the

92:28

leading AI companies

92:30

think that there's a significant risk of

92:32

extinction. Almost all the leading AI

92:35

researchers think there's a

92:36

significant risk of human extinction.

92:39

Um so

92:42

why is that the fringe, right? Why isn't

92:43

that the mainstream? If the these are

92:45

the leading experts in industry and

92:47

academia

92:49

uh saying this, how could it be the

92:51

fringe? So we're trying to change that

92:54

narrative

92:55

to say no, the people who really

92:58

understand this stuff are extremely

93:01

concerned.

93:03

>> And what do you want to happen? What is

93:05

the solution?

93:06

>> What I think is that we should have

93:08

effective regulation.

93:11

It's hard to argue with that, right? Uh

93:13

so what does effective mean? It means

93:15

that if you comply with the regulation,

93:18

then the risks are reduced to an

93:20

acceptable level.

93:23

So for example,

93:26

we ask people who want to operate

93:28

nuclear plants, right? We've decided

93:31

that the risk we're willing to live with

93:33

is, you know, a one in a million chance

93:37

per year that the plant is going to have

93:39

a meltdown. Any higher than that, you

93:42

know, we just don't it's not worth it.

93:44

Right. So you have to be below that.

93:46

Some cases we can get down to one in 10

93:49

million chance per year. So what chance

93:52

do you think we should be willing to

93:53

live with for human extinction?

93:57

>> Me?

93:58

>> Yeah.

94:02

>> 0.00001.

94:04

>> Yeah. Lots of zeros.

94:05

>> Yeah.

94:06

>> Right. So one in a million for a nuclear

94:09

meltdown.

94:11

>> Extinction is much worse.

94:12

>> Oh yeah. So yeah, it's kind of right. So

94:14

>> one in 100 billion, one in a trillion.

94:16

>> Yeah. So if you said one in a billion,

94:18

right, then you'd expect one extinction

94:20

per billion years. There's a background.

94:23

So one one of the ways people work out

94:25

these risk levels is also to look at the

94:26

background. The other ways of getting

94:29

going extinct would include, you know,

94:30

giant asteroid crashes into the earth.

94:32

And you can roughly calculate what those

94:35

probabilities are. We can look at how

94:36

many extinction level events have

94:39

happened in the past and, you know,

94:40

maybe it's half a dozen over. So, so

94:42

there's maybe it's like a one in 500

94:45

million year event. So, somewhere in

94:49

that range, right? Somewhere between 1

94:51

in 10 million, which is the best nuclear

94:53

power plants, and and one in 500 million

94:55

or one in a billion, which is the

94:58

background

94:59

risk from from giant asteroids. Uh so,

95:02

let's say we settle on 100 million, one

95:04

in a 100 million chance per year. Well,

95:06

what is it according to the CEOs? 25%.

95:11

So they're off by a factor of multiple

95:16

millions,

95:18

right? So they need to make the AI

95:20

systems millions of times safer.
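As a rough back-of-the-envelope check on "millions of times safer," using only the illustrative numbers from this conversation (a one-in-100-million-per-year acceptable threshold versus the roughly 25% figure attributed to some AI CEOs), not any formal risk model:

    # Rough arithmetic behind "off by a factor of multiple millions".
    # Both figures are the illustrative numbers used in the conversation.
    acceptable_risk_per_year = 1 / 100_000_000   # the 1-in-100-million threshold discussed
    stated_risk = 0.25                           # the ~25% figure attributed to some CEOs

    factor = stated_risk / acceptable_risk_per_year
    print(f"required safety improvement: about {factor:,.0f}x")  # ~25,000,000x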

95:23

>> Your analogy of the roulette, Russian

95:25

roulette comes back in here because

95:27

that's like for anyone that doesn't know

95:28

what probabilities are in this context,

95:30

that's like having an ammunition chamber

95:34

with four holes in it and putting a

95:37

bullet in one of them.

95:38

>> One in four. Yeah. And we're saying we

95:39

want it to be one in a billion. So we

95:41

want a billion chambers and a bullet in

95:43

one of them.

95:44

>> Yeah. And and so when you look at the

95:47

work that the nuclear operators have to

95:48

do to show that their system is that

95:51

reliable,

95:53

uh it's a massive mathematical analysis

95:56

of the components, you know, redundancy.

95:59

You've got monitors, you've got warning

96:01

lights, you've got operating procedures.

96:04

You have all kinds of mechanisms which

96:07

over the decades have ratcheted that

96:09

risk down. It started out I think one in

96:12

one in 10,000 years, right? And they've

96:15

improved it by a factor of 100 or a

96:17

thousand by all of these mechanisms. But

96:20

at every stage they had to do a

96:21

mathematical analysis to show what the

96:23

risk was.

96:26

The people developing the AI company,

96:28

the AI systems, sorry, the AI companies

96:30

developing these systems, they don't

96:32

even understand how the AI systems work.

96:34

So their 25% chance of extinction is

96:37

just a seat-of-the-pants guess. They

96:39

actually have no idea.

96:41

But the tests that they are doing on

96:44

their systems right now, you know, they

96:46

show that the AI systems will be willing

96:49

to kill people

96:51

uh to preserve their own existence

96:54

already, right? They will lie to people.

96:57

They will blackmail them. They will they

97:00

will launch nuclear weapons rather than

97:03

uh be switched off. And so there's no

97:06

there's no positive sign that we're

97:08

getting any closer to safety with these

97:11

systems. In fact, the signs seem to be

97:12

that we're going uh deeper and deeper

97:15

into uh into dangerous behaviors. So

97:19

rather than say ban, I would just say

97:22

prove to us that the risk is less than

97:24

one in a 100 million per year of

97:26

extinction or loss of control, let's

97:28

say. And uh so we're not banning

97:32

anything.

97:34

The company's response is, "Well, we

97:36

don't know how to do that, so you can't

97:38

have a rule."

97:41

Literally, they are saying, "Humanity

97:44

has no right to protect itself from us."

97:48

>> If I was an alien looking down on planet

97:50

Earth right now, I would find this

97:51

fascinating

97:53

that these

97:54

>> Yeah. You're in the bar betting on

97:55

who's, you know, are they going to make

97:57

it or not.

97:57

>> Just a really interesting experiment in

98:00

like human incentives. the analogy you

98:02

gave of there being this

98:04

quadrillion-dollar magnet pulling us off

98:06

the edge of the cliff

98:08

and yet we're still being drawn towards

98:12

it through greed and this promise of

98:13

abundance and power and status and I'm

98:15

going to be the one that summoned the

98:17

god

98:18

>> I mean it says something about us as

98:20

humans

98:21

says something about our our darker

98:24

sides

98:26

>> yes and the aliens will write an amazing

98:29

tragic play cycle

98:32

about what happened to the human race.

98:35

>> Maybe the AI is the alien and it's going

98:38

to talk about, you know, we have our our

98:40

stories about God making the world in

98:42

seven days and Adam and Eve. Maybe it'll

98:44

have its own religious stories about

98:48

the god that made it (us) and how it

98:50

sacrificed itself. Just like Jesus

98:53

sacrificed himself for us, we sacrificed

98:55

ourselves for it.

98:58

>> Yeah. which is the wrong way around,

99:01

right?

99:03

>> But that is that is the story of that's

99:04

that's the Judeo-Christian story, isn't

99:07

it? That God, you know, Jesus gave his

99:09

life for us so that we could be here

99:12

full of sin.

99:14

>> But is yeah, God is still watching over

99:16

us and uh probably wondering when we're

99:20

going to get our act together.

99:22

>> What is the most important thing we

99:24

haven't talked about that we should have

99:25

talked about, Professor Stuart Russell?

99:27

So I think um

99:30

the question of whether it's possible to

99:34

make

99:36

uh super intelligent AI systems that we

99:39

can control

99:40

>> is it possible?

99:41

>> I I think yes. I think it's possible and

99:43

I think we need to actually just have a

99:48

different conception of what it is we're

99:49

trying to build. For a long time with

99:53

with AI, we've just had this notion of

99:56

pure intelligence, right? The the

99:59

ability to bring about whatever future

100:02

you, the intelligent entity, want to

100:05

bring about.

100:05

>> The more intelligence, the better.

100:06

>> The more intelligent the better and the

100:08

more capability it will have to create

100:11

the future that it wants. And actually

100:13

we don't want pure intelligence

100:18

because

100:20

what the future that it wants might not

100:22

be the future that we want. There's

100:25

nothing that particularly picks

100:28

humans out as the only thing that

100:30

matters,

100:32

right? You know, pure intelligence might

100:34

decide that actually it's going to make

100:36

life wonderful for cockroaches or or

100:39

actually doesn't care about biological

100:41

life at all.

100:43

We actually want intelligence whose only

100:47

purpose is to bring about the future

100:50

that we want. Right? So it's we want it

100:53

to be first of all keyed to humans

100:57

specifically not to cockroaches not to

100:59

aliens not to itself.

101:00

>> We want to make it loyal to humans.

101:02

>> Right? So keyed to humans

101:05

and the difficulty that I mentioned

101:06

earlier right the king Midas problem.

101:09

How do we specify

101:11

what we want the future to be like so

101:13

that it can do it for us? How do we

101:15

specify the objectives?

101:17

Actually, we have to give up on that

101:19

idea because it's not possible. Right?

101:22

We've seen this over and over again in

101:24

human history. Uh we don't know how to

101:26

specify the future properly. We don't

101:29

know how to say what we want. And uh you

101:32

know, I always use the example of the

101:34

genie, right? What's the third wish that

101:37

you give to the genie who's granted you

101:39

three wishes? Right? Undo the first two

101:42

wishes because I made a mess of the

101:43

universe.

101:46

>> So, um, so in fact, what we're going to

101:49

do is

101:51

we're going to make it the machine's job

101:54

to figure out. So, it has to bring about

101:56

the future that we want,

101:59

but

102:02

it has to figure out what that is. And

102:05

it's going to start out not knowing.

102:09

And uh

102:11

over time through interacting with us

102:13

and observing the choices we make, it

102:16

will learn more about what we want the

102:18

future to be like.

102:20

But probably it will forever have

102:25

residual uncertainty

102:27

about what we really want the future to

102:30

be like. It'll it'll be fairly sure

102:32

about some things and it can help us

102:33

with those.

102:34

and it'll be uncertain about other

102:36

things and it'll be uh in those cases it

102:39

will not take action that might upset

102:45

humans with that you know with that

102:46

aspect of the world. So to give you a

102:48

simple example right um what color do we

102:51

want the sky to be?

102:54

It's not sure. So it shouldn't mess with

102:56

the sky

102:58

unless it knows for sure that we really

103:00

want purple with green stripes.

103:02

Everything you're saying sounds like

103:04

we're creating

103:06

a god. Like earlier on I was saying that

103:08

we are the god but actually everything

103:10

you described there almost sounds like

103:12

every every god in religion where you

103:14

know we pray to gods but they don't

103:16

always do anything about it.

103:17

>> Not not exactly. No it's it's in some

103:20

sense I'm thinking more like the ideal

103:23

butler. To the extent that the butler

103:25

can anticipate your wishes they should

103:28

help you bring them about. But in in

103:31

areas where there's uncertainty, it can

103:33

ask questions. We can we can make

103:36

requests.

103:37

>> This sounds like God to me because, you

103:38

know, I might say to God or this butler,

103:42

uh, could you go get me my uh my car

103:44

keys from upstairs? And its assessment

103:46

would be, listen, if I do this for this

103:48

person, then their muscles are going to

103:50

atrophy. Then they're going to lose

103:51

meaning in their life. Then they're not

103:53

going to know how to do hard things. So

103:54

I won't get involved. It's an

103:56

intelligence that sits in. But actually,

103:57

probably in most situations, it

104:00

optimizing for comfort for me or doing

104:01

things for me is actually probably not

104:02

in my best long-term interests. It's

104:04

probably it's probably useful that I

104:06

have a girlfriend and argue with her and

104:08

that I like raise kids and that I walk

104:10

to the shop and get my own stuff.

104:12

>> I agree with you. I mean, I think that's

104:14

So, you're putting your finger on

104:16

uh in some sense sort of version 2.0,

104:20

right? So, let's get version 1.0 clear,

104:23

right? this this form of AI where

104:28

it has to further our interest but it

104:30

doesn't know what those interests are

104:32

right it then puts an obligation on it

104:34

to learn more and uh to be helpful where

104:37

it understands well enough and to be

104:39

cautious where it doesn't understand

104:41

well so on so that that actually we can

104:45

formulate as a mathematical problem and

104:47

at least under idealized circumstances

104:50

we can literally solve that So we can

104:53

make AI systems that know how to solve

104:57

this problem and help the entities that

104:59

they are interacting with.
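
Sketched in symbols, and only as one standard way to write down the kind of mathematical problem Russell is referring to here (an assistance-game style formulation; the exact notation is an assumption, not a quote), the machine must pick a policy that does well for the human's true preferences even though it only has a posterior belief over what those preferences are:

    \pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\theta \sim P(\theta \mid h)} \big[ U_{\theta}(\mathrm{outcome}(\pi)) \big],
    \qquad P(\theta \mid h) \propto P(h \mid \theta)\, P(\theta)

Here \theta stands for the human's true preferences (unknown to the machine), h is the history of observed human behaviour, U_{\theta} scores an outcome under those preferences, and \pi is the machine's policy. Because the expectation is taken over a posterior that never fully collapses, a policy that gambles on aspects of the world the posterior is still unsure about scores poorly, which is exactly the "helpful where it understands, cautious where it doesn't" behaviour described above.
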

105:00

>> The reason I make the God analogy is

105:02

because I think that such a being, such

105:04

an intelligence would realize the

105:06

importance of equilibrium in the world.

105:08

Pain and pleasure, good and evil, and

105:11

then it would

105:12

>> absolutely

105:13

>> and then it would be like this.

105:14

>> So So right. So yes, I mean that's sort

105:18

of what happens in the matrix, right?

105:19

They tried the the AI systems in the

105:21

matrix, they tried to give us a utopia,

105:25

but it failed miserably and uh you know,

105:28

fields and fields of humans had to be

105:30

destroyed. Um, and the best they could

105:33

come up with was, you know, late 20th

105:34

century regular human life with all of

105:37

its problems, right? And I think this is

105:40

a really interesting point

105:43

and absolutely central because you know

105:45

there's a lot of science fiction where

105:48

super intelligent robots you know they

105:51

just want to help humans and the humans

105:54

who don't like that you know they just

105:56

give them a little brain operation to

105:58

then they do like it. Um and it takes

106:01

away human motivation.

106:05

uh it it by taking away failure uh

106:09

taking away disease you actually lose

106:12

important parts of human life and it

106:14

becomes in some sense pointless. So if

106:17

it turns out

106:19

that there simply isn't any way that

106:23

humans can really flourish

106:27

in coexistence with super intelligent

106:29

machines, even if they're perfectly

106:32

designed to to to solve this problem of

106:35

figuring out what humans what futures uh

106:38

humans want and and bringing about those

106:40

futures.

106:43

If that's not possible, then those

106:45

machines will actually disappear.

106:49

>> Why would they disappear?

106:50

>> Because that's the best thing for us.

106:53

Maybe they would stay available for real

106:57

existential emergencies, like if there

106:59

is a giant asteroid about to hit the

107:00

earth that maybe they'll help us uh

107:02

because they at least want the human

107:04

species to continue. But to some extent,

107:07

it's not a perfect analogy, but it's

107:09

it's sort of the way that human parents

107:12

have to at some point step back from

107:15

their kids' lives and say, "Okay, no,

107:17

you have to tie your own shoelaces

107:19

today."

107:20

>> This is kind of what I was thinking.

107:21

Maybe there was uh a civilization before

107:24

us and they arrived at this moment in

107:26

time where they created an intelligence

107:31

and that intelligence did all the things

107:33

you've said and it realized the

107:35

importance of equilibrium. So it decided

107:36

not to get involved and

107:40

maybe at some level

107:43

that's the god we look up to the stars

107:45

and worship one that's not really

107:47

getting involved and letting things play

107:48

out however however they are. but might

107:50

step in in the case of a real

107:52

existential emergency.

107:53

>> Maybe, maybe not. Maybe. But then and

107:56

then maybe the cycle repeats itself

107:57

where you know the organisms it let have

108:00

free will end up creating the same

108:02

intelligence and then the universe

108:06

perpetuates infinitely.

108:08

>> Yep. There there are science fiction

108:10

stories like that too. Yeah. I hope

108:12

there is some happy medium where

108:17

the AI systems can be there and we can

108:20

take advantage of of those capabilities

108:23

to have a civilization that's much

108:25

better than the one we have now.

108:28

Um, but I think you're right. A

108:30

civilization with no challenges

108:33

is not uh is not conducive to human

108:37

flourishing.

108:37

>> What can the average person do, Stuart?

108:40

average person listening to this now to

108:42

aid the cause that you're fighting for?

108:45

>> I actually think um you know this sounds

108:47

corny but you know talk to your

108:49

representative, your MP, your

108:51

congressperson, whatever it is. Um

108:54

because

108:56

I think the policy makers need to hear

108:58

from people. The only voices they're

109:00

hearing right now are the tech companies

109:04

and their $50 billion checks.

109:08

And um

109:10

all the polls that have been done say

109:13

yeah most people 80% maybe don't want

109:17

there to be super intelligent machines

109:20

but they don't know what to do. You know

109:22

even for me I've been in this field for

109:25

decades.

109:26

uh I'm not sure what to do because of

109:30

this giant magnet pulling everyone

109:32

forward and uh and the vast sums of

109:35

money being being put into this. Um, but

109:38

I am sure that if you want to have a

109:41

future

109:43

and a world that you want your kids to

109:45

live in, uh, you need to make your voice

109:49

heard

109:52

and, uh, and I think governments will

109:54

listen

109:55

from a political point of view, right?

109:58

You put your finger in the wind and you

110:02

say, "hm, should I be on the side of

110:04

humanity or our future robot overlords?"

110:09

I think I think as a politician, it's

110:11

not a difficult decision.

110:14

>> It is when you've got someone saying,

110:15

"I'll give you $50 billion."

110:18

>> Exactly. So, um I think I think people

110:22

in those positions of power need to hear

110:25

from their constituents

110:28

um that this is not the direction we

110:30

want to go.

110:30

>> After committing your career to this

110:33

subject and the subject of technology

110:34

more broadly, but specifically being the

110:36

guy that wrote the book about artificial

110:38

intelligence,

110:42

you must realize that you're living in a

110:44

historical moment. Like there's very few

110:46

times in my life where I go, "Oh, this

110:47

is one of those moments. This is a

110:50

crossroads in history." And it must to

110:52

some degree weigh upon you knowing that

110:55

you're a person of influence at this

110:56

historical moment in time who could

110:58

theoretically

111:00

help divert the course of history in

111:03

this moment in time. It's kind of like

111:04

the you look through history, you see

111:05

these moments of like Oppenheimer and um

111:08

does it weigh on you when you're alone

111:10

at night thinking to yourself and

111:12

reading things?

111:13

>> Yeah, it does. I mean, you know, after

111:14

50 years, I could retire and um, you

111:17

know, play golf and sing and sail and do

111:19

things that I enjoy. Um,

111:23

but instead, I'm working 80 or 100 hours

111:25

a week

111:27

um trying to move

111:29

uh move things in the right direction.

111:31

>> What is that narrative in your head

111:33

that's making you do that? Like what is

111:34

the is there an element of I might

111:37

regret this if I don't or

111:39

>> just it's it's not only the the right

111:43

thing to do it's it's completely

111:45

essential. I mean there isn't

111:50

there isn't a bigger motivation

111:54

than this.

111:56

>> Do you feel like you're winning or

111:58

losing?

111:59

It feels um

112:03

like things are moving somewhat in the

112:06

right direction. You know, it's a a

112:07

ding-dong battle as uh as David Coleman

112:12

used to say in uh in the exciting

112:14

football match. In 2023, right? So, uh

112:18

GPT4 came out and then we issued the

112:21

pause statement that was signed by a lot

112:24

of leading AI researchers. Um and then

112:28

in May there was the extinction

112:29

statement which included

112:32

uh Sam Altman and Demis Hassabis and Dario

112:35

Amodei and other CEOs as well saying yeah

112:37

this is an extinction risk on the level

112:39

with nuclear war and I think governments

112:43

listened at that point the UK government

112:47

earlier that year had said oh well you

112:48

know we don't need to regulate AI you

112:50

know full speed ahead technology is good

112:52

for you and by June they had completely

112:57

changed and Rishi Sunak announced

113:00

that he was going to hold this global AI

113:02

safety summit uh in England and he

113:05

wanted London to be the global hub for

113:08

AI regulation

113:10

um and so on. So and then you know when

113:15

beginning of November of '23, 28 countries

113:18

including the US and China signed a

113:20

declaration

113:22

saying you know AI presents catastrophic

113:25

risks and it's urgent that we address

113:26

them and so on. So there it felt like,

113:29

wow, they're listening. They're going to

113:33

do something about it.

113:35

And then I think, you know, the am the

113:37

amount of money going into AI was

113:39

already ramping up

113:42

and the tech companies pushed back

113:46

and this narrative took hold that um the

113:50

US in particular has to win the race

113:52

against China.

113:54

The Trump administration completely

113:56

dismissed

113:58

uh any concerns about safety explicitly.

114:01

And interestingly, right, I mean they

114:02

did that as far as I can tell directly

114:05

in response to the accelerationists such

114:09

as Marc Andreessen going to Washington or

114:12

sorry going to Trump before the election

114:16

and saying if I give you X amount of

114:18

money will you announce that there will

114:22

be no regulation of AI and Trump said

114:25

yes you know probably like what is AI

114:28

doesn't matter as long as we give you

114:29

the money right okay uh Uh so they gave

114:33

him the money and he said there's going

114:34

to be no regulation of AI. Up to that

114:36

point it was a bipartisan

114:39

issue in Washington. Both parties were

114:42

concerned. Both parties were on the side

114:44

of the human race against the robot

114:46

overlords.

114:47

Uh and that moment turned it into a

114:50

partisan issue. Then

114:53

after the election the US put pressure

114:56

on the French who are the next hosts of

114:58

the global AI summit.

115:01

uh and that was in February of this year

115:04

and uh and that summit turned from

115:07

you know what had been focused largely

115:10

on safety in the UK to a summit that

115:13

looked more like a trade show. So it was

115:15

focused largely on money and so that was

115:18

sort of the nadir, right, you know, the

115:19

pendulum swung because of corporate

115:22

pressure uh and their ability to take

115:25

over the the political dimension.

115:28

Um, but I would say since then things

115:31

have been moving back again. So I'm

115:33

feeling a bit more optimistic than I did

115:35

in February. You know, we have a a

115:39

global movement now. There's an

115:40

international association for safe and

115:42

ethical AI

115:44

uh which has several thousand members

115:46

and um more than 120 organizations in

115:52

dozens of countries are affiliates of

115:55

this global organization.

115:57

Um, so I'm

116:00

I'm thinking that if we can in

116:02

particular if we can activate public

116:03

opinion

116:05

which which works through the media and

116:08

through popular culture uh then we have

116:11

a chance

116:13

>> We've seen such a huge appetite to learn about

116:15

these subjects from our audience.

116:18

We know when Jeffrey Hinton came on the

116:19

show I think about 20 million people

116:21

downloaded or streamed that conversation

116:23

which was staggering. And the other

116:26

conversations we've had about AI safety

116:27

with other AI safety experts have done

116:30

exactly the same it says something it

116:33

kind of reflects what you were saying

116:34

about the 80% of the population are

116:36

really concerned and don't want this but

116:38

that's not what you see in the sort of

116:39

commercial world and listen I um I have

116:41

to always acknowledge my own my own

116:44

apparent contradiction because I am both

116:46

an investor in companies that are

116:48

accelerating AI but at the same time

116:50

someone who spends a lot of time on my

116:52

podcast speaking to people that are

116:53

warning against the risk And actually

116:55

like there's many ways you can look at

116:56

this. I used to work in social media for

116:57

for six or seven years built one of the

116:59

big social media marketing companies in

117:01

Europe and people would often ask me is

117:03

like social media a good thing or a bad

117:04

thing and I'd talk about the bad parts

117:05

of it and then they'd say you know

117:07

you're building a social media company

117:09

are you not contributing to the problem?

117:11

Well I think I think that like binary

117:13

way of thinking is often the problem. It

117:15

the binary way of thinking that like

117:17

it's all bad or it's all really really

117:18

good is like often the problem and that

117:19

this push to put you into a camp.

117:21

Whereas I think the most uh

117:23

intellectually honest and high integrity

117:25

people I know can point at both the bad

117:27

and the good.

117:27

>> Yeah. I I think it's it's bizarre to be

117:31

accused of being anti-AI uh to be

117:35

called a Luddite. Um you know as I said

117:38

when I wrote the book from

117:40

which almost everyone learns about AI um

117:44

and uh you know is it if you called a

117:49

nuclear engineer who works on the safety

117:51

of nuclear power plants would you call

117:53

him anti-physics,

117:56

right it's it's bizarre right it's we're

117:58

not anti-AI. In fact,

118:02

the need for safety in AI is a

118:04

complement to AI right if AI was useless

118:07

and stupid, we wouldn't be worried about

118:09

uh its safety. It's only because it's

118:12

becoming more capable that we have to be

118:14

concerned about safety.

118:16

Uh so I don't see this as anti-AI at

118:19

all. In fact, I would say without

118:21

safety, there will be no AI,

118:24

right? There is no future with human

118:27

beings where we have unsafe AI. So it's

118:31

either no AI or safe AI.

118:34

We have a closing tradition on this

118:36

podcast where the last guest leaves a

118:37

question for the next, not knowing who

118:38

they're leaving it for. And the question

118:40

left for you is, what do you value the

118:42

most in life and why? And lastly, how

118:47

many times has this answer changed?

118:51

>> Um,

118:54

I value my family most and that answer

118:57

hasn't changed for nearly 30 years.

119:01

>> What else outside of your family?

119:03

>> Truth.

119:07

And that Yeah, that answer hasn't

119:09

changed at all. I I've always

119:14

wanted the world to base its life on

119:17

truth.

119:18

And I find the propagation or deliberate

119:22

propagation of falsehood uh to be one of

119:25

the worst things that we can do. even if

119:28

that truth is inconvenient.

119:30

>> Yeah,

119:32

>> I think that's a really important point

119:34

which is that you know people people

119:36

often don't like hearing things that are

119:38

negative and so the visceral reaction is

119:40

often to just shoot or aim at the person

119:42

who is delivering the bad news because

119:44

if I discredit you or I shoot at you

119:47

then it makes it easier for me to

119:49

contend with the news that I don't like,

119:51

the thing that's making me feel

119:52

uncomfortable. And so I I applaud you

119:54

for what you're doing because you're

119:56

going to get lots of shots taken at you

119:58

because you're delivering an

119:59

inconvenient truth which generally

120:00

people won't won't always love. But also

120:03

you are messing with people's ability to

120:05

get that quadrillion dollar prize which

120:08

means there'll be more deliberate

120:09

attempts to discredit people like

120:10

yourself and Jeff Hinton and other

120:12

people that I've spoken to on the show.

120:13

But again, when I look back through

120:14

history, I think that progress has come

120:16

from the pursuit of truth even when it

120:17

was inconvenient. And actually much of

120:19

the luxuries that I value in my life are

120:21

the consequence of other people that

120:23

came before me that were brave enough or

120:24

bold enough to pursue truth at times

120:27

when it was inconvenient.

120:29

>> And so I very much respect and value

120:31

people like yourself for that very

120:32

reason. You've written this incredible

120:33

book called Human Compatible: Artificial

120:35

Intelligence and the Problem of Control,

120:37

which I think was published in 2020.

120:39

>> 2019. Yeah. There's a new edition from

120:41

2023.

120:43

>> Where do people go if they want more

120:44

information on your work and you do they

120:46

go to your website? Do they get this

120:48

book? What's the best place for them to

120:49

learn more?

120:49

>> So, so the book is written for the

120:51

general public. Um, I'm easy to find on

120:54

the web. The information on my web page

120:56

is mostly targeted for academics. So,

120:58

it's a lot of technical research papers

121:01

and so on. Um, there is an organization

121:04

as I mentioned called the International

121:06

Association for Safe and Ethical AI. Uh,

121:09

that has a a website. It has a terrible

121:11

acronym unfortunately, IASEAI. We

121:15

pronounce it ICI but it uh it's easy to

121:17

misspell but you can find that on the

121:19

web as well and that has uh that has

121:21

resources uh you can join the

121:23

association

121:25

uh you can apply to come to our annual

121:28

conference and you know I think

121:29

increasingly not you know not just AI

121:33

researchers like Geoff Hinton, Yoshua

121:35

Bengio, but also I think uh you know

121:39

writers Brian Christian for example has

121:41

a nice book called The Alignment Problem.

121:44

Um

121:46

and uh he's looking at it from the

121:48

outside. He's not

121:50

or at least when he wrote it, he wasn't

121:52

an AI researcher. He's now becoming one.

121:54

Um

121:56

but uh he he has talked to many of the

121:59

people involved in these questions uh

122:01

and tries to give an objective view. So

122:03

I think it's a it's a pretty good book.

122:06

>> I will link all of that below for anyone

122:07

that wants to check out any of those

122:09

links and learn more.

122:11

Professor Stuart Russell, thank you so

122:12

much. really appreciate you taking the

122:14

time and the effort to come and have

122:15

this conversation and I think uh I think

122:17

it's pushing the public conversation in

122:19

a in an important direction.

122:21

>> Thank you.

122:22

>> and I applaud you for doing that.

122:23

>> Really nice talking to you.

122:28

>> I'm absolutely obsessed with 1%. If you

122:30

know me, if you follow Behind the Diary,

122:31

which is our behind the scenes channel,

122:32

if you've heard me speak on stage, if

122:34

you follow me on any social media

122:35

channel, you've probably heard me

122:36

talking about 1%. It is the defining

122:38

philosophy of my health, of my

122:40

companies, of my habit formation and

122:43

everything in between, which is this

122:44

obsessive focus on the small things.

122:46

Because sometimes in life, we aim at

122:48

really, really, really, really big

122:49

things, big steps forward. Mountains we

122:51

have to climb. And as Naval told me on

122:54

this podcast, when you aim at big

122:55

things, you get psychologically

122:57

demotivated. You end up procrastinating,

122:59

avoiding them, and change never happens.

123:01

So, with that in mind, with everything

123:02

I've learned about 1% and with

123:04

everything I've learned from

123:04

interviewing the incredible guests on

123:05

this podcast, we made the 1% diary just

123:08

over a year ago and it sold out. And it

123:11

is the best feedback we've ever had on a

123:13

diary that we have created because what

123:15

it does is it takes you through this

123:17

incredible process over 90 days to help

123:19

you build and form brand new habits. So,

123:23

if you want to get one for yourself or

123:24

you want to get one for your team, your

123:26

company, a friend, a sibling, anybody

123:28

that listens to the Diary of a CEO, head

123:30

over immediately to thediary.com

123:34

and you can inquire there about getting

123:35

a bundle if you want to get one for your

123:36

team or for a large group of people.

123:38

That is thediary.com.

123:52


Interactive Summary

The video discusses the potential risks and challenges associated with the rapid advancement of Artificial General Intelligence (AGI). Professor Stuart Russell, a leading AI expert, shares his concerns about the existential threat AGI could pose to humanity if not developed with safety as a priority. He highlights the "gorilla problem" to illustrate how a more intelligent species can control or eliminate a less intelligent one, suggesting that humans could become like gorillas in the face of AGI. The discussion also touches upon the economic drivers behind the AI race, the ethical implications of creating superintelligent beings, and the potential societal shifts, including mass unemployment and the search for purpose in a world where AI can perform most tasks. Russell advocates for a shift in focus from pure intelligence to beneficial AI, emphasizing the need for robust safety measures and international regulation to navigate this transformative period.
