Godfather of AI: They Keep Silencing Me But I’m Trying to Warn Them!

Transcript

0:00

They call you the godfather of AI. So

0:02

what would you be saying to people about

0:04

their career prospects in a world of

0:06

super intelligence? Train to be a

0:07

plumber. Really? Yeah. Okay. I'm going

0:10

to become a plumber. Geoffrey Hinton is

0:13

the Nobel Prize winning pioneer whose

0:15

groundbreaking work has shaped AI and

0:18

the future of humanity. Why do they call

0:20

it the godfather of AI? because there

0:21

weren't many people who believed that we

0:23

could model AI on the brain so that it

0:25

learned to do complicated things like

0:27

recognize objects and images or even do

0:29

reasoning. And I pushed that approach

0:30

for 50 years and then Google acquired

0:32

that technology and I worked there for

0:34

10 years on something that's now used

0:35

all the time in AI. And then you left.

0:37

Yeah. Why? So that I could talk freely

0:39

at a conference. What did you want to

0:41

talk about freely? How dangerous AI

0:43

could be.

0:45

I realized that these things will one

0:47

day get smarter than us. And we've never

0:49

had to deal with that. And if you want

0:50

to know what life's like when you're not

0:51

the apex intelligence, ask a chicken. So

0:55

there's risks that come from people

0:57

misusing AI. And then there's risks from

0:59

AI getting super smart and deciding it

1:00

doesn't need us. Is that a real risk?

1:02

Yes, it is. But they're not going to

1:03

stop it cuz it's too good for too many

1:05

things. What about regulations? They

1:06

have some, but they're not designed to

1:08

deal with most of the threats. Like the

1:09

European regulations have a clause that

1:11

say none of these apply to military uses

1:13

of AI. Really? Yeah. It's crazy. One of

1:16

your students left OpenAI. Yeah. He was

1:19

probably the most important person

1:20

behind the development of the early

1:22

versions of ChatGPT and I think he

1:23

left because he had safety concerns. We

1:25

should recognize that this stuff is an

1:27

existential threat and we have to face

1:29

the possibility that unless we do

1:31

something soon we're near the end. So

1:34

let's do the risks. What do we end up

1:36

doing in such a world?

1:39

This has always blown my mind a little

1:41

bit. 53% of you that listen to the show

1:43

regularly haven't yet subscribed to the

1:46

show. So, could I ask you for a favor

1:47

before we start? If you like the show

1:49

and you like what we do here and you

1:50

want to support us, the free simple way

1:52

that you can do just that is by hitting

1:53

the subscribe button. And my commitment

1:55

to you is if you do that, then I'll do

1:57

everything in my power, me and my team,

1:58

to make sure that this show is better

2:00

for you every single week. We'll listen

2:02

to your feedback. We'll find the guests

2:03

that you want me to speak to and we'll

2:05

continue to do what we do. Thank you so

2:07

much.

2:11

Geoffrey Hinton, they call you the

2:14

godfather of AI.

2:16

Uh yes they do. Why do they call you

2:18

that? There weren't that many people who

2:21

believed that we could make neural

2:23

networks work, artificial neural

2:24

networks. So for a long time in AI from

2:27

the 1950s onwards, there were kind of

2:31

two ideas about how to do AI.

2:34

One idea was that sort of core of human

2:36

intelligence was reasoning. And to do

2:39

reasoning, you needed to use some form

2:40

of logic. And so AI had to be based

2:43

around logic. And in your head, you must

2:47

have something like symbolic expressions

2:48

that you manipulated with rules. And

2:50

that's how intelligence worked. And

2:53

things like learning or reasoning by

2:55

analogy, that all come later once we've

2:57

figured out how basic reasoning works.

2:59

There was a different approach, which is

3:01

to say, let's model AI on the brain

3:04

because obviously the brain makes us

3:06

intelligent. So simulate a network of

3:10

brain cells on a computer and try and

3:13

figure out how you would learn strengths

3:14

of connections between brain cells so

3:17

that it learned to do complicated things

3:19

like recognize objects in images or

3:21

recognize speech or even do reasoning. I

3:24

pushed that approach for like 50 years

3:27

because so few people believed in it.

3:30

There weren't many good universities

3:32

that had groups that did that. So if you

3:35

did that the best young students who

3:37

believed in that came and worked with

3:39

you. So I was very fortunate in getting

3:41

a whole lot of really good students some

3:44

of which have gone on to create and play

3:46

an instrumental role in creating

3:48

platforms like OpenAI. Yes. So Ilya Sutskever is

3:52

a nice example. A whole bunch of them.

3:54

Why did you believe that modeling it off

3:57

the brain was a more effective approach?

3:59

It wasn't just me believed it early on.

4:02

Von Neumann believed it and Turing

4:04

believed it and if either of those had

4:07

lived I think AI would have had a very

4:08

different history but they both died

4:10

young. You think AI would have been here

4:13

sooner? I think neural net the neural

4:15

net approach would have been accepted

4:18

much sooner if either of them had lived

4:20

in this season of your life. What

4:23

mission are you on? My main mission now

4:26

is to warn people how dangerous AI could

4:29

be. Did you know that when you became

4:33

the godfather of AI? No, not really. I

4:36

was quite slow to understand some of the

4:38

risks. Some of the risks were always

4:40

very obvious, like people would use AI

4:42

to make autonomous lethal weapons. That

4:44

is things that go around deciding by

4:46

themselves who to kill. Other risks,

4:49

like the idea that they would one day

4:50

get smarter than us and maybe we would

4:53

become irrelevant, I was slow to

4:56

recognize that. Other people recognized

4:58

it 20 years ago. I only recognized it a

5:01

few years ago that that was a real risk

5:03

that was come might be coming quite

5:04

soon. How could you not have foreseen

5:07

that if if with everything you know here

5:10

about cracking the ability for these

5:12

computers to learn similar to how humans

5:14

learn and just you know introducing any

5:17

rate of improvement? It's a very good

5:19

question. How could you not have seen

5:20

that? But remember neural networks 20 30

5:24

years ago were very primitive in what

5:27

they could do. They were nowhere near as

5:28

good as humans, but things like vision

5:31

and language and speech recognition. The

5:33

idea that you have to now worry about it

5:35

getting smarter than people, that seems

5:36

silly then. When did that change? It

5:39

changed for the general population when

5:41

ChatGPT came out. It changed for me

5:46

when I realized that the kinds of

5:48

digital intelligences we're making have

5:51

something that makes them far superior

5:53

to the kind of biological intelligence

5:54

we have. If I want to share information

5:57

with you, so I go off and I learn

5:59

something and I'd like to tell you what

6:01

I learned. So I produce some sentences.

6:04

This is a rather simplistic model, but

6:06

roughly right. Your brain is trying to

6:07

figure out how can I change the strength

6:09

of connections between neurons. So I

6:10

might have predicted that word next. And so

6:12

you'll do a lot of learning when a very

6:14

surprising word comes and not much

6:15

learning when it's a very

6:17

obvious word. If I say fish and chips,

6:19

you don't do much learning when I say

6:21

chips. But if I say fish and cucumber,

6:23

you do a lot more learning. You wonder

6:25

why did I say cucumber? So that's

6:27

roughly what's going on in your brain.

6:29

I'm predicting what's coming next.

6:31

That's how we think it's working. Nobody

6:33

really knows for sure how the brain

6:35

works. And nobody knows how it gets the

6:37

information about whether you should

6:39

increase the strength of a connection or

6:40

decrease the strength of a connection.

6:42

That's the crucial thing. But what we do

6:44

know now from AI

6:47

is that if you could get information

6:49

about whether to increase or decrease

6:51

the connection strength so as to do

6:53

better at whatever task you're trying to

6:55

do, then we could learn incredible

6:57

things because that's what we're doing

6:58

now with artificial neural nets.

7:01

It's just we don't know for real brains

7:03

how they get that signal about whether

7:04

to increase or decrease.
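
To make Hinton's "fish and chips" point concrete, here is a minimal, hypothetical sketch in Python (the vocabulary and probabilities are invented for illustration; none of this code comes from the interview): in a next-word predictor, the cross-entropy loss, and therefore the error signal used to adjust connection strengths, is small when the observed word was expected ("chips") and large when it was surprising ("cucumber").

```python
import numpy as np

# Toy vocabulary and the model's predicted probabilities for the word
# that follows "fish and ..." (hypothetical numbers for illustration).
vocab = ["chips", "cucumber", "rice", "salad"]
predicted_probs = np.array([0.90, 0.02, 0.05, 0.03])

def surprise_and_gradient(observed_word):
    """Return the cross-entropy loss (the surprise) and the error signal
    that would drive the adjustment of connection strengths."""
    target = np.zeros(len(vocab))
    target[vocab.index(observed_word)] = 1.0
    loss = -np.log(predicted_probs[vocab.index(observed_word)])
    # For a softmax output layer the error signal is (predicted - target):
    # large entries mean large adjustments to the incoming connections.
    gradient = predicted_probs - target
    return loss, gradient

for word in ["chips", "cucumber"]:
    loss, grad = surprise_and_gradient(word)
    print(f"next word = {word!r}: loss = {loss:.2f}, "
          f"largest weight-update signal = {np.abs(grad).max():.2f}")
# Approximate output:
#   next word = 'chips':    loss = 0.11, largest weight-update signal = 0.10
#   next word = 'cucumber': loss = 3.91, largest weight-update signal = 0.98
```

This is the same surprise-driven weight-update idea, in miniature, that artificial neural nets use when trained to predict the next word.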

7:06

As we sit here today, what are the big

7:08

concerns you have around safety of AI?

7:10

if we were to to list the the top couple

7:14

that are really front of mind and that

7:15

we should be thinking about. Um, can I

7:17

have more than a couple? Go ahead. I'll

7:19

write them all down and we'll go through

7:20

them. Okay. First of all, I want to make

7:22

a distinction between two completely

7:25

different kinds of risk.

7:27

There's risks that come from people

7:30

misusing AI. Yeah. And that's most of

7:32

the risks and all of the short-term

7:35

risks. And then there's risks that come

7:37

from AI getting super smart and deciding

7:40

it doesn't need us. Is that a real risk?

7:43

And I talk mainly about that second risk

7:45

because lots of people say, "Is that a

7:47

real risk?" And yes, it is. Now, we

7:51

don't know how much of a risk it is.

7:52

We've never been in that situation

7:54

before. We've never had to deal with

7:55

things smarter than us. So really, the

7:58

thing about that existential threat is

8:01

that we have no idea how to deal with

8:04

it. We have no idea what it's going to

8:06

look like. And anybody who tells you

8:07

they know just what's going to happen

8:08

and how to deal with it, they're talking

8:10

nonsense. So, we don't know how to

8:12

estimate the probabilities

8:14

it'll replace us. Um, some people say

8:17

it's like less than 1%. My friend Yann

8:19

LeCun, who was a postdoc with me, thinks no

8:21

no no, we're always going to be we build

8:24

these things. We're always going to be

8:25

in control. We'll build them to be

8:27

obedient.

8:29

And other people like Yudkowsky say,

8:33

"No, no, no. These things are going to

8:35

wipe us out for sure. If anybody builds

8:36

it, it's going to wipe us all out." And

8:38

he's confident of that. I think both of

8:41

those positions are extreme. It's very

8:43

hard to estimate the probabilities in

8:45

between. If you had to bet on who was

8:48

right out of your two friends,

8:51

I simply don't know. So, if I had to

8:54

bet, I'd say the probabilities in

8:55

between, and I don't know where to

8:57

estimate it in between. I often say 10

8:59

to 20% chance they'll wipe us out, but

9:02

that's just gut based on the idea that

9:05

we're we're still making them and we're

9:07

pretty ingenious. And the hope is that

9:10

if enough smart people do enough

9:12

research with enough resources, we'll

9:14

figure out a way to build them so

9:15

they'll never want to harm us. Sometimes

9:19

I think if we we talk about that second

9:21

um path, sometimes I think about nuclear

9:23

bombs and the the invention of the

9:25

atomic bomb and how it compares like how

9:27

is this different because the atomic

9:29

bomb came along and I imagine a lot of

9:30

people at that time thought our days are

9:31

numbered. Yes, I was there. We did.

9:35

Yeah. But what happened? We're

9:37

still here. We're still here. Yes. So

9:40

the atomic bomb was really only good for

9:42

one thing and it was very obvious how it

9:45

worked. Even if you hadn't had the

9:47

pictures of Hiroshima and Nagasaki, it

9:50

was obvious that it was a very big bomb

9:53

that was very dangerous. With AI,

9:56

it's good for many, many things. It's

10:00

going to be magnificent in healthcare

10:02

and education and more or less any

10:04

industry that needs to use its data is

10:07

going to be able to use it better with

10:08

AI. So, we're not going to stop the

10:11

development.

10:13

You know, people say, "Well, why don't

10:14

we just stop it now?" We're not going to

10:17

stop it because it's too good for too

10:19

many things. Also, we're not going to

10:21

stop it because it's good for battle

10:22

robots, and none of the countries that

10:25

sell weapons are going to want to stop

10:27

it. Like the European regulations, they

10:30

have some regulations about AI, and it's

10:31

good they have some regulations, but

10:33

they're not designed to deal with most

10:35

of the threats. And in particular, the

10:38

European regulations have a a clause in

10:40

them that say none of these regulations

10:42

apply to military uses of AI.

10:45

So governments are willing to regulate

10:47

regulate companies and people, but

10:50

they're not willing to regulate

10:51

themselves.

10:53

It seems pretty crazy to me that they I

10:56

go back and forward, but if Europe has a

10:58

regulation, but the rest of the world

10:59

doesn't

11:01

competitive disadvantage. Yeah, we're

11:03

seeing this already. I don't think

11:04

people realize that when OpenAI release

11:06

a new model or a new piece of software

11:08

in America, they can't release it to

11:11

Europe yet because of regulations here.

11:12

So Sam Altman tweeted saying, "Our new AI

11:15

agent thing is available to everybody,

11:16

but it can't come to Europe yet because

11:18

there's regulations."

11:20

Yes. What does that give us, a

11:22

productivity disadvantage? A productivity

11:24

disadvantage. What we need is I mean at

11:27

this point in history when we're about

11:29

to produce things more intelligent than

11:30

ourselves, what we really need is a kind

11:34

of world government that works run by

11:36

intelligent, thoughtful people. And

11:38

that's not what we got.

11:40

So free-for-all. Well, that what we've

11:43

got is sort of

11:47

we've got capitalism which is done very

11:49

nicely by us. It's produced lots of

11:51

goods and services for us. But these big

11:55

companies, they're legally required to

11:58

try and maximize profits and that's not

12:01

what you want from the people developing

12:03

this stuff.

12:04

So let's do the risks then. You talked

12:06

about there's human risks and then

12:08

there's So I've distinguished these two

12:09

kinds of risk. Let's talk about all the

12:11

risks from bad human actors using AI.

12:15

There's cyber attacks.

12:18

So between 2023 and 2024,

12:22

they increased by about a factor of

12:24

12,200%.

12:27

And that's probably because these large

12:29

language models make it much easier to

12:32

do phishing attacks. And a phishing attack

12:34

for anyone that doesn't know is it's

12:36

they send you something saying, uh, hi,

12:39

I'm your friend John and I'm stuck in El

12:41

Salvador. Could you just wire this

12:43

money? That's one kind of attack. But

12:45

the phishing attacks are really trying to

12:47

get your login credentials. And now with

12:49

AI, they can clone my voice, my image.

12:52

They can do all that. I'm struggling at

12:53

the moment because there's a bunch of AI

12:55

scams on X and also Meta. And there's

12:57

one in particular on Meta, so Instagram,

12:59

Facebook at the moment, which is a paid

13:01

advert where they've taken my voice from

13:03

the podcast. They've taken my

13:04

mannerisms and they've made a new video

13:06

of me encouraging people to go and take

13:08

part in this crypto Ponzi scam or

13:11

whatever. And we've been, you know, we

13:13

spent weeks and weeks and weeks and

13:14

weeks on end emailing Meta, telling them,

13:15

"Please take this down." They take it

13:17

down, another one pops up. They take

13:18

that one down, another one pops up. So,

13:20

it's like whack-a-mole. And then it's

13:21

very annoying. The the heartbreaking

13:23

part is you get the messages from people

13:24

that have fallen for the scam and

13:26

they've lost £500 or $500 and they're cross

13:28

with you cuz you recommended it and I'm

13:30

I'm like I'm sad for them. It's very

13:32

annoying. Yeah. I have a a smaller

13:34

version of that, which is some people

13:36

now publish papers with me as one of the

13:39

authors. Mhm. And it looks like it's in

13:41

order that they can get lots of

13:43

citations to themselves. Ah, so cyber

13:46

attacks a very real threat. There's been

13:48

an explosion of those. And these already

13:51

obviously AI is very patient. So they

13:53

can go through 100 million lines of code

13:56

looking for known ways of attacking

13:57

them. That's easy to do. But they're

14:00

going to get more creative. Some people believe,

14:02

and some of them are people

14:06

who know a lot, that maybe by

14:09

2030 they'll be creating new kinds of

14:12

cyber attacks which no person ever

14:15

thought of. So that's very worrisome

14:18

because they can think for themselves

14:19

and discover they can think for

14:20

themselves. They can draw new

14:22

conclusions from much more data than a

14:24

person ever saw. Is there anything

14:26

you're doing to protect yourself from

14:29

cyber attacks at all? Yes. It's one of

14:32

the few places where I changed what I do

14:34

radically because I'm scared of cyber

14:36

attacks. Canadian banks are extremely

14:39

safe. In 2008, no Canadian banks came

14:43

anywhere near going bust. So, they're

14:45

very safe banks because they're well

14:46

regulated, fairly well regulated.

14:49

Nevertheless, I think a cyber attack

14:51

might be able to bring down a bank. Now,

14:54

all my savings are in shares

14:57

in banks held by banks, so if the bank

15:01

gets attacked and it holds your shares,

15:04

they're still your shares. And so, I

15:07

think you'd be okay unless the attacker

15:11

sells the shares because the bank can

15:12

sell the shares. If the attacker sells

15:15

your shares, I think you're screwed. I

15:18

don't know. I mean, maybe the bank would

15:20

have to try and reimburse you, but the

15:21

bank's bust by now, right? So,

15:24

So I'm worried about a Canadian bank

15:26

being taken down by a cyber attack and

15:29

the attacker selling the shares that

15:31

it holds. So I spread my money and my

15:34

children's money between three banks in

15:37

the belief that if a cyber attack takes

15:39

down one Canadian bank, the other

15:42

Canadian banks will very quickly get

15:44

very careful. And do you have a phone

15:47

that's not connected to the internet? Do

15:49

you have any like, you know, I'm

15:49

thinking about storing data and stuff

15:51

like that. Do you think it's wise to

15:53

consider having cold storage? I have a

15:56

little disc drive and I back up my

15:58

laptop on this hard drive. So I actually

16:01

have everything on my laptop on a hard

16:03

drive. At least you know if the whole

16:05

internet went down I had the sense I

16:07

still got it on my laptop and I still

16:09

got my information. Okay. Then the next

16:13

thing is using AI to create nasty

16:16

viruses.

16:18

Okay. And the problem with that is that

16:22

just requires one crazy guy with a

16:25

grudge. One guy who knows a little bit

16:27

of molecular biology, knows a lot about

16:29

AI, and just wants to destroy the world.

16:33

You can now create

16:35

new viruses relatively cheaply using AI.

16:39

And you don't have to be a very skilled

16:41

molecular biologist to do it. And that's

16:43

very scary. So you could have a small

16:44

cult, for example.

16:47

a small cult might be able to raise a

16:49

few million dollars. For a few million

16:51

dollars, they might be able to design a

16:53

whole bunch of viruses. Well, I'm

16:54

thinking about some of our foreign

16:56

adversaries doing government funded

16:58

programs. I mean, there was lots of talk

16:59

around COVID and the Wuhan

17:01

laboratory and what they were doing and

17:02

gain-of-function research, but I'm

17:04

wondering if in, you know, a China or a

17:05

Russia or an Iran or something, the

17:08

government could fund a program for a

17:10

small group of scientists to make a

17:12

virus that they could, you know, I think

17:14

they could. Yes. Now, they'd be worried

17:16

about retaliation. They'd be worried

17:18

about other governments doing the same

17:19

to them. Hopefully, that would help keep

17:21

it under control. They might also be

17:22

worried about the virus spreading to

17:24

their country. Okay? Then there's um

17:28

corrupting elections.

17:31

So, if you wanted to use AI to corrupt

17:33

elections,

17:35

a very effective thing is to be able to

17:37

do targeted political advertisements

17:40

where you know a lot about the person.

17:44

So anybody who wanted to use AI for

17:47

corrupting elections would try and get

17:49

as much data as they could about

17:52

everybody in the electorate. With that

17:54

in mind, it's a bit worrying what Musk

17:56

is doing at present in the States, going

17:59

in and insisting on getting access to

18:01

all these things that were very

18:02

carefully siloed. The claim is it's to

18:05

make things more efficient, but it's

18:07

exactly what you would want if you

18:08

intended to corrupt the next election.

18:10

How do you mean? Because you get all

18:11

this data on the people. You get all

18:13

this data on people. You know how much

18:14

they make where they you know everything

18:16

about them. Once you know that, it's

18:18

very easy to manipulate them because you

18:21

can make an AI that you can send

18:23

messages um that they'll find very

18:25

convincing telling them not to vote, for

18:27

example.

18:29

So, I have no no reason other than

18:32

common sense to think this, but I

18:35

wouldn't be surprised if part of the

18:36

motivation of getting all this data from

18:39

American government sources is to

18:42

corrupt elections. Another part might be

18:45

that it's very nice training data for a

18:47

big model, but he would have to be

18:49

taking that data from the government and

18:51

feeding it into his Yes. And what

18:53

they've done is turned off lots of the

18:55

security controls, got rid of some

18:58

of the organization to protect against

19:00

that. Um, so that's corrupting

19:02

elections. Okay. Then there's creating

19:06

these two echo chambers

19:09

by organizations like YouTube

19:13

and Facebook showing people things that

19:16

will make them indignant. People love to

19:19

be indignant. Indignant as in angry or

19:22

what does indignant mean? Feeling I'm

19:25

sort of angry but feeling righteous.

19:27

Okay. So, for example, if you were to

19:30

show me something that said Trump did

19:33

this crazy thing, here's a video of

19:34

Trump doing this completely crazy thing.

19:36

I would immediately click on it.

19:40

Okay. So, putting us in echo chambers

19:42

and dividing us. Yes. And that's um the

19:45

policy that YouTube and Facebook and

19:48

others use for deciding what to show you

19:51

next is causing that. If they had a

19:55

policy of showing you balanced things,

19:58

they wouldn't get so many clicks and

20:00

they wouldn't be able to sell so many

20:01

advertisements.

20:02

And so it's basically the profit motive

20:04

is saying show them whatever will make

20:07

them click. And what'll make them click

20:09

is things that are more and more

20:11

extreme. And that confirmed my existing

20:13

bias. That confirms my existing bias. So

20:15

you're getting your biases confirmed all

20:17

the time further and further and further

20:19

and further, which means you're being

20:20

driven apart, which is why now in

20:22

the States there are two communities that

20:23

hardly talk to each other. I'm not

20:25

sure people realize that this is

20:26

actually happening every time they open

20:27

an app. But if you go on a Tik Tok or a

20:29

YouTube or one of these big social

20:31

networks, the algorithm, as you you

20:33

said, is designed to show you more of

20:35

the things that you had interest in last

20:38

time. So, if you just play that out over

20:39

10 years, it's going to drive you

20:41

further and further and further into

20:42

whatever ideology or belief you have and

20:45

further away from nuance and common

20:46

sense and um parity, which is a pretty

20:50

remarkable thing. I I like people don't

20:52

know it's happening. They just open

20:53

their phones and experience something

20:56

and think this is the news or the

20:59

experience everyone else is having.
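
As an illustration of the feedback loop just described, here is a minimal, hypothetical simulation in Python (the click model and every number are invented; this is not how YouTube, Meta, or TikTok actually rank content): a recommender that always shows the item with the highest predicted click probability, in a world where content slightly more extreme than your current position gets slightly more clicks, gradually walks the user toward the extreme end of the catalogue.

```python
import numpy as np

# Hypothetical catalogue of items scored by "extremeness", 0.0 (balanced) to 1.0 (extreme).
extremeness = np.linspace(0.0, 1.0, 11)

def click_prob(item, user_position):
    # Invented click model: users are most likely to click items slightly
    # more extreme than where they currently are (indignation is engaging).
    return np.exp(-((item - (user_position + 0.1)) ** 2) / 0.02)

user_position = 0.1  # the user starts near the balanced end of the spectrum
for session in range(30):
    # Engagement-maximizing policy: always show the item most likely to be clicked.
    shown = extremeness[np.argmax(click_prob(extremeness, user_position))]
    # Watching it nudges the user's position toward what was shown.
    user_position = 0.7 * user_position + 0.3 * shown
    if session % 5 == 0:
        print(f"session {session:2d}: shown = {shown:.1f}, user position = {user_position:.2f}")
# The user's position creeps steadily toward 1.0: the recommendations get
# more and more extreme over time, which is the drift described above.
```

In this toy model nothing malicious is programmed in; the drift falls straight out of maximizing clicks, which is the point being made in the conversation.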

21:00

Right. So, basically, if you have a

21:03

newspaper and everybody gets the same

21:04

newspaper, Yeah. you get to see all

21:07

sorts of things you weren't looking for

21:08

and you get a sense that if it's in the

21:10

newspaper it's an important thing or

21:12

significant thing but if you have your

21:14

own news feed my news feed on my iPhone

21:18

three-quarters of the stories are about AI and

21:20

I find it very hard to know if the whole

21:23

world's talking about AI all the time or

21:25

if it's just my newsfeed

21:28

okay so driving me into my echo chambers

21:31

um which is going to continue to divide

21:33

us further and further I'm actually

21:34

noticing that the algorithm are becoming

21:36

even more,

21:38

what's the word?

21:40

Tailored. And people might go, "Oh,

21:42

that's great." But what it means is

21:43

they're becoming even more personalized,

21:44

which is means that my reality is

21:47

becoming even further from your reality.

21:49

Yeah. It's crazy. We don't have a shared

21:51

reality anymore. I share reality with

21:54

other people who watch the BBC and other

21:57

BBC news and other people who read the

21:59

Guardian and other people who read the

22:00

New York Times. I have almost no shared

22:04

reality with people who watch Fox News.

22:07

It's pretty It's pretty um I I It's

22:10

worrisome. Yeah. Behind all this is the

22:13

idea that these companies just want to

22:15

make profit and they'll do whatever it

22:17

takes to make more profit because they

22:19

have to. They're legally obliged to do

22:22

that. So, we almost can't blame the

22:24

company, can we? If they're if Well,

22:28

capitalism's done very well for us. It's

22:30

produced lots of goodies. Yeah. But you

22:32

need to have it very well regulated.

22:34

So what you really want is to have rules

22:37

so that when some company is trying to

22:40

make as much profit as possible,

22:43

in order to make that profit, they have

22:44

to do things that are good for people in

22:47

general, not things that are bad for

22:48

people in general. So once you get to a

22:51

situation where in order to make more

22:52

profit the company starts doing things

22:54

that are very bad for society like

22:57

showing you things that are more and

22:58

more extreme that's what regulations are

23:00

for. So you need regulations with

23:03

capitalism. Now companies will always

23:06

say regulations get in the way make us

23:09

less efficient and that's true. The

23:11

whole point of regulations is to stop

23:12

them doing things to make profit that

23:14

hurt society. And we need strong

23:16

regulation. who's going to decide

23:18

whether it hurts society or not because

23:20

you know that's the job of politicians

23:23

unfortunately if the politicians are

23:25

owned by the companies that's not so

23:26

good and also the politicians might not

23:28

understand the technology. You've

23:29

probably seen the Senate hearings where

23:31

they wheel out you know Mark Zuckerberg

23:32

and these big tech CEOs and it is quite

23:34

embarrassing because they're asking the

23:36

wrong questions well I've seen the video

23:38

of the US education secretary talking

23:42

about how they're going to get AI in the

23:44

classrooms except she thought it was

23:46

called A1

23:48

She's actually there saying we're going

23:50

to have all the kids interacting with

23:51

A1. There is a school system that's

23:54

going to start um making sure that first

23:57

graders or even preks have A1 teaching,

24:01

you know, every year starting, you know,

24:02

that far down in the grades. And that's

24:04

just a that's a wonderful thing.

24:07

[Laughter]

24:10

And these are what these are the people

24:12

that these are the people in charge.

24:14

Ultimately the tech companies are in

24:16

charge because they will outsmart the politicians. The

24:16

tech companies in the States now, at

24:20

least a few weeks ago when I was there

24:23

they were running an advertisement about

24:26

how it was very important not to

24:28

regulate AI because it would hurt us in

24:30

the competition with China. Yeah. And

24:32

that's a that's a plausible argument

24:33

there. Yes it will. But you have to

24:36

decide, do you want to compete with

24:38

China by doing things that will do a lot

24:44

of harm to your society? And you

24:46

probably don't.

24:49

I guess they would say that it's not

24:51

just China, it's Denmark and Australia

24:53

and Canada and the UK. They're not so

24:56

worried about and Germany. But if they

24:57

kneecap themselves with regulation, if

24:59

they slow themselves down, then the

25:01

founders, the entrepreneurs, the

25:02

investors are going to go. I think

25:03

calling it kneecapping is taking a

25:05

particular point of view. It's taking

25:08

the point of view that regulations are

25:09

sort of very harmful. What you need to

25:11

do is just constrain the big companies

25:13

so that in order to make profit, they

25:16

have to do things that are socially

25:18

useful. Like Google search is a great

25:20

example that didn't need regulation

25:22

because it just made information

25:24

available to people. It was great. But

25:27

then if you take YouTube which starts

25:29

showing you adverts and showing you more

25:31

and more extreme things that needs

25:33

regulation but we don't have the people

25:36

to regulate it as we've identified. I

25:38

think people know pretty well um that

25:41

particular problem of showing you more

25:43

and more extreme things. That's a

25:44

well-known problem that the politicians

25:46

understand. They just um need to get on

25:49

and regulate it. So that was the the

25:51

next point which was that the algorithms

25:53

are going to drive us further into our

25:54

echo chambers, right?

25:56

What's next? Lethal autonomous weapons.

26:00

Lethal autonomous weapons.

26:03

That means things that can kill you and

26:05

make their own decision about whether to

26:07

kill you, which is the great dream, I

26:10

guess, of the military-industrial

26:12

complex being able to create such

26:14

weapons. So, the worst thing about them

26:17

is big powerful countries always have

26:20

the ability to invade smaller poorer

26:23

countries. they're just more powerful.

26:26

But if you do that using actual

26:28

soldiers, you get bodies coming back in

26:31

bags and the relatives of the soldiers

26:34

who were killed don't like it. So you

26:37

get something like Vietnam. Mhm. In the

26:39

end, there's a lot of protest at home.

26:42

If instead of bodies coming back in

26:44

bags, it was dead robots, there'd be

26:47

much less protest and the

26:49

military-industrial complex would like

26:51

it much more because robots are

26:53

expensive. And suppose you had something

26:56

that could get killed and was expensive

26:58

to replace. That would be just great.

27:01

Big countries can invade small countries

27:04

much more easily because they don't have

27:05

their soldiers being killed. And the

27:07

risk here is that these robots will

27:12

malfunction or they'll just be more No,

27:14

no, that's even if the robots do exactly

27:16

what the people who built the robots

27:18

want them to do, the risk is that it's

27:20

going to make big countries invade small

27:22

countries more often. More often because

27:23

they can Yeah. And it's not a nice thing

27:25

to do. So it brings down the friction of

27:26

war. It brings down the cost of doing an

27:28

invasion.

27:30

And these machines will be smarter at

27:32

warfare as well. So they'll be well even

27:34

when the machines aren't smarter. So the

27:36

lethal autonomous weapons, they can make

27:39

them now. And I think all the big

27:42

defense departments are busy making them.

27:44

Even if they're not smarter than people,

27:46

are still very nasty, scary things. Cuz

27:48

I'm thinking that, you know, they could

27:50

show just a picture. Go get this guy.

27:53

Yeah. And go take out anyone he's been

27:55

texting, and this little wasp-like drone. So, two

27:58

days ago, I was visiting a friend of

28:00

mine in Sussex who had a drone that cost

28:03

less than £200

28:05

and

28:06

the drone went up. It took a good look

28:09

at me and then it could follow me

28:11

through the woods. It was

28:14

very spooky having this drone. It was

28:15

about 2 meters behind me. It was looking

28:17

at me and if I moved over there, it

28:20

moved over there. It could just track

28:21

me. Mhm. For 200 pounds, but it was

28:24

already quite spooky. Yeah. And I

28:26

imagine there's as you say a race going

28:28

on as we speak to who can build the most

28:30

complex autonomous weapons.

28:33

There is a a risk I often hear that some

28:35

of these things will combine and the

28:38

cyber attack will release weapons.

28:41

Sure. Um you can you can get

28:44

combinatorially many risks by combining

28:46

these other risks. Mhm. So, I mean, for

28:49

example, you could get a super

28:51

intelligent AI that decides to get rid

28:54

of people, and the obvious way to do

28:56

that is just to make one of these nasty

28:57

viruses. If you made a virus that was

29:01

very contagious, very lethal, and very

29:04

slow,

29:06

everybody would have it before they

29:07

realized what was happening. I mean, I

29:09

think if a super intelligence wanted to

29:11

get rid of us, it will probably go for

29:13

something biological like that that

29:14

wouldn't affect it. Do you not think it

29:16

could just very quickly turn us against

29:17

each other? For example, it could send a

29:19

warning on the nuclear systems in

29:22

America that there's a nuclear bomb

29:23

coming from Russia or vice versa and one

29:26

retaliates. Yeah. I mean, my basic view

29:28

is there's so many ways in which the

29:30

super intelligence could get rid of us.

29:32

It's not worth speculating about.

29:35

What you have to do is

29:38

prevent it ever wanting to. That's what

29:41

we should be doing research on. There's

29:43

no way we're going to prevent it;

29:45

it's smarter than us, right? There's no

29:46

way we're going to prevent it getting

29:48

rid of us if it wants to. We're not used

29:50

to thinking about things smarter than

29:52

us. If you want to know what life's like

29:55

when you're not the apex intelligence,

29:58

ask a chicken.

30:03

Yeah. I was thinking about my dog Pablo,

30:04

my French bulldog, this morning as I

30:06

left home. He has no idea where I'm

30:08

going. He has no idea what I do, right?

30:10

Can't even talk to him. Yeah. And

30:13

the intelligence gap will be like that.

30:15

So you're telling me that if I'm Pablo,

30:17

my French bulldog, I need to figure out

30:19

a way to make my owner not wipe me out.

30:24

Yeah. So we have one example of that

30:27

which is mothers and babies. Evolution

30:30

put a lot of work into that. Mothers are

30:31

smarter than babies, but babies are in

30:33

control. And they're in control because

30:35

of lots of

30:37

hormones and things; the

30:39

mother just can't bear the sound of the

30:41

baby crying. Not all mothers. Not all

30:44

mothers. And then the baby's not in

30:45

control and then bad things happen. We

30:48

somehow need to figure out how to make

30:51

them not want to take over. The analogy

30:53

I often use is forget about

30:56

intelligence, think about physical

30:57

strength. Suppose you have a nice little

30:59

tiger cub. It's sort of a bit bigger than

31:02

a cat. It's really cute.

31:04

It's very cuddly, very interesting to

31:06

watch. Except that you better be sure

31:08

that when it grows up, it never wants to

31:10

kill you. Cuz if it ever wanted to kill

31:12

you, you'd be dead in a few seconds. And

31:15

you're saying the AI we have now is the

31:16

tiger cub. Yep. And it's growing up.

31:19

Yep.

31:21

So, we need to train it as it's when

31:23

it's a baby. Well, now a tiger has lots

31:24

of instinctive stuff built in. So, you know, when

31:26

it grows up, it's not a safe thing to

31:28

have around. But lions, people that have

31:31

lions as pets, yes. Sometimes the lion

31:33

is affectionate to its creator but not

31:35

to others. Yes. And we don't know

31:37

whether these AIs

31:40

we we simply don't know whether we can

31:42

make them not want to take over and not

31:44

want to hurt us. Do you think we can? Do

31:46

you think it's possible to train super

31:47

intelligence? I don't think it's clear

31:49

that we can. So I think it might be

31:51

hopeless. But I also think we might be

31:55

able to. And it'd be sort of crazy if

31:58

people went extinct cuz we couldn't be

31:59

bothered to try. If that's even a

32:02

possibility, how do you feel about your

32:04

life's work? Because you were Yeah. Um,

32:08

it sort of takes the edge off it,

32:09

doesn't it? I mean, the idea is going to

32:12

be wonderful in healthcare and wonderful

32:13

in education and wonderful. I mean, it's

32:16

going to make call centers much more

32:17

efficient, though one worries a bit

32:19

about what the people who are doing that

32:20

job now do. It makes me sad. I don't

32:24

feel particularly guilty about

32:25

developing AI like 40 years ago because

32:30

at that time we had no idea that this

32:32

stuff was going to happen this fast. We

32:34

thought we had plenty of time to worry

32:36

about things like that. When you

32:38

can't get the thing to do much, you

32:40

want to get it to do a little bit more.

32:41

You don't worry about this stupid little

32:44

thing is going to take over from people.

32:46

You just want it to be able to do a

32:47

little bit more of the things people can

32:48

do. It's not like I knowingly did

32:52

something thinking this might wipe us

32:54

all out, but I'm going to do it anyway.

32:56

Mhm. But it is a bit sad that it's not

33:00

just going to be something for good.

33:03

So I feel I have a duty now to talk

33:05

about the risks.

33:07

And if you could play it forward and you

33:08

could go forward 30, 50 years and you

33:10

found out that it led to the extinction

33:11

of humanity and if that does end up

33:14

being

33:17

being the outcome,

33:20

well, if you played it forward and it

33:23

led to the extinction of humanity, I

33:25

would use that to tell people to tell

33:28

their governments that we really have to

33:30

work on how we're going to keep this

33:32

stuff under control. I think we need

33:34

people to tell governments that

33:36

governments have to force the companies

33:38

to use their resources to work on safety

33:41

and they're not doing much of that

33:43

because you don't make profits that way.

33:45

One of your your students we talked

33:46

about earlier, um, Ilya. Yep. Ilya left

33:51

OpenAI. Yep. And there was lots of

33:53

conversation around the fact that he

33:55

left because he had safety concerns.

33:57

Yes. And he's gone on to set set up a AI

34:01

safety company. Yes.

34:04

Why do you think he left?

34:06

I think he left because he had safety

34:08

concerns. Really? He um I still have

34:11

lunch with him from time to time. His

34:13

parents live in Toronto. When he comes

34:14

to Toronto, we have lunch together. He

34:16

doesn't talk to me about what went on at

34:17

Open AI, so I have no inside information

34:19

about that. But I know Ilya very well and

34:22

he is genuinely concerned with safety.

34:24

So I think that's why he left because he

34:26

was one of the top people. I mean he was

34:28

he was probably the most important

34:30

person behind the development of um

34:32

ChatGPT, the early versions like

34:35

GPT-2. He was very important in the

34:37

development of that. You know him

34:39

personally, so you know his character? Yes,

34:41

he has a good moral compass. He's not

34:43

like someone like Musk, who has no moral

34:45

compass. Does Sam Altman have a good moral

34:48

compass

34:50

we'll see

34:53

I don't know Sam so I don't want to

34:56

comment on that. But from what you've

34:58

seen, are you concerned about the

35:00

actions that they've taken? Because if

35:02

you know Ilya and Ilya's a good guy and

35:04

he's left

35:06

that would give you some insight. Yes.

35:08

It would give you some reason to believe

35:10

that there's a problem there. And if you

35:12

look at Sam's statements

35:15

some years ago,

35:17

he sort of happily said in one interview

35:20

that this stuff will probably kill us

35:22

all. That's not exactly what he said,

35:23

but that's what it amounted to. Now he's

35:25

saying you don't need to worry too much

35:27

about it. And I suspect that's not

35:30

driven by

35:32

seeking after the truth. That's driven

35:34

by seeking after money. Is it money or

35:38

is it power? Yeah. I shouldn't have said

35:40

money. It's some some combination of

35:42

those. Yes. Okay. I guess money is a

35:44

proxy for power. But I am I've got a

35:46

friend who's a billionaire and he is in

35:50

those circles. And when I went to his

35:52

house and had uh lunch with him one day,

35:54

he knows lots of people in AI, building

35:55

the biggest AI companies in the world.

35:57

And he gave me a cautionary warning

35:59

across the across his kitchen table in

36:01

London where he gave me an insight into

36:03

the private conversations these people

36:05

have, not the media interviews they do

36:06

where they talk about safety and all

36:08

these things, but actually what some of

36:09

these individuals think is going to

36:11

happen and what do they think is going

36:13

to happen. It's not what they say

36:15

publicly. You know, one person who I

36:19

shouldn't name, who is leading

36:20

one of the biggest AI companies in the

36:22

world. He told me that he knows this

36:24

person very well and he privately thinks

36:25

that we're heading towards this kind of

36:27

dystopian world where we have just huge

36:29

amounts of free time. We don't work

36:31

anymore. And this person doesn't really

36:33

give a [ __ ] about the harm that it's

36:35

going to have on the world. And this

36:36

person who I'm referring to is building

36:37

one of the biggest AI companies in the

36:39

world. And I then watch this person's

36:41

interviews online trying to figure out

36:42

which of three people it is. Yeah. Well,

36:44

it's one of those three people. Okay.

36:45

And I watch this person's interviews

36:46

online and I I reflect on a conversation

36:48

that my billionaire friend had with me

36:50

who knows him and I go, "Fucking hell,

36:52

this guy's lying publicly." Like, he's

36:54

not telling the the truth to the world.

36:56

And that's haunted me a little bit. It's

36:57

part of the reason I have so many

36:58

conversations around AI in this podcast

36:59

because I'm like, I don't know if

37:00

they're... I think some of them

37:04

are a little bit sadistic about power. I

37:06

think they they like the idea that they

37:09

will change the world, that they will be

37:11

the one that fundamentally shifts the

37:14

world. I think Musk is clearly like

37:16

that, right?

37:19

He's such a complex character that I

37:21

don't I don't really know how to place

37:22

Musk. Um he's done some really good

37:24

things like um pushing electric cars.

37:28

That was a really good thing to do.

37:29

Yeah. Some of the things he said about

37:31

self-driving were a bit exaggerated, but

37:33

he that was a really useful thing he

37:35

did. Giving the Ukrainians communication

37:38

during the war with Russia. Starlink. Um

37:41

that was a really good thing he did.

37:43

there's a bunch of things like that. Um,

37:45

but he's also done some very bad things.

37:49

So, coming back to this point of

37:53

the possibility of destruction

37:57

and the motives of these big companies,

38:01

are you at all hopeful that anything can

38:03

be done to slow down the pace and

38:05

acceleration of AI? Okay, there's two

38:07

issues. One is can you slow it down?

38:10

Yeah. And the other is, can you make it

38:12

so it will be safe in the end? It won't

38:15

wipe us all out. I don't believe we're

38:18

going to slow it down. Yeah. And the

38:20

reason I don't believe we're going to

38:21

slow it down is because there's

38:22

competition between countries and

38:24

competition between companies within a

38:26

country and all of that is making it go

38:28

faster and faster. And if the US slowed

38:31

it down, China wouldn't slow it down.

38:34

Does Ilya think it's possible to make AI

38:38

safe?

38:40

I think he does. He won't tell me what

38:42

his secret sauce is. I'm not sure how

38:45

many people know what his secret sauce

38:47

is. I think a lot of the investors don't

38:48

know what his secret sauce is, but

38:50

they've given him billions of dollars

38:51

anyway because they have so much faith

38:53

in Ilya, which isn't foolish. I mean, he

38:56

was very important in AlexNet, which got

38:59

object recognition working well. He was

39:01

the main the main force behind the

39:05

things like GPT-2,

39:07

which then led to ChatGPT.

39:10

So I think having a lot of faith in Ilya

39:12

is a very reasonable decision. There's

39:14

something quite haunting about the guy

39:16

that made and was the main force behind

39:18

GPT-2, which gave rise to this whole

39:20

revolution left the company because of

39:24

safety reasons. He knows something that

39:26

I don't know about what might happen

39:29

next. Well, the company had now I don't

39:32

know the precise details um but I'm

39:35

fairly sure the company had indicated

39:37

that it would use a significant

39:38

fraction of its resources of the compute

39:41

time for doing safety research and then

39:44

it then reduced that fraction. I

39:46

think that's one of the things that

39:47

happened. Yeah, that was reported

39:48

publicly. Yes. Yeah.

39:51

We've gotten to the autonomous weapons

39:54

part of the risk framework. Right. So

39:57

the next one is joblessness. Yeah. In

40:00

the past, new technologies have come in

40:03

which didn't lead to joblessness. New

40:05

jobs were created. So the classic

40:07

example people use is automatic teller

40:09

machines. When automatic teller machines

40:11

came in, a lot of bank tellers didn't

40:14

lose their jobs. They just got to do

40:16

more interesting things. But here, I

40:19

think this is more like when they got

40:21

machines in the industrial revolution.

40:24

And

40:26

you can't have a job digging ditches now

40:28

because a machine can dig ditches much

40:30

better than you can. And I think for

40:33

mundane intellectual labor, AI is just

40:36

going to replace everybody. Now, it

40:40

may well be in the form of you have

40:43

fewer people using AI assistants. So

40:46

it's a combination of a person and an AI

40:48

assistant are now doing the work that 10

40:50

people could do previously. People say

40:53

that it will create new jobs though, so

40:55

we'll be fine. Yes. And that's been the

40:57

case for other technologies, but this is

40:59

a very different kind of technology. If

41:01

it can do all mundane human intellectual

41:03

labor,

41:05

then what new jobs is it going to

41:07

create? You'd you'd have to be very

41:09

skilled to have a job that it couldn't

41:11

just do. So I don't I don't think

41:13

they're right. I think you can try and

41:15

generalize from other technologies that

41:18

have come in like computers or automatic

41:20

teller machines, but I think this is

41:22

different. People use this phrase. They

41:24

say AI won't take your job. A human

41:26

using AI will take your job. Yes, I

41:28

think that's true. But for many jobs,

41:31

that'll mean you need far fewer people.

41:34

My niece answers letters of complaint to

41:36

a health service. It used to take her 25

41:39

minutes. She'd read the complaint and

41:41

she'd think how to reply and she'd write

41:43

a letter. And now she just scans it into

41:47

um a chatbot and it writes the letter.

41:51

She just checks the letter. Occasionally

41:53

she tells it to revise it in some ways.

41:56

The whole process takes her five

41:57

minutes. That means she can answer five

42:00

times as many letters and that means

42:03

they need five times fewer of her so she

42:06

can do the job that five of her used to

42:08

do. Now, that will mean they need less

42:13

people. In other jobs, like in health

42:15

care, they're much more elastic. So, if

42:18

you could make doctors five times as

42:20

efficient, we could all have five times

42:22

as much health care for the same price,

42:24

and that would be great. There's there's

42:27

almost no limit to how much health care

42:28

people can absorb. They always want more

42:31

healthcare if there's no cost to it.

42:34

There are jobs where you can make a

42:36

person with an AI assistant much more

42:38

efficient and it won't lead to fewer

42:41

people because you'll just have much

42:42

more of that being done. But most jobs I

42:46

think are not like that. Am I right in

42:48

thinking the sort of industrial

42:49

revolution

42:50

played a role in replacing muscles? Yes.

42:53

Exactly. And this revolution in AI

42:55

replaces intelligence the brain. Yeah.

42:58

So, so mundane intellectual labor is

43:00

like having strong muscles and it's not

43:03

worth much anymore. So, muscles have

43:05

been replaced. Now intelligence is

43:07

being replaced. Yeah. So, what remains?

43:11

Maybe for a while some kinds of

43:13

creativity but the whole idea of super

43:15

intelligence is nothing remains. Um

43:18

these things will get to be better than

43:19

us at everything. So, what what do we

43:21

end up doing in such a world? Well, if

43:23

they work for us, we end up getting lots

43:27

of goods and services for not much

43:29

effort. Okay. But that sounds tempting

43:32

and nice, but I don't know. There's a

43:35

cautionary tale in creating more and

43:37

more ease for humans in in it going

43:39

badly. Yes. And we need to figure out if

43:43

we can make it go well. So the the nice

43:46

scenario is imagine a company with a CEO

43:50

who is very dumb, probably the son of

43:53

the former CEO. And he has an executive

43:56

assistant who's very smart and he says,

44:01

"I think we should do this." And the

44:03

executive assistant makes it all work.

44:05

The CEO feels great. He doesn't

44:07

understand that he's not really in

44:09

control. And in in some sense, he is in

44:12

control. He suggests what the company

44:14

should do. She just makes it all work.

44:16

Everything's great. That's the good

44:18

scenario. And the bad scenario, the bad

44:21

scenario, she thinks, "Why do we need

44:22

him?"

44:24

Yeah.

44:26

I mean, in a world where we have super

44:29

intelligence, which you don't believe is

44:30

that far away. Yeah, I think it might

44:32

not be that far away. It's very hard to

44:34

predict, but I think we might get it in

44:36

like 20 years or even less. I made the

44:39

biggest investment I've ever made in a

44:41

company because of my girlfriend. I came

44:43

home one night and my lovely girlfriend

44:46

was up at 1:00 a.m. in the morning

44:48

pulling her hair out as she tried to

44:49

piece together her own online store for

44:53

her business. And in that moment, I

44:55

remembered an email I'd had from a guy

44:56

called John, the founder of Stan Store, our

44:59

new sponsor and a company I've invested

45:02

incredibly heavily in. And Stan Store

45:04

helps creators to sell digital products,

45:05

courses, coaching, and memberships all

45:07

through a simple customizable link in

45:10

bio system. And it handles everything,

45:12

payments, bookings, emails, community

45:14

engagement, and even links with Shopify.

45:16

And I believe in it so much that I'm

45:18

going to launch a Stan challenge. And as

45:22

part of this challenge, I'm going to

45:23

give away $100,000 to one of you. If you

45:26

want to take part in this challenge, if

45:28

you want to monetize the knowledge that

45:29

you have, visit stephenbartlet.stan

45:32

stan.store to sign up. And you'll also

45:34

get an extended 30-day free trial of

45:37

Stan Store if you use that link. Your

45:39

next move could quite frankly change

45:41

everything. Because I talked about

45:43

ketosis on this podcast and ketones, a

45:45

brand called Ketone IQ sent me their

45:47

little product here and it was on my

45:49

desk when I got to the office. I picked

45:51

it up. It sat on my desk for a couple of

45:52

weeks. Then one day, I tried it and

45:55

honestly, I have not looked back ever

45:58

since. I now have this everywhere I go

46:01

when I travel all around the world. It's

46:02

in my hotel room. My team will put it

46:03

there. Before I did the podcast

46:05

recording today that I've just finished,

46:07

I had a shot of Ketone IQ. And as is

46:09

always the case when I fall in love with

46:11

a product, I called the CEO and asked if

46:12

I could invest a couple of million quid

46:14

into their company. So, I'm now an

46:16

investor in the company as well as them

46:18

being a brand sponsor. I find it so easy

46:20

to drop into deep focused work when I've

46:22

had one of these. I would love you to

46:24

try one and see the impact it has on

46:26

you, your focus, your productivity, and

46:28

your endurance. So, if you want to try

46:29

it today, visit ketone.com/stephven

46:31

for 30% off your subscription. Plus,

46:33

you'll receive a free gift with your

46:35

second shipment. That's

46:37

ketone.com/stephven.

46:39

I'm excited for you. I am. So, what's

46:42

the difference between what we have now

46:43

and super intelligence? Because it seems

46:45

to be really intelligent to me when I

46:46

use like ChatGPT or Gemini or Okay. So

46:50

it's already AI is already better than

46:52

us at a lot of things in particular

46:55

areas like chess for example. Yeah. AI

46:59

is so much better than us that people

47:01

will never beat those things again.

47:02

Maybe the occasional win but basically

47:05

they'll never be comparable again.

47:07

Obviously the same in Go. In terms of the

47:10

amount of knowledge they have. Um

47:12

something like GPT-4 knows thousands of

47:15

times more than you do. There's a few

47:17

areas in which your knowledge is better

47:19

than its and in almost all areas it just

47:22

knows more than you do. What areas am I

47:25

better than it? Probably in interviewing

47:30

CEOs. You're probably better at that.

47:33

You've got a lot of experience at it.

47:34

You're a good interviewer. You know a

47:36

lot about it. If you got

47:39

GPT-4 to interview a CEO, it would probably do a

47:42

worse job. Okay.

47:46

I'm trying to think if I agree

47:47

with that statement. Uh GPT-4 I think for

47:50

sure. Yeah. Um but I but I guess you

47:52

could but it may not be long before

47:54

Yeah. I guess you could train one on

47:55

this how I ask questions and what I do

47:57

and Sure. And if you took a general

48:00

purpose sort of foundation model and

48:02

then you trained it up on not just you

48:05

but every every interviewer you could

48:07

find doing interviews like this but

48:10

especially you. It'll probably get to

48:12

be quite good at doing your job but

48:13

probably not as good as you for a while.

48:17

Okay. So, there's a few areas left and

48:19

then super intelligence becomes when

48:20

it's better than us at all things. When

48:23

it's much smarter than you and in almost

48:25

all things is better than you. Yeah. And

48:27

you say that this might be a

48:29

decade away or so. Yeah. It might be. It

48:32

might be even closer. Some people think

48:34

it's even closer and might well be much

48:36

further. It might be 50 years away.

48:38

That's still a possibility. It might be

48:40

that somehow training on human data

48:43

limits you to not being much smarter

48:45

than humans. My guess is between 10 and

48:47

20 years we'll have super intelligence.

48:50

On this point of joblessness, it's

48:52

something that I've been thinking a lot

48:53

about in particular because I started

48:54

messing around with AI agents and we

48:56

released an episode on the podcast

48:57

actually this morning where we had a

48:58

debate about AI agents with a CEO

49:01

of a big AI agent company and a few

49:03

other people and it was the first moment

49:05

where I had... no, it was another moment

49:08

where I had a Eureka moment about what

49:10

the future might look like when I was

49:11

able in the interview to tell this agent

49:14

to order all of us drinks and then 5

49:16

minutes later in the interview you see

49:17

the guy show up with the drinks and I

49:19

didn't touch anything. I just told it to

49:21

order us drinks to the studio. And you

49:23

didn't know about who you normally got

49:24

your drinks from. It figured that out

49:26

from the web. Yeah, figured out cuz it

49:27

went on Uber Eats. It has my data,

49:30

I guess. And we put it on the

49:32

screen in real time so everyone at home

49:33

could see the agent going through the

49:34

internet, picking the drinks, adding a

49:36

tip for the driver, putting my address

49:38

in, putting my credit card details in,

49:40

and then the next thing you see is the

49:41

drinks show up. So that was one moment.

49:44

And then the other moment was when I

49:46

used a tool called Replit and I built

49:49

software by just telling the agent what

49:50

I wanted. Yes. It's amazing, right? It's

49:53

amazing and terrifying at the same time.

49:56

Yes. Because and if it can build

49:58

software like that, right? Yeah.

50:00

Remember that the AI when it's training

50:03

is using code and if it can modify its

50:06

own code

50:08

then it gets quite scary, right? because

50:10

it can modify. It can change itself in a

50:12

way we can't change ourselves. We can't

50:14

change our innate endowment, right?

50:17

There's nothing about itself that it

50:19

couldn't change.

50:21

On this point of joblessness, you have

50:23

kids. I do. And they have kids. No, they

50:26

don't have kids. No grandkids yet. What

50:27

would you be saying to people about

50:29

their career prospects in a world of

50:31

super intelligence? What should we be

50:33

thinking about? Um, in the meantime, I'd

50:35

say it's going to be a long time before

50:37

it's as good at physical manipulation as

50:40

us. Okay. And so, a good bet would be to

50:44

be a plumber.

50:47

until the humanoid robots show up. In

50:50

such a world where there is mass

50:51

joblessness which is not something that

50:53

you just predict but this is something

50:54

that Sam Altman of OpenAI, I've heard him

50:56

predict and many of the CEOs Elon Musk I

50:59

watched an interview which I'll play on

51:00

screen of him being asked this question

51:02

and it's very rare that you see Elon

51:04

Musk silent for 12 seconds or whatever

51:05

it was and then he basically says

51:08

something about he actually is living in

51:10

suspended disbelief i.e. He's basically

51:12

just not thinking about it. When you

51:14

think about advising your children on a

51:15

career with so much that is changing,

51:18

what do you tell them is going to be of

51:19

value?

51:33

Well,

51:35

that is a tough question to answer. I

51:38

would just say, you know, to sort of

51:39

follow their heart in terms of what they

51:41

find interesting to do or

51:43

fulfilling to do. I mean, if I think

51:45

about it too hard, frankly, it can be uh

51:48

dispiriting and uh demotivating. Um

51:53

because, I mean, I go through, I mean,

51:56

I've put a lot of blood, sweat, and

51:59

tears into building the companies and

52:02

then I'm like, wait, should

52:04

I be doing this? Because if I'm

52:06

sacrificing time with friends and family

52:08

that I would prefer to, but

52:11

then ultimately the AI can do all these

52:13

things. Does that make sense? I I don't

52:16

know. Um to some extent I have to have

52:19

deliberate suspension of disbelief in

52:21

order to remain motivated. Um so I

52:26

guess I would say just you know

52:31

work on things that you find

52:32

interesting, fulfilling, and

52:34

that contribute uh some good to the rest

52:36

of society. Yeah. A lot of these threats

52:39

it's very hard to intellectually you can

52:42

see the threat but it's very hard to

52:45

come to terms with it emotionally.

52:47

Yeah. I haven't come to terms with it

52:49

emotionally yet. What do you mean by

52:50

that?

52:53

I haven't come to terms with what the

52:56

development of super intelligence could

52:58

do to my children's future.

53:01

I'm okay. I'm 77.

53:04

I'm going to be out of here soon. But

53:07

for my children and my younger

53:09

friends, my nephews and nieces and their

53:13

children, um

53:17

I just don't like to think about what

53:19

could happen.

53:23

Why? Cuz it could be awful.

53:29

In In what way?

53:32

Well, if it ever decided to take over. I

53:35

mean, it would need people for a while

53:37

to run the power stations until it

53:40

designed better analog machines to run

53:42

the power stations. There's so many ways

53:45

it could get rid of people, all of which

53:47

would of course be very nasty.

53:50

Is that part of the reason you do what

53:51

you do now? Yeah. I mean, I think we

53:54

should be making a huge effort right now

53:57

to try and figure out if we can develop

53:59

it safely. Are you concerned about the

54:01

midterm impact potentially on your

54:03

nephews and your kids in terms of

54:05

their jobs as well? Yeah, I'm concerned

54:07

about all that. Are there any particular

54:09

industries that you think are most at

54:10

risk? People talk about the creative

54:11

industries a lot and sort of knowledge

54:14

work. They talk about lawyers and

54:16

accountants and stuff like that. Yeah.

54:17

So, that's why I mentioned plumbers. I

54:19

think plumbers are less at risk. Okay,

54:21

I'm going to become a plumber. Someone

54:22

like a legal assistant, a paralegal.

54:26

Um they're not going to be needed for

54:28

very long. And is there a wealth

54:29

inequality issue here that will

54:32

arise from this? Yeah, I think in a

54:34

society which shared out things fairly,

54:37

if you get a big increase in

54:39

productivity, everybody should be better

54:41

off.

54:43

But if you can replace lots of people by

54:46

AIs,

54:48

then the people who get replaced will be

54:50

worse off

54:52

and the company that supplies the AIs

54:55

will be much better off

54:58

and the company that uses the AIs. So

55:01

it's going to increase the gap between

55:03

rich and poor. And we know that if you

55:06

look at that gap between rich and poor,

55:08

that basically tells you how nice the

55:09

society is. If you have a big gap, you

55:12

get very nasty societies in which people

55:14

live in walled communities and put other

55:17

people in mass jails. It's not good to

55:21

increase the gap between rich and poor.

55:22

The International Monetary Fund has

55:24

expressed profound concerns that

55:25

generative AI could cause massive labor

55:28

disruptions and rising inequality and

55:30

has called for policies that prevent

55:32

this from happening. I read that in the

55:34

Business Insider. So, have they given

55:36

any of what the policies should look

55:38

like? No. Yeah, that's the problem. I

55:40

mean, if AI can make everything much

55:42

more efficient and get rid of people for

55:44

most jobs or have a person assisted by AI

55:47

doing many many people's work, it's not

55:51

obvious what to do about it. It's

55:53

universal basic income,

55:55

give everybody money. Yeah, I think

55:58

that's a good start and it stops people

56:01

starving. But for a lot of people, their

56:04

dignity is tied up with their job. I

56:06

mean, who you think you are is tied up

56:08

with you doing this job, right? Yeah.

56:11

And if we said, "We'll give you the same

56:13

money just to sit around," that would

56:16

impact your dignity. You said something

56:18

earlier about it surpassing or being

56:21

superior to human intelligence. A lot of

56:23

people, I think, like to believe that AI

56:26

is on a computer and it's something

56:28

you can just turn off if you don't like

56:29

it. Well, let me tell you why I think

56:31

it's superior. Okay. Um, it's digital.

56:35

And because it's digital,

56:38

you can simulate a neural network on one

56:40

piece of hardware. Yeah. And you can

56:42

simulate exactly the same neural network

56:44

on a different piece of hardware. So you

56:47

can have clones of the same

56:48

intelligence.

56:50

Now you could get this one to go off and

56:52

look at one bit of the internet and this

56:54

other one to look at a different bit of

56:56

the internet. And while they're looking

56:58

at these different bits of the internet,

57:00

they can be syncing with each other. So

57:02

they keep their weights the same, the

57:04

connection strengths the same. Weights

57:05

are connection strengths. Mhm. So this

57:07

one might look at something on the

57:08

internet and say, "Oh, I'd like to

57:10

increase this strength of this

57:11

connection a bit." And it can convey

57:14

that information to this one. So it can

57:16

increase the strength of that connection

57:17

a bit based on this one's experience.

57:19

And when you say the strength of the

57:20

connection, you're talking about

57:22

learning. That's learning. Yes. Learning

57:24

consists of saying instead of this one

57:26

giving 2.4 votes for whether that

57:28

one should turn on. We'll have this one

57:30

give 2.5 votes for whether this one

57:31

should turn on. And that will be a

57:33

little bit of learning. So these two

57:36

different copies of the same neural net

57:39

are getting different experiences.

57:41

They're looking at different data, but

57:43

they're sharing what they've learned by

57:44

averaging their weights together. Mhm.

57:47

And they can do that averaging at like a

57:49

you can average a trillion weights. When

57:52

you and I transfer information, we're

57:54

limited to the amount of information in

57:55

a sentence. And the amount of

57:57

information in a sentence is maybe a 100

57:59

bits. It's very little information.

58:01

We're lucky if we're transferring like

58:03

10 bits a second. These things are

58:05

transferring trillions of bits a second.

58:08

So, they're billions of times better

58:09

than us at sharing information.
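What is being described here is essentially weight averaging between identical copies of one network. Below is a minimal sketch of that idea, assuming PyTorch; the model, the random stand-in data, and the sync interval are illustrative rather than anything specific from the conversation.

```python
# Two identical clones train on different data and periodically sync by
# averaging their connection strengths (weights). Illustrative sketch in PyTorch.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))

model_a = make_model()
model_b = copy.deepcopy(model_a)   # exact clone: same weights, different "hardware"
opt_a = torch.optim.SGD(model_a.parameters(), lr=0.01)
opt_b = torch.optim.SGD(model_b.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(model, opt, x, y):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

for step in range(1000):
    # Each clone looks at a different bit of the data (a different bit of "the internet").
    train_step(model_a, opt_a, torch.randn(32, 100), torch.randint(0, 10, (32,)))
    train_step(model_b, opt_b, torch.randn(32, 100), torch.randint(0, 10, (32,)))

    if step % 50 == 0:
        # Sync by averaging every weight: sharing what each clone has learned.
        # A whole model's worth of weights moves in one step, versus the roughly
        # tens of bits per second carried by a spoken sentence.
        with torch.no_grad():
            for pa, pb in zip(model_a.parameters(), model_b.parameters()):
                avg = (pa + pb) / 2
                pa.copy_(avg)
                pb.copy_(avg)
```

The averaging only works because the two copies use their weights in exactly the same way, which is where the digital-versus-analog contrast below comes in.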

58:12

And that's because they're digital. And

58:14

you can have two bits of hardware using

58:16

the connection strengths in exactly the

58:18

same way. We're analog and you can't do

58:20

that. Your brain's different from my

58:21

brain. And if I could see the connection

58:24

strengths between all your neurons, it

58:26

wouldn't do me any good because my

58:27

neurons work slightly differently and

58:29

they're connected up slightly

58:30

differently. Mhm. So when you die, all

58:33

your knowledge dies with you. When these

58:35

things die, suppose you take these two

58:38

digital intelligences that are clones of

58:39

each other and you destroy the hardware

58:42

they run on. As long as you've stored

58:44

the connection strength somewhere, you

58:46

can just build new hardware that

58:48

executes the same instructions. So,

58:51

it'll know how to use those connection

58:52

strengths and you've recreated that

58:54

intelligence. So, they're immortal.

58:56

We've actually solved the problem of

58:58

immortality, but it's only for digital

59:01

things.
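A small sketch of that immortality point, assuming PyTorch and a hypothetical filename: the intelligence is just the stored connection strengths, so any hardware that runs the same instructions can bring it back.

```python
# Store the connection strengths, lose the "hardware", then recreate the same
# intelligence on fresh hardware. Illustrative sketch; the filename is made up.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))

original = make_model()
torch.save(original.state_dict(), "connection_strengths.pt")  # the weights are all that matters
del original                                                  # the old hardware is gone

revived = make_model()                                        # new hardware, same instructions
revived.load_state_dict(torch.load("connection_strengths.pt"))
# 'revived' now computes exactly what the original would have computed.
```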

59:05

So it will essentially know everything that humans know but

59:06

more because it will learn new things.

59:09

It will learn new things. It would also

59:11

see all sorts of analogies that people

59:13

probably never saw.

59:15

So, for example, at the point when GPT-4

59:18

couldn't look on the web, I asked it,

59:21

"Why is a compost heap like an atom

59:24

bomb?"

59:26

Off you go. I have no idea. Exactly.

59:28

Excellent. That's exactly what most

59:30

people would say. It said, "Well, the

59:32

time scales are very different and the

59:35

energy scales are very different." But

59:37

then I went on to talk about how a

59:38

compost heap, as it gets hotter, generates

59:40

heat faster and an atom bomb as it

59:44

produces more neutrons generates

59:46

neutrons faster. And so they're both

59:48

chain reactions but at very different

59:50

time and energy scales. And I believe

59:52

GPT-4 had seen that during its training.

59:56

It had understood the analogy between a

59:58

compost heap and an atom bomb. And the

60:00

reason I believe that is if you've only

60:02

got a trillion connections, remember you

60:04

have 100 trillion. And you need to have

60:06

thousands of times more knowledge than a

60:08

person, you need to compress information

60:11

into those connections. And to compress

60:14

information, you need to see analogies

60:16

between different things. In other

60:18

words, it needs to see all the things

60:20

that are chain reactions and understand

60:21

the basic idea of a chain reaction and

60:23

code that, and code the ways in which they're

60:25

different. And that's just a more

60:26

efficient way of coding things than

60:28

coding each of them separately.

60:31

So it's seen many many analogies

60:33

probably many analogies that people have

60:35

never seen. That's why I also think that

60:38

people who say these things will never

60:39

be creative? They're going to be much

60:40

more creative than us because they're

60:43

going to see all sorts of analogies we

60:44

never saw. And a lot of creativity is

60:46

about seeing strange analogies.

60:49

People are somewhat romantic about the

60:50

specialness of what it is to be human.

60:52

And you hear lots of people saying it's

60:54

very, very different. It's a

60:55

computer. We are, you know, we're

60:57

conscious. We are creatives. We we have

60:59

these sort of innate unique abilities

61:02

that the computers will never have. What

61:04

do you say to those people? I'd argue a

61:06

bit with the innate. Um,

61:09

so

61:11

the first thing I say is we have a long

61:13

history of believing people were

61:15

special. And we should have learned by

61:17

now. We thought we were at the center of

61:19

the universe. We thought we were made in

61:21

the image of God. White people thought

61:24

they were very special. We just tend to

61:27

want to think we're special.

61:29

My belief is that more or less everyone

61:33

has a completely wrong model of what the

61:35

mind is. Let's suppose I drink a lot or

61:37

I drop some acid, not recommended, and

61:41

I

61:43

say to you I have the subjective

61:46

experience of little pink elephants

61:47

floating in front of me. Mhm. Most

61:50

people

61:51

interpret that as there's some kind of

61:54

inner theater called the mind

61:58

and only I can see what's in my mind and

62:01

in this inner theater there's little pink

62:04

elephants floating around.

62:06

So in other words, what's happened is my

62:08

perceptual system's gone wrong and I'm

62:10

trying to indicate to you how it's gone

62:13

wrong and what it's trying to tell me.

62:15

And the way I do that is by telling you

62:17

what would have to be out there in the

62:19

real world for it to be telling the

62:22

truth.

62:24

And so these little pink elephants,

62:27

they're not in some inner theater. These

62:29

little pink elephants are hypothetical

62:31

things in the real world. And that's my

62:33

way of telling you how my perceptual

62:36

system's telling me fibs. So now let's do

62:38

that with a chatbot. Yeah. because I

62:41

believe that current multimodal chatbots

62:43

have subjective experiences and very few

62:46

people believe that. But I'll try and

62:48

make you believe it. So suppose I have a

62:51

multimodal chatbot. It's got a robot arm

62:53

so it can point and it's got a camera so

62:55

it can see things and I put an object in

62:58

front of it and I say point at the

63:00

object. It goes like this. No problem.

63:03

Then I put a prism in front of its lens.

63:06

And so then I put an object in front of

63:07

it and I say point at the object and it

63:09

goes there.

63:11

And I say, "No, that's not where the

63:14

object is. The object's actually

63:15

straight in front of you, but I put a

63:17

prism in front of your lens." And the

63:19

chatbot says, "Oh, I see. The prism bent

63:21

the light rays." So, um, the object's

63:24

actually there, but I had the subjective

63:26

experience that it was there.

63:29

Now, if the chatbot says that, it is using

63:31

the word subjective experience exactly

63:33

the way people use them. It's an

63:35

alternative view of what's going on.

63:37

They're hypothetical states of the

63:39

world, which if they were true would

63:41

mean my perceptual system wasn't lying.

63:43

And that's the best way I can tell you

63:44

what my perceptual system is doing when

63:46

it's lying to me. Now, we need to go

63:49

further to deal with sentience and

63:51

consciousness and feelings and emotions,

63:53

but I think in the end they're all going

63:54

to be dealt with in a similar way.

63:56

There's no reason machines can't have

63:57

them all because people say machines

63:59

can't have feelings. And people are

64:02

curiously confident about that. I have

64:04

no idea why. Suppose I make a battle

64:06

robot and it's a little battle robot and

64:09

it sees a big battle robot that's much

64:12

more powerful than it. It would be

64:14

really useful if it got scared.

64:19

Now, when I get scared, um, various

64:22

physiological things happen that we

64:23

don't need to go into, and those won't

64:25

happen with the robot. But all the

64:27

cognitive things like I better get the

64:29

hell out of here and I better sort of

64:32

change my way of thinking so I focus and

64:35

focus and focus and don't get

64:36

distracted. All of that will happen with

64:39

robots, too. People will build in things

64:42

so that when the circumstances are such that

64:45

they should get the hell out of there,

64:46

they get scared and run away. They'll

64:48

have emotions then. They won't have the

64:51

physiological aspects, but they will

64:53

have all the cognitive aspects. And I

64:55

think it would be odd to say they're

64:56

just simulating emotions. No, they're

64:58

really having those emotions. The little

65:00

robot got scared and ran away. It's not

65:02

running away because of adrenaline. It's

65:04

running away because a sequence of

65:06

sort of neurological processes in its neural net

65:09

happened, which have the

65:10

equivalent effect to adrenaline.

65:13

And it's not just adrenaline,

65:15

right? There's a lot of cognitive stuff

65:16

goes on when you get scared. Yeah. So,

65:18

do you think that

65:21

there is conscious AI? And when I say

65:24

conscious, I mean that represents the

65:26

same properties of consciousness that a

65:28

human has. There's two issues here.

65:30

There's a sort of empirical one and a

65:31

philosophical one. I don't think there's

65:33

anything in principle that stops

65:36

machines from being conscious.

65:38

I'll give you a little demonstration of

65:39

that before we carry on. Suppose I take

65:42

your brain and I take one brain cell in

65:44

your brain and I replace it by this a

65:47

bit black mirror-l like. I replace it by

65:50

a little piece of nanotechnology that's

65:52

just the same size that behaves in

65:55

exactly the same way when it gets pings

65:56

from other neurons. It sends out pings

65:58

just as the brain cell would have. So

66:00

the other neurons don't know anything's

66:02

changed.

66:04

Okay. I've just replaced one of your

66:05

brain cells with this little piece of

66:07

nanotechnology. Would you still be

66:08

conscious?

66:10

Yeah. Now you can see where this

66:12

argument is going. Yeah. So if you

66:13

replaced all of them as I replace them

66:16

all, at what point do you stop being

66:17

conscious? Well, people think of

66:19

consciousness as this like ethereal

66:22

thing that exists maybe beyond the brain

66:24

cells. Yeah. Well, people have a lot of

66:26

crazy ideas.

66:29

Um, people don't know what consciousness

66:31

is and they often don't know what they

66:32

mean by it. And then they fall back on

66:35

saying, well, I know it cuz I've got it

66:37

and I can see that I've got it and they

66:39

fall back on this theater model of the

66:41

mind which I think is nonsense. What do

66:44

you think of consciousness as if you had

66:45

to try and define it? Is it because I

66:47

think of it as just like the awareness

66:48

of myself? I don't know. I think it's a

66:51

term we'll stop using. Suppose you want

66:53

to understand how a car works. Well, you

66:56

know, some cars have a lot of oomph and

66:58

other cars have a lot less oomph. Like

67:00

an Aston Martin's got lots of oomph. And

67:03

a little Toyota Corolla doesn't have

67:05

much oomph. But oomph isn't a very good

67:08

concept for understanding cars. Um, if

67:11

you want to understand cars, you need to

67:12

understand about electric engines or

67:14

petrol engines and how they work. And it

67:17

gives rise to oomph, but oomph isn't a

67:19

very useful explanatory concept. It's a

67:21

kind of essence of a car. It's the

67:23

essence of an Aston Martin, but it

67:25

doesn't explain much. I think

67:26

consciousness is like that. And I think

67:28

we'll stop using that term, but I don't

67:31

think there's any reason why a

67:33

machine shouldn't have it. If your view

67:36

of consciousness is that it

67:37

intrinsically involves self-awareness,

67:40

then the machine's got to have

67:41

self-awareness. It's got to have

67:43

cognition about its own cognition and

67:44

stuff. But

67:47

I'm a materialist through and through.

67:50

And I don't think there's any reason why

67:52

a machine shouldn't have consciousness.

67:54

Do you think they do then have the same

67:56

consciousness that we think of ourselves

67:59

as being uniquely uh given as a gift

68:02

when we're born? I'm ambivalent about

68:05

that at present. So

68:08

I don't think there's this hard line. I

68:10

think as soon as you have a machine that

68:12

has some self-awareness,

68:14

it's got some consciousness. Um, I think

68:18

it's an emergent property of a complex

68:19

system. It's not a sort of essence

68:22

that's

68:24

throughout the universe. It's you make

68:26

this really complicated system that's

68:28

complicated enough to have a model of

68:29

itself

68:30

and it does perception. And I think then

68:34

you're beginning to get conscious

68:36

machines. So I don't think there's any

68:38

sharp distinction between what we've got

68:39

now and conscious machines. I don't

68:42

think it's going to one day we're going

68:43

to wake up and say, "Hey, if you put

68:45

this special chemical in, it becomes

68:46

conscious." It's not going to be like

68:48

that. I think we all wonder if these

68:50

computers are like thinking like we are

68:53

on their own when we're not there. And

68:55

if they're experiencing emotions, if

68:56

they're contending with I think we

68:58

probably, you know, we think about

68:59

things like love and things that are

69:01

feel unique to biological species. Um,

69:04

are they sat there thinking? Do

69:07

they have concerns? I think they really

69:09

are thinking and I think as soon as you

69:11

make AI agents they will have concerns.

69:14

If you wanted to make an effective AI

69:16

agent suppose you let's take a call

69:18

center. In a call center you have people

69:21

at present they have all sorts of

69:23

emotions and feelings which are kind of

69:26

useful. So suppose I call up the call

69:28

center and I'm actually lonely and I

69:32

don't actually want to know the answer

69:34

to why my computer isn't working. I just

69:36

want somebody to talk to. After a while,

69:40

the person in the call center will

69:42

either get bored or get annoyed with me

69:45

and will terminate it.

69:47

Well, you replace them by an AI agent.

69:50

The AI agent needs to have the same kind

69:52

of responses. If someone's just called

69:54

up because they just want to talk to the

69:55

AI agent and they're happy to talk for the

69:58

whole day to the AI agent, that's not

70:00

good for business. And you want an AI

70:02

agent that either gets bored or gets

70:03

irritated and says, "I'm sorry, but I

70:06

don't have time for this." And once it

70:08

does that, I think it's got emotions.
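As a toy illustration of that kind of call-center agent, here is a sketch in which "boredom" is nothing more than a state variable that eventually changes the agent's behavior; every name, threshold, and the crude topic check are invented for the example.

```python
# Toy call-center agent whose "boredom" is a cognitive state, not physiology.
class SupportAgent:
    def __init__(self, patience: int = 3):
        self.patience = patience
        self.off_topic_turns = 0   # rising boredom/irritation

    def is_on_topic(self, message: str) -> bool:
        # Stand-in for a real intent classifier.
        return any(w in message.lower() for w in ("error", "crash", "refund", "install"))

    def respond(self, message: str) -> str:
        if self.is_on_topic(message):
            self.off_topic_turns = 0
            return "Let's look at that issue together."
        self.off_topic_turns += 1
        if self.off_topic_turns >= self.patience:
            # The behavioral side of getting bored: end the conversation.
            return "I'm sorry, but I don't have time for this."
        return "Happy to help if you have a support question."

agent = SupportAgent()
for msg in ["hello", "nice weather today", "my cat likes you", "anyway..."]:
    print(agent.respond(msg))
```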

70:12

Now, like I say, emotions have two

70:14

aspects to them. There's the cognitive

70:16

aspect and the behavioral aspect, and

70:18

then there's a physiological aspect, and

70:21

those go together with us. And if the AI

70:25

agent gets embarrassed, it won't go red.

70:27

Yeah. Um, so there's no physiological

70:29

skin won't start sweating. Yeah, but it

70:31

might have all the same behavior. And in

70:33

that case, I'd say yeah, it's having

70:34

emotion. It's got an emotion. So, it's

70:36

going to have the same sort of cognitive

70:37

thought and then it's going to act upon

70:39

that cognition in the same way, but

70:41

without the physiological responses. And

70:43

does that matter that it doesn't go red

70:45

in the face? And it's just a different I

70:47

mean, that's a response to the It makes

70:48

it somewhat different from us. Yeah. For

70:51

some things, the physiological aspects

70:53

are very important like love. They're a

70:55

long way from having love the same way

70:57

we do. But I don't see why they

71:00

shouldn't have emotions. So I think

71:02

what's happened is people have a model

71:05

of how the mind works and what feelings

71:07

are and what emotions are and their

71:10

model is just wrong. What

71:13

brought you to Google? You worked at

71:16

Google for about a decade, right? Yeah.

71:18

What brought you there? I have a son who

71:21

has learning difficulties

71:24

and in order to be sure he would never

71:26

be out on the street, I needed to get

71:30

several million dollars and I wasn't

71:32

going to get that as an academic. I

71:35

tried. So, I taught a Coursera course in

71:38

the hope that I'd make lots of money

71:39

that way, but there was no money in

71:40

that. Mhm. So I figured out well the

71:43

only way to get millions of dollars is

71:46

to sell myself to a big company.

71:51

And so when I was 65,

71:54

fortunately for me, I had two brilliant

71:57

students who produced something called

71:59

AlexNet, which was a neural net that was

72:01

very good at recognizing objects in

72:03

images. And

72:06

so Ilya and Alex and I set up a little

72:10

company and auctioned it. And we

72:12

actually set up an auction where we had

72:14

a number of big companies bidding for

72:16

us.

72:18

And that company was called AlexNet. No,

72:21

the network that recognized

72:23

objects was called AlexNet. The company

72:25

was called DNN Research, deep neural

72:27

network research. And it was doing

72:29

things like this. I'll put this graph up

72:31

on the screen. That's AlexNet.

72:33

This picture shows eight images and

72:37

AlexNet's ability, which is your company's

72:39

ability to spot what was in those

72:40

images. Yeah. So, it could tell the

72:43

difference between various kinds of

72:44

mushroom. And about 12% of ImageNet is

72:48

dogs. And to be good at ImageNet, you

72:51

have to tell the difference between very

72:53

similar kinds of dog. And it got

72:55

to be very good at that.
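For readers who want to try that kind of object recognition themselves, here is a minimal sketch that runs a pretrained AlexNet on a single image; it assumes a recent PyTorch/torchvision install, "dog.jpg" is a placeholder for an image you supply, and the normalization numbers are the standard ImageNet statistics.

```python
# Classify one image with a pretrained AlexNet (illustrative sketch).
# Assumes torchvision >= 0.13; "dog.jpg" is a placeholder image path.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

image = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    probs = model(image).softmax(dim=1)
top5 = probs.topk(5)
print(top5.indices, top5.values)  # indices into the 1,000 ImageNet classes
```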

72:57

And your company, AlexNet, won several awards, I

73:00

believe, for its ability to

73:02

outperform its competitors. And so

73:04

Google ultimately ended up acquiring

73:07

your technology. Google acquired that

73:09

technology and some other technology.

73:12

And you went to work at Google at age

73:14

what 66. I went at age 65 to work at

73:17

Google. 65. And you left at age 76? 75.

73:21

75. Okay. I worked there for more or

73:23

less exactly 10 years. And what were you

73:25

doing there? Okay, they were very nice

73:27

to me. They said pretty much

73:29

you can do what you like. I worked on

73:31

something called distillation that did

73:33

really work well

73:35

and that's now used all the time in AI

73:38

And distillation is a way of

73:40

taking what a big model, a big

73:42

neural net, knows and getting that

73:44

knowledge into a small neural net.
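A rough sketch of the distillation idea (a generic recipe, not the code used at Google): the small student network is trained to match the big teacher's softened output probabilities, so the teacher's knowledge gets squeezed into far fewer weights.

```python
# Generic knowledge-distillation sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(100, 512), nn.ReLU(), nn.Linear(512, 10))  # big model
student = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 10))    # small model
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softening the teacher's probabilities exposes more of what it knows

teacher.eval()
for step in range(1000):
    x = torch.randn(64, 100)                       # stand-in for real training inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=1)
    student_log_probs = F.log_softmax(student(x) / T, dim=1)
    # Train the student to match the teacher's soft targets.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
```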

73:46

Then at the end I got very interested in

73:48

analog computation and whether it would

73:50

be possible to get these big language

73:52

models running in analog hardware. So

73:55

they used much less energy. And it was

73:58

when I was doing that work that I began

74:00

to really realize how much better

74:02

digital is for sharing information.

74:05

Was there a Eureka moment?

74:08

There was a Eureka month or two. Um and

74:11

it was a sort of coupling of ChatGPT

74:14

coming out although Google had very

74:15

similar things a year earlier and I'd

74:19

seen those and that had a big effect

74:20

on me. The closest I had to a

74:23

Eureka moment was when a Google system

74:26

called PaLM was able to say why a joke

74:29

was funny. And I'd always thought of

74:31

that as a kind of landmark. If it can

74:33

say why a joke's funny, it really does

74:35

understand and it could say why a joke

74:38

was funny.

74:41

And that coupled with realizing why

74:43

digital is so much better than analog

74:45

for sharing information

74:47

suddenly made me very interested in AI

74:50

safety and that these things were going

74:53

to get a lot smarter than us. Why did

74:56

you leave Google? The main reason I left

74:58

Google was cuz I was 75 and I wanted to

75:01

retire. I've done a very bad job of

75:04

that. The precise timing of when I left

75:07

Google was so that I could talk freely

75:09

at a conference at MIT, but I left

75:11

because I'm old and I was finding

75:15

it harder to program. I was making many

75:17

more mistakes when I programmed, which

75:18

is very annoying. You wanted to talk

75:20

freely at a conference at MIT. Yes. At

75:23

MIT, organized by MIT Tech Review. What

75:25

did you want to talk about freely? AI

75:27

safety. And you couldn't do that while

75:28

you were at Google. Well, I could have

75:31

done it while I was at Google. And

75:32

Google encouraged me to stay and work on

75:33

AI safety and said I could do whatever I

75:35

liked on AI safety. You kind of censor

75:38

yourself if you work for a big company.

75:40

You don't feel right saying things that

75:43

will damage the big company. Even if you

75:45

could get away with it, it just feels

75:47

wrong to me. I didn't leave because I

75:50

was cross with anything Google was

75:51

doing. I think Google actually behaved

75:52

very responsibly. When they had these

75:54

big chat bots, they didn't release them

75:57

possibly because they were worried about

75:59

their reputation. they had a very good

76:01

reputation and they didn't want to

76:02

damage it. So open AI didn't have a

76:05

reputation and so they could afford to

76:07

take the gamble. I mean there's also a

76:09

big conversation happening around how it

76:11

will cannibalize their core business in

76:13

search. There is now. Yes. Yeah. Yeah.

76:16

And it's the old innovator's dilemma to

76:18

some degree, I guess, that they're contending with.

76:20

Bad skin. I've had it and I'm sure many

76:23

of you listening have had it too or

76:25

maybe you have it right now. I know how

76:28

draining it can be, especially if you're

76:30

in a job where you're presenting often

76:31

like I am. So, let me tell you about

76:33

something that's helped both my partner

76:35

and me and my sister, which is red light

76:37

therapy. I only got into this a couple

76:39

of years ago, but I wish I'd known a

76:41

little bit sooner. I've been using our

76:43

show sponsor Boncharge's infrared sauna

76:45

blanket for a while now, but I just got

76:47

hold of their red light therapy mask as

76:49

well. Red light has been proven to have

76:51

so many benefits for the body. Like any

76:53

area of your skin that's exposed will

76:55

see a reduction in scarring, wrinkles,

76:57

and even blemishes. It also helps with

76:59

complexion. It boosts collagen, and it

77:01

does that by targeting the upper layers

77:03

of your skin. And Boncharge ships

77:05

worldwide with easy returns and a

77:07

year-long warranty on all of their

77:09

products. So, if you'd like to try it

77:10

yourself, head over to

77:11

bondcharge.com/diary

77:13

and use code diary for 25% off any

77:16

product sitewide. Just make sure you

77:18

order through this link.

77:20

bondcharge.com/diary

77:22

with code diary. Make sure you keep what

77:24

I'm about to say to yourself. I'm

77:26

inviting 10,000 of you to come even

77:29

deeper into the diary of a CEO. Welcome

77:31

to my inner circle. This is a brand new

77:34

private community that I'm launching to

77:35

the world. We have so many incredible

77:37

things that happen that you are never

77:39

shown. We have the briefs that are on my

77:41

iPad when I'm recording the

77:42

conversation. We have clips we've never

77:44

released. We have behindthe-scenes

77:46

conversations with the guests. And also

77:47

the episodes that we've never ever

77:50

released and so much more. In the

77:53

circle, you'll have direct access to me.

77:55

You can tell us what you want this show

77:56

to be, who you want us to interview, and

77:58

the types of conversations you would

78:00

love us to have. But remember, for now,

78:02

we're only inviting the first 10,000

78:04

people that join before it closes. So,

78:07

if you want to join our private closed

78:08

community, head to the link in the

78:09

description below or go to

78:10

daccircle.com.

78:14

I will speak to you there.

78:16

I'm continually shocked by the types of

78:17

individuals that listen to this

78:19

conversation um because they come up to

78:20

me sometimes. So I hear from

78:21

politicians, I hear from some real

78:23

people, I hear from entrepreneurs all

78:24

over the world, whether they are the

78:26

entrepreneurs building some of the

78:27

biggest companies in the world or their,

78:28

you know, early stage startups. For

78:31

those people that are listening to this

78:33

conversation now that are in positions

78:35

of power and influence,

78:38

world leaders, let's say, what's your

78:40

message to them?

78:42

I'd say what you need is highly

78:44

regulated capitalism. That's what seems

78:45

to work best. And what would you say to

78:47

the average person

78:49

who doesn't work in the industry,

78:51

somewhat concerned about the future,

78:54

doesn't know if they're helpless or not.

78:56

What should they be doing in their own

78:57

lives?

78:59

My feeling is there's not much they can

79:01

do. This isn't going to be decided

79:04

by just as climate change isn't going to

79:06

be decided by people separating out the

79:09

plastic bags from the um compostables.

79:12

That's not going to have much effect.

79:14

It's going to be decided by whether the

79:16

lobbyists for the big energy companies

79:18

can be kept under control. I don't think

79:21

there's much people can do, except

79:26

try and pressure their governments to

79:30

force the big companies to work on AI

79:32

safety. That they can do.

79:36

You've lived a a fascinating fascinating

79:39

winding life. I think one of the things

79:40

most people don't know about you is that

79:42

your family has a

79:45

big history of being involved in

79:47

tremendous things. You have a family

79:49

tree which is one of the most impressive

79:51

that I've ever seen or read about. Your

79:54

great-great-grandfather George Boole

79:57

founded Boolean algebra, the logic which

79:59

is one of the foundational principles of

80:01

modern computer science. You have uh

80:03

your great-great-grandmother Mary

80:05

Everest Boole, who was a mathematician and

80:07

educator who made huge leaps forward in

80:11

mathematics from what I was able to

80:13

ascertain. Um, I mean, the list goes

80:15

on and on and on. I mean, your great

80:16

great uncle George Everest is what Mount

80:20

Everest is named after.

80:22

Is that correct? I think he's my

80:25

great-great-great-uncle. His niece

80:30

married George Boole.

80:33

So Mary Boole was Mary Everest Boole.

80:36

Um she was the niece of Everest. And

80:39

your first cousin once removed, Joan

80:41

Hinton, was a nuclear

80:43

physicist who worked on the Manhattan

80:45

project, which is the World War II

80:47

development of the first nuclear bomb.

80:48

Yeah. She was one of the two female

80:51

physicists at Los Alamos.

80:53

And then after they dropped the bomb,

80:56

she moved to China. Why? She was very

80:59

cross with them dropping the bomb. And

81:02

her family had a lot of links with

81:04

China. Her mother was friends with

81:08

Chairman Mao.

81:10

Quite weird.

81:13

When you look back at your life,

81:14

Jeffrey,

81:16

with the hindsight you have now and

81:18

the retrospective clarity,

81:22

what might you have done differently if

81:24

you were advising me?

81:26

I guess I have two pieces of advice. One

81:30

is if you have an intuition that people

81:34

are doing things wrong and there's a

81:35

better way to do things, don't give up

81:38

on that intuition just because people

81:40

say it's silly. Don't give up on the

81:42

intuition until you figured out why it's

81:44

wrong. Figure out for yourself why that

81:47

intuition isn't correct. And usually

81:50

it's wrong if it disagrees with

81:52

everybody else and you'll eventually

81:54

figure out why it's wrong.

81:56

But just occasionally you'll have an

81:58

intuition that's actually right and

82:00

everybody else is wrong. And I lucked

82:02

out that way. Early on I thought neural

82:05

nets are definitely the way to go to

82:08

make AI and almost everybody said that

82:11

was crazy and I stuck with it because I

82:13

couldn't. It seemed to me it was

82:15

obviously right.

82:17

Now the idea that you should stick with

82:19

your intuitions isn't going to work if

82:22

you have bad intuitions. But if you have

82:24

bad intuitions, you're never going to do

82:26

anything anyway, so you might as well

82:27

stick with them.

82:30

And in your own career journey, is there

82:32

anything you look back on and say, "With

82:33

the hindsight I have now, I should have

82:35

taken a different approach at that

82:36

juncture."

82:38

I wish I'd spent more time with my wife

82:42

um

82:47

and with my children when they were

82:48

little.

82:50

I was kind of obsessed with work.

82:55

Your wife passed away. Yeah. From

82:57

ovarian cancer. No. Or that was another

83:00

wife. Okay. Um, I had two wives who had

83:03

cancer. Oh, really? Sorry. The first one

83:05

died of ovarian cancer and the second

83:07

one died of pancreatic cancer. And you

83:09

wish you'd spent more time with her?

83:10

With the second wife? Yeah. Who was a

83:12

wonderful person?

83:15

Why did you say that in your 70s? What

83:17

is it that you've figured out that I

83:19

might not know yet?

83:21

Oh, just cuz she's gone and I can't

83:22

spend more time with her now. Mhm.

83:26

But you didn't know that at the time.

83:30

At the time, you think

83:33

I mean it was likely I would die before

83:35

her just cuz she was a woman and I was a

83:37

man. Um I didn't

83:40

I just didn't spend enough time when I

83:42

could.

83:43

I think I inquire there because I

83:46

think there's many of us that are so

83:47

consumed with what we're doing

83:48

professionally that we kind of assume

83:51

immortality with our partners because

83:52

they've always been there. So we Yeah. I

83:54

mean she was very supportive of me

83:56

spending a lot of time working but and

83:59

why did you say your children as well?

84:01

What's the... Well, I didn't

84:02

spend enough time with them when they

84:03

were little

84:05

and you regret that now. Yeah.

84:12

If you had a closing message

84:14

for my listeners about AI and

84:16

AI safety, what would that be? Jeffrey,

84:20

there's still a chance that we can

84:22

figure out how to develop AI that won't

84:25

want to take over from us. And because

84:27

there's a chance, we should put enormous

84:30

resources into trying to figure that out

84:31

because if we don't, it's going to take

84:33

over. And are you hopeful?

84:36

I just don't know. I'm agnostic.

84:40

You must get in bed at night

84:42

and when you're thinking to yourself

84:44

about probabilities of outcomes there

84:46

must be a bias in one direction because

84:49

there certainly is for me. I imagine

84:50

everyone listening now has a

84:53

internal prediction that they might not

84:56

say out loud but of how they think it's

84:57

going to play out. I really don't know. I

85:00

genuinely don't know. I think it's

85:02

incredibly uncertain. When I'm feeling

85:05

slightly depressed, I think people are

85:08

toast, it's going to take over. While I'm

85:10

feeling cheerful, I think we'll figure

85:12

out a way. Maybe one of the facets of

85:14

being a human um is because we've always

85:17

been here, like we were saying about our

85:19

loved ones and our relationships, we

85:21

assume casually that we will always be

85:23

here and we'll always figure everything

85:25

out. But there's a beginning and an end

85:27

to everything as we saw from the

85:28

dinosaurs. I mean, yeah. And

85:32

we have to face the possibility

85:35

that unless we do something soon,

85:39

we're near the end.

85:42

We have a closing tradition on this

85:43

podcast where the last guest leaves a

85:45

question in their diary. And the

85:47

question that they've left for you is

85:54

with everything that you see ahead of

85:56

us,

85:58

what is the biggest threat you see to

86:00

human happiness?

86:04

I think the joblessness is a fairly

86:07

urgent short-term threat to human

86:09

happiness. I think if you make lots and

86:11

lots of people unemployed, even if they

86:14

get universal basic income, um they're

86:17

not going to be happy

86:19

because they need purpose. Because they

86:22

need purpose. Yes. And struggle. They

86:23

need to feel they're contributing

86:25

something. They're useful. And do you

86:27

think that outcome that there's going to

86:29

be huge job displacement is more

86:31

probable than not? Yes, I do. And what

86:34

sort of? That one I think is definitely

86:36

more probable than not. If I worked in a

86:38

call center, I'd be terrified.

86:41

And what's the time frame for that in

86:42

terms of mass job losses? I think it's

86:44

beginning to happen already. I read an

86:46

article in the Atlantic recently that

86:49

said it's already getting hard for

86:51

university graduates to get jobs. And

86:54

part of that may be that people are

86:56

already using AI for the jobs they would

86:58

have got. I spoke to the CEO of a major

87:02

company that everyone will know of, lots

87:04

of people use, and he said to me in DMs

87:07

that they used to have just over

87:08

7,000 employees. He said uh by last year

87:11

they were down to I think 5,000. He said

87:13

right now they have 3,600. And he said

87:15

by the end of summer because of AI

87:17

agents they'll be down to 3,000. So

87:19

you've got So it's happening already.

87:21

Yes. He's halved his workforce because

87:23

AI agents can now handle 80% of the

87:25

customer service inquiries and other

87:28

things. So it's happening already.

87:30

Yeah. So urgent action is needed. Yep. I

87:33

don't know what that urgent action is.

87:36

That's a tricky one because that depends

87:37

very much on the political system and

87:40

political systems are all going in the

87:42

wrong direction at present. I mean what

87:44

do we need to do? Save up money? Like do

87:46

we save money? Do we move to another

87:47

part of the world? I don't know. What

87:50

would you tell your kids to do? They

87:53

said, "Dad, like there's going to be

87:54

loads of job displacement." Because I

87:56

worked for Google for 10 years, they

87:58

have enough money. Okay. Okay. [ __ ] So,

88:01

they're not typical. What if they didn't

88:03

have money? Train to be a plumber.

88:06

Really? Yeah.

88:10

Jeffrey, thank you so much. You're the

88:12

first Nobel Prize winner that I've ever

88:15

had a conversation with, I think, in my

88:18

life. So, that's a tremendous honor. And

88:20

you received that award for a

88:22

lifetime of exceptional work and pushing

88:23

the world forward in so many profound

88:25

ways that will lead to, and that

88:28

have led to, great advancements and

88:29

things that matter so much to us. And

88:31

now you've turned this season in your

88:32

life to shining a light on some of your

88:34

own work, but also on the

88:36

broader risks of AI and how

88:40

it might impact us adversely. And

88:42

there's very few people that have worked

88:44

inside the machine of a Google or a big

88:47

tech company that have contributed to

88:48

the field of AI that are now at the very

88:51

forefront of warning us against the very

88:53

thing that they worked upon. There are

88:56

actually a surprising number of us now.

88:58

They're not as uh as public and they're

89:01

actually quite hard to get to have these

89:03

kinds of conversations because many of

89:04

them are still in that industry. So, you

89:07

know, someone who tries to contact these

89:08

people often and ask invites them to

89:10

have conversations, they often are a

89:11

little bit hesitant to speak openly.

89:13

They speak privately, but they're less

89:15

willing to speak openly because maybe

89:17

they still have some sort

89:19

of incentives at play. I have an

89:21

advantage over them, which is I'm older,

89:23

so I'm unemployed, so I can say what I

89:25

Well, there you go. So, thank you for

89:27

doing what you do. It's a real honor and

89:28

please do continue to do it. Thank you.

89:30

Thank you so much.

89:32

People

89:35

think I'm joking when I say that, but

89:36

I'm not. The plumbing thing. Yeah. Yeah.

89:41

And plumbers are pretty well paid.

89:43

[Music]

90:03

[Music]

Interactive Summary

Jeffrey Hinton, a pioneer in AI and Nobel Prize winner, discusses the profound risks and potential future of artificial intelligence. He explains his early work in neural networks, which contrasted with logic-based AI, and how this led to the development of technologies now ubiquitous in AI. Hinton left Google to speak freely about the dangers of AI, including autonomous weapons and the existential threat of superintelligence surpassing human capabilities. He highlights concerns about AI misuse, such as cyberattacks and election manipulation, and the potential for mass joblessness. Hinton also touches upon the philosophical implications of AI, including consciousness and emotions, and expresses a sense of urgency regarding AI safety research, emphasizing the need for robust regulation and global cooperation to mitigate potential negative outcomes.
