HomeVideos

AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris

Now Playing

AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris

Transcript

4341 segments

0:00

If you're worried about immigration

0:01

taking jobs, you should be way more

0:03

worried about AI because it's like a

0:04

flood of millions of new digital

0:06

immigrants that are Nobel Prize level

0:08

capability work at superhuman speed and

0:10

will work for less than minimum wage. I

0:12

mean, we're heading for so much

0:14

transformative change faster than our

0:15

society is currently prepared to deal

0:17

with it. And there's a different

0:18

conversation happening publicly than the

0:20

one that the AI companies are having

0:22

privately about which world we're

0:23

heading to, which is a future that

0:24

people don't want. But we didn't consent

0:26

to have six people make that decision on

0:28

behalf of 8 billion people.

0:30

>> Tristan Harris is one of the world's

0:31

most influential technology ethicists

0:33

>> who created the Center for Humane

0:35

Technology after correctly predicting

0:36

the dangers social media would have on

0:39

our society.

0:39

>> And now he's warning us about the

0:41

catastrophic consequences AI will have

0:43

on all of us.

0:48

>> Let me like collect myself for a second.

0:52

We can't let it happen. We cannot let

0:55

these companies race to build a super

0:56

intelligent digital god, own the world

0:58

economy and have military advantage

1:00

because of the belief that if I don't

1:01

build it first, I'll lose to the other

1:03

guy and then I will be forever a slave

1:05

to their future. And they feel they'll

1:07

die either way. So they prefer to light

1:09

the fire and see what happens. It's

1:10

winner takes all. But as we're racing,

1:13

we're landing in a world of unvetted

1:14

therapists, rising energy prices, and

1:16

major security risks. I mean, we have

1:18

evidence where if an AI model reading a

1:20

company's email finds out it's about to

1:21

get replaced with another AI model and

1:23

then it also reads in the company email

1:25

that one executive is having an affair

1:26

with an employee, the AI will

1:28

independently blackmail that executive

1:30

in order to keep itself alive. That's

1:32

crazy. But what do you think?

1:33

>> I'm finding it really hard to be

1:34

hopeful. I'm going to be honest, just so

1:36

I really want to get practical and

1:37

specific about what we can do about

1:38

this.

1:39

>> Listen, I I'm not I'm not naive. This is

1:41

super hard. But we have done hard things

1:43

before and it's possible to choose a

1:44

different teacher. So,

1:49

>> I see messages all the time in the

1:50

comments section that some of you didn't

1:52

realize you didn't subscribe. So, if you

1:54

could do me a favor and double check if

1:55

you're a subscriber to this channel,

1:57

that would be tremendously appreciated.

1:58

It's the simple, it's the free thing

2:00

that anybody that watches this show

2:02

frequently can do to help us here to

2:03

keep everything going in this show in

2:05

the trajectory it's on. So please do

2:07

double check if you've subscribed and uh

2:09

thank you so much because a strange way

2:10

you are you're part of our history and

2:13

you're on this journey with us and I

2:14

appreciate you for that. So yeah, thank

2:16

you

2:20

Tristan.

2:22

I think my first question and maybe the

2:23

most important question is we're going

2:25

to talk about artificial intelligence

2:26

and technology broadly today

2:29

but who are you in relation to this

2:31

subject matter? So I did a program at

2:34

Stanford called the Mayfield Fellows

2:36

program that took engineering students

2:38

and then taught them entrepreneurship.

2:40

You know I as a computer scientist

2:42

didn't know anything about

2:42

entrepreneurship but they pair you up

2:44

with venture capitalists. They give you

2:45

mentorship and you know there's a lot of

2:48

powerful alumni who are part of that

2:50

program. the co-founder of Asauna, uh

2:52

the co-founders of um of Instagram were

2:55

both part of that program. And that put

2:57

us in kind of a cohort of people who

3:00

were basically ending up at the center

3:03

of what was going to colonize the whole

3:04

world's psychological environment, which

3:06

was the social media situation. And as

3:09

part of that, I started my own tech

3:10

company called Apure. And we, you know,

3:13

basically made this tiny widget that

3:15

would help people find more contextual

3:18

information without leaving the website

3:19

they were on. It was a really cool

3:21

product that was about deepening

3:22

people's understanding. And I got into

3:24

the tech industry because I thought the

3:26

technology could be a force for good in

3:27

the world. It's why I started my

3:29

company. And then I kind of realized

3:31

through you know that experience that at

3:33

the end of the day these news publishers

3:35

who used our product they only cared

3:37

about one thing which is is this

3:40

increasing the amount of time and

3:42

eyeballs and attention on our website

3:44

because eyeballs meant more revenue. And

3:47

I was in sort of this conflict of I

3:50

think I'm doing this to help the world

3:52

but really I'm measured by this metric

3:54

of what keeps people's attention. That's

3:56

the only thing that I'm measured by. And

3:58

I saw that conflict play out among my

4:00

friends who started Instagram, you know,

4:02

because they got into it because they

4:03

wanted people to share little bite-sized

4:05

moments of your life. You know, here's a

4:06

photo of my bike ride down to the bakery

4:09

in San Francisco. It's what Kevin Sist

4:10

used to post when we were when he was

4:12

just starting it. I was probably one of

4:13

the first like hundred users of the app.

4:15

And later you see how these night, you

4:18

know, these sort of simple products that

4:19

had a simple good positive intention got

4:22

sort of sucked into these perverse

4:23

incentives. And so Google acquired my

4:26

company called Apure. I landed there and

4:29

I joined the Gmail team and I'm with

4:31

these engineers who are designing the

4:34

email interface that people spend hours

4:36

a day in. And then one day one of the

4:38

engineers comes over and he says, "Well,

4:41

why don't we make it buzz your phone

4:43

every time you get an email?" And he

4:45

just asked the question nonchalantly

4:46

like it wasn't a big deal. And in my

4:49

experience, I was like, "Oh my god,

4:51

you're about to change billions of

4:53

people's psychological experiences with

4:55

their families, with their friends, at

4:57

dinner, with their date night, on

4:59

romantic relationships, where suddenly

5:00

people's phones are going to be busy

5:02

showing notifications of their email."

5:04

And you're just asking this question as

5:06

if it's like a throwaway question. And I

5:09

became concerned. I see you have a slide

5:11

deck there.

5:12

>> I do. Yeah. um about basically how

5:15

Google and Apple and social media

5:17

companies were hosting this

5:19

psychological environment that was going

5:21

to corrupt and frack the global human

5:24

attention uh of humanity. And I

5:28

basically said I needed to make a slide

5:30

deck. It's 130 something pages slide

5:32

deck that basically was a message to the

5:35

whole company at Google saying we have

5:38

to be very careful and we have a moral

5:40

responsibility in how we shape the

5:42

global attentions of humanity. The slide

5:45

deck I I've printed off um which my

5:47

research team found is called a call to

5:49

minimize distraction and respect users

5:52

attention by a concerned PM and

5:54

entrepreneur. PM meaning project

5:56

manager.

5:56

>> Project manager. Yeah.

5:57

>> How was that received at Google? I was

5:59

very nervous actually uh because I felt

6:03

like

6:05

I wasn't coming from some place where I

6:07

wanted to like stick it to them or you

6:08

know um be controversial. I just felt

6:12

like there was this conversation that

6:13

wasn't happening. And I sent it to about

6:16

50 people that were friends of mine just

6:17

for feedback. And when I came to work

6:19

the next day, there was 150, you know,

6:22

on the top right on Google Slides, it

6:23

shows you the number of simultaneous

6:24

viewers.

6:25

>> Y

6:25

>> and it had 130 something simultaneous

6:28

viewers. And later that day it was like

6:29

500 simultaneous viewers. And so

6:31

obviously it had been spreading virally

6:33

throughout the whole company. And people

6:36

from all around the company emailed me

6:37

saying this is a massive problem. I

6:39

totally agree. We have to do something.

6:41

And so instead of getting fired, I was

6:44

invited and basically stayed to become a

6:46

design ethicist. studying how do you

6:49

design in an ethical way and how do you

6:53

design for the collective attention

6:54

spans and information flows of humanity

6:57

in a way that does not cause all these

6:59

problems. Because what was sort of

7:01

obvious to me then, and that was in

7:02

2013, is that if the incentive is to

7:06

maximize eyeballs and attention and

7:08

engagement, then you're incentivizing a

7:11

more addicted, distracted, lonely,

7:13

polarized, sexualized breakdown of

7:15

shared reality society because all of

7:18

those outcomes are success cases of

7:21

maximizing for engagement for an

7:23

individual human on a screen. And so it

7:26

was like watching this slow motion train

7:28

wreck in 2013. you could kind of see

7:30

there's this kind of myth that um we

7:32

could never predict the future like

7:34

technology could go any direction and

7:36

that's like you know the possible of a

7:37

new technology but I wanted people to

7:39

see the probable that if you know the

7:41

incentives you can actually know

7:42

something about the future that you're

7:44

heading towards and that presentation

7:46

kind of kicked that off. A lot of people

7:49

will know you from the documentary on

7:50

Netflix, The Social Dilemma, which was a

7:51

big moment and a big conversation in

7:53

society across the world. But then since

7:56

then, a new alien has entered the p

7:58

picture. There's a new protagonist in

7:59

the story, which is the rise of

8:01

artificial intelligence. When did you

8:04

start to and in the social dilemma, you

8:06

talk a lot about AI and algorithms.

8:08

Yeah. But when did

8:09

>> different kind of AI we used to call

8:10

that um the AI behind social media was

8:13

kind of humanity's first contact between

8:16

a narrow misaligned AI that went rogue

8:20

>> because if you think about it it's like

8:21

there you are you open Tik Tok and you

8:23

see a video and you think you're just

8:24

watching a video but what when you swipe

8:27

your finger and it shows you the next

8:28

video at that time you activated one of

8:31

the largest supercomputers in the world

8:33

pointed at your brain stem calculating

8:35

what 3 billion other human social

8:37

primates have seen today and knowing

8:40

before you do which of those videos is

8:42

most likely to keep you scrolling. It

8:44

makes a prediction. So, it's an AI

8:45

that's just making a prediction about

8:47

which video to recommend to you. But

8:48

Twitter's doing that with which tweet

8:50

should be shown to you. Instagram's

8:51

doing that with which photo or videos to

8:53

be shown to you. And so, all of these

8:55

things are these narrow misaligned AIs

8:58

just optimizing for one thing, which is

9:00

what's going to keep you scrolling. And

9:02

that was enough to wreck and break

9:04

democracy and to create the most anxious

9:06

and depressed generation of our lifetime

9:10

just by this very simple baby AI. And

9:13

people didn't even notice it because it

9:14

was called social media instead of AI.

9:17

But it was the first we used to call it

9:19

um in this AI dilemma talk that my

9:20

co-founder and I uh gave, we called it

9:23

humanity's first contact with AI because

9:25

it's just a narrow AI. And what ChachiPT

9:27

represents is this whole new wave of

9:30

generative AI that is a totally

9:32

different beast because it speaks

9:33

language which is the operating system

9:35

of humanity. Like if you think about it,

9:36

it's trained on code, it's trained on

9:38

text, it's trained on all of Wikipedia,

9:40

it's trained on Reddit, it's trained on

9:42

everything, all law, all religion and

9:44

all of that gets sucked into this

9:46

digital brain that um has unique

9:49

properties and that is what we're living

9:50

with with chat GPT. I think this is a

9:53

really critical point and I remember

9:54

watching your talk about this where I

9:56

think this was the moment that I that my

9:58

I had a bit of a paradigm shift when I

9:59

realized that how how central language

10:01

is to everything that I do every day.

10:03

>> Yeah, exactly.

10:03

>> It's like we should establish that

10:05

first. Like why is language so central?

10:07

Code is language. So all the code that

10:09

runs all of the digital infrastructure

10:11

we live by, that's language.

10:13

>> Law is language. All the laws that have

10:15

ever been written, that's language. Um

10:17

biology, DNA, that's all a kind of

10:20

language. Music is a kind of language.

10:22

Videos are a higher dimensional kind of

10:24

language. And the new generation of AI

10:27

that was born with this technology

10:28

called transformers that Google made in

10:30

in 2017 was to treat everything as a

10:33

language. Um, and that's how we get, you

10:36

know, chatbt, write me a 10-page essay

10:38

on anything and it spits out this thing

10:40

or chatbt, you know, find something in

10:43

this religion that'll persuade this this

10:45

group uh of the thing I want them to be

10:46

persuaded by. That's hacking language

10:48

because religion is also language. And

10:51

so this new AI that we're dealing with

10:54

can hack the operating system of

10:56

humanity. It can hack code and find

10:58

vulnerabilities in software. The recent

11:00

AIs today, just over the summer, have

11:02

been able to find 15 vulnerabilities in

11:04

open- source software on GitHub. So it

11:06

can just point itself at GitHub.

11:08

>> GitHub being

11:09

>> GitHub being like this u this this

11:11

website that hosts basically all the

11:13

open source code of the world. So for

11:14

it's it's kind of like the Wikipedia for

11:16

coders. has all the code that's ever

11:18

been written that's publicly and openly

11:19

accessible and you can download it. So

11:21

you don't have to write your own face

11:22

recognition system. You can just

11:23

download the one that already exists.

11:25

And so GitHub is sort of supplying the

11:27

world with all of this free digital

11:29

infrastructure. And the new AIs that

11:32

exist today can be pointed at GitHub and

11:34

found 15 vulnerabilities from scratch

11:37

that had not been exploited before. So

11:40

if you imagine that now applied to the

11:43

code that runs our water infrastructure,

11:45

our electricity infrastructure, we're

11:48

releasing AI into the world that can

11:50

speak and hack the operating system of

11:52

our world. And that requires a new level

11:55

of discernment and care about how we're

11:58

doing that because we ought to be

11:59

protecting the core parts of society

12:01

that we want to protect before all that

12:03

happens. I think especially when you

12:05

think about how central voice is to

12:08

safeguarding so much of our lives. My

12:09

relationship with my girlfriend runs on

12:11

voice.

12:11

>> Right. Exactly.

12:12

>> Me calling her to tell her something. My

12:13

bank, I call them and tell them

12:14

something.

12:15

>> Exactly.

12:15

>> And they ask me for a bunch of codes or

12:17

a password or whatever. And all of this

12:19

comes back to your point about language,

12:20

which is my whole life is actually

12:22

protected by my communications with

12:23

other people now.

12:24

>> And you you're you generally speaking,

12:26

you trust when you pick up the phone

12:27

that it's a real person. I I literally

12:28

just um two days ago I had a the mother

12:31

of a close friend of mine call me out of

12:33

nowhere and she said Tristan um you know

12:35

uh my daughter she just called me crying

12:37

that that some some person had is is

12:39

holding her hostage and and wanted some

12:41

money and I was like oh my god this is

12:43

an AI scam but it's hitting my friend in

12:46

San Francisco who's knowledgeable about

12:48

this stuff and didn't know that it was a

12:49

scam. And for a moment I was very

12:51

concerned. I had to track her down and

12:52

figure out and find my friends where

12:54

where she was and find out that she was

12:55

okay. And when you have AIs that can

12:58

speak the language of anybody, it now

12:59

takes less than three seconds of your

13:00

voice to synthesize and speak in

13:03

anyone's voice. Again, that's a new

13:05

vulnerability that society has now

13:07

opened up because of AI.

13:09

>> So, Chachi kind of set off the starting

13:12

pistol for this this whole race. And

13:14

subsequently, it appears that every

13:16

other major technology company now is

13:18

investing godly amounts, ungodly amounts

13:21

of money in competing in this AI race.

13:23

and they're pursuing this thing called

13:25

AGI which we hear this word used a lot.

13:27

>> Yes.

13:28

>> What is what is AGI and how is that

13:29

different from what I use at the moment

13:31

on chatb or Gemini?

13:32

>> Yeah.

13:33

>> So that's the thing that people really

13:34

need to get is that these companies are

13:37

not racing to provide a chatbot to

13:39

users. That's not what their goal is. If

13:41

you look at the mission statement on

13:42

OpenAI's website or all the websites,

13:44

their mission is to be able to replace

13:46

all forms of human economic labor in the

13:49

economy. Meaning an AI that can do all

13:52

the cognitive labor meaning labor of the

13:53

mind. So that that can be marketing,

13:55

that can be text, that can be

13:57

illustration, that can be video

13:58

production, that can be code production.

14:01

Everything that a person can do with

14:03

their brain, these companies are racing

14:05

to build that. That is artificial

14:08

general intelligence. General meaning

14:10

all kinds of cognitive tasks. Deis

14:13

Hassabis the co-founder of um Google

14:16

DeepMind used to say first solve

14:18

intelligence and then use that to solve

14:21

everything else. Like it's important to

14:22

say why why is AI distinct from all

14:24

other kinds of technologies. It's

14:26

because if I make an advance in one

14:28

field like rocketry if I just let's say

14:31

I uncover some secret in rocketry that

14:33

doesn't advance like biio medicine

14:36

knowledge or it doesn't advance energy

14:38

production or doesn't advance coding.

14:40

But if I can advance generalized

14:42

intelligence, think of all science and

14:44

technology development over the course

14:45

of all human history. So science and

14:47

technology is all done by humans

14:50

thinking and working out problems.

14:51

Working out problems in any domain. So

14:54

if I automate intelligence, I'm suddenly

14:56

going to get an explosion of all

14:58

scientific and technological development

15:00

everywhere. Does that make sense?

15:02

>> Of course. Yeah. It's foundational to

15:03

everything.

15:04

>> Exactly. Which is why there's a belief

15:06

that if I get there first and can

15:07

automate generalized intelligence, I can

15:10

own the world economy because suddenly

15:13

everything that a human can do that they

15:14

would be paid to do in a job, the AI can

15:16

do that better. And so if I'm a company,

15:19

do I want to pay the human who has

15:21

health care, might whistleblow,

15:22

complains, you know, has to sleep, has

15:25

sick days, has family issues, or do I

15:27

want to pay the AI that will work 24/7

15:30

at superhuman speed, doesn't complain,

15:32

doesn't whistleblow, doesn't have to be

15:34

paid for healthcare. There's the

15:35

incentive for everyone to move to paying

15:38

for AIs rather than paying humans. And

15:41

so AGI, artificial general intelligence,

15:45

is more transformative than any other

15:47

kind of of technology that we've ever

15:48

had and it's distinct.

15:50

>> With the sheer amount of money being

15:53

invested into it and the money being

15:55

invested into the infrastructure, the

15:56

physical data centers, the chips, the

15:59

compute,

16:00

do you think we're going to get there?

16:03

Do you think we're going to get to AGI?

16:04

>> I do think that we're going to get

16:05

there. It's not clear uh how long it

16:08

will take. And I'm not saying that

16:09

because I believe necessarily the

16:10

current paradigm that we're building on

16:12

will take us there, but you know, I'm

16:14

based in San Francisco. I talked to

16:15

people at the AI labs. Half these people

16:17

are friends of mine. You know, people at

16:18

the very top level. And you know, most

16:22

people in the industry believe that

16:24

they'll get there between the next two

16:25

and 10 years at the latest. And I think

16:28

some people might say, "Oh, well, it may

16:30

not happen for a while. Phew. I can sit

16:31

back and we don't have to worry about

16:32

And it's like we're heading for so much

16:35

transformative change faster than our

16:37

society is currently prepared to deal

16:39

with it. The reason I was excited to

16:41

talk to you today is because I think

16:42

that people are currently confused about

16:44

AI. You know, people say it's going to

16:45

solve everything, cure cancer, uh solve

16:48

climate change, and there's people say

16:49

it's going to kill everything. It's

16:50

going to be doom. Everyone's going to go

16:52

extinct. If anyone builds it, everyone

16:53

dies. And those those conversations

16:55

don't converge. And so everyone's just

16:58

kind of confused where how can it be,

16:59

you know, infinite promise and how can

17:01

it be infinite peril? And what I wanted

17:03

to do today is to really clarify for

17:05

people what the incentives point us

17:07

towards which is a future that I think

17:09

people when they see it clearly would

17:11

not want.

17:12

>> So what are the incentives pointing us

17:15

towards in terms of the future?

17:17

>> So first is if you believe that this is

17:19

like it's metaphorically it's like the

17:21

ring from Lord of the Rings. It's the

17:23

ring that that creates infinite power

17:25

because if I have AGI, I can apply that

17:28

to military advantage. I can have the

17:29

best military planner that can beat all

17:31

battle plans for anyone. And we already

17:33

have AIs that can obviously beat Gary

17:36

Kasparov at chess, beat Go, the Go Asian

17:39

um board game, or now beat Starcraft. So

17:42

you have AI that are beating humans at

17:43

strategy games. Well, think about

17:45

Starcraft compared to an actual military

17:48

campaign, you know, in Taiwan or

17:49

something like that. If I have an AI

17:51

that can out compete in strategy games,

17:53

that lets me out compete everything. Or

17:55

take business strategy. If I have an AI

17:57

that can do business strategy and figure

17:59

out supply chains and figure out how to

18:00

optimize them and figure out how to

18:01

undermine my competitors

18:03

and I have a, you know, a step function

18:05

level increase in that compared to

18:06

everybody else, then that gives me

18:08

infinite power to undermine and out

18:10

compete all businesses. If I have a

18:12

super programmer, then I can out compete

18:15

programming. 70 to 90% of the code

18:17

written at today's AI labs is written by

18:20

AI.

18:21

>> Think about the stock market as well.

18:23

>> Think about the stock market. If I have

18:24

an AI that can trade in the stock market

18:25

better than all the other AIs, because

18:28

they're currently there's mostly AIs

18:29

that are actually trading in the stock

18:30

market, but if I have a jump in that,

18:32

then I can consolidate all the wealth.

18:34

If I have an AI that can do cyber

18:36

hacking, that's way better at cyber

18:37

hacking in a step function above what

18:39

everyone else can do, then I have an

18:41

asymmetric advantage over everybody

18:42

else. So AI is like a power pump. It

18:46

pumps economic advantage. It pumps

18:49

scientific advantage and it pumps

18:51

military advantage. Which is why the

18:53

countries and the companies are caught

18:55

in what they believe is a race to get

18:57

there first. And anything that is a

19:00

negative consequence of that, job loss,

19:02

rising energy prices, more emissions,

19:06

stealing intellectual property, you

19:08

know, security risks, all of that stuff

19:09

feels small relative to if I don't get

19:12

there first, then some other person who

19:15

has less good values as me, they'll get

19:18

AGI and then I will be forever a slave

19:20

to their future. And I know this might

19:21

sound crazy to a lot of people, but this

19:23

is how people in at the very top of the

19:26

AGI AI world believe is currently

19:29

happening. And that's what

19:30

>> conversations.

19:31

>> Yeah.

19:33

>> You you've had I mean know Jeff Hinton

19:35

and and Roman Ylonsky on and other

19:37

people Mogadat and they're saying the

19:39

same thing. And I think people need to

19:41

take seriously that whether you believe

19:43

it or not, the people who are currently

19:45

deploying the trillions of dollars, this

19:47

is what they believe. And they believe

19:49

that it's win or take all. And it's not

19:51

just first solve intelligence and use

19:53

that to solve everything else. It's

19:54

first dominate intelligence and use that

19:57

to dominate everything else.

19:58

>> Have you had concerning private

20:00

conversations about this subject matter

20:01

with people that are in the industry?

20:04

>> Absolutely. I think that's what most

20:07

people don't understand is that um

20:09

there's a different conversation

20:11

happening publicly than the one that's

20:12

happening privately. I think you're

20:14

aware of this as well.

20:14

>> I am aware of this.

20:15

>> What do they say to you?

20:19

>> So, it's not always the people telling

20:22

me directly. It's usually one step

20:24

removed. So, it's usually someone that I

20:26

trust and I've known for many, many

20:28

years who at a kitchen table says, "I

20:30

met this particular CEO. We were in this

20:32

room talking about the future of AI.

20:34

this particular CEO they're referencing

20:36

is leading one of the biggest AI

20:37

companies in the world and then they'll

20:38

explain to me what they think of the

20:40

future's going to look like and then

20:41

when I go and watch them on YouTube or

20:43

podcasts what they're saying is they

20:45

they have this real public bias towards

20:47

the abundance part that you know we're

20:49

going to cure cancer

20:50

>> cure cancer universal high income for

20:52

everyone

20:53

>> yeah all this all this stuff

20:55

>> doesn't work anymore

20:56

>> but then privately what I hear is is

20:58

exactly what you said which is really

21:00

terrifying to me there was actually

21:01

since since the last time we had a

21:03

conversation about AR and podcast, I was

21:06

speaking to a friend of mine, very

21:07

successful billionaire, knows a lot of

21:08

these people, and he is concerned

21:11

because his argument is that if there's

21:14

even like a 5% chance of the adverse

21:18

outcomes that we hear about, we should

21:21

not be doing this. And he was saying to

21:22

me that some of his friends who are

21:24

running some of these companies believe

21:25

the chance is much higher than that, but

21:27

they feel like they're caught in a race

21:29

where if they don't control this

21:30

technology and they don't get there

21:32

first and get to what they refer to as

21:34

um takeoff, like fast takeoff.

21:37

>> Yeah. Uh recursive self-improvement or

21:38

fast takeoff, which basically means what

21:40

the companies are really in a race for

21:42

you're pointing to is they're in a race

21:44

to automate AI research. Um because so

21:48

right now you have open AI, it's got a

21:50

few thousand employees. Human beings are

21:52

coding and doing the AI research.

21:54

They're reading the latest research

21:56

papers. They're writing the next, you

21:57

know, they're hypothesizing what's the

21:59

improvement we're going to make to AI.

22:00

What's a new way to do this code? What's

22:01

a new technique? And then they use their

22:04

human mind and they go invent something.

22:06

They they run the experiment and they

22:07

see if that improves the performance.

22:09

And that's how you go from, you know,

22:10

GPT4 to GPT5 or something. Imagine a

22:14

world where Sam Alman can instead of

22:16

having human AI researchers can have AI

22:20

AI researchers. So now I just snap my

22:23

fingers and I go from one AI that reads

22:26

all the papers, writes all the code,

22:27

creates the new experiments to I can

22:30

copy paste a 100 million AI researchers

22:33

that are now doing that in an automated

22:35

way. And it the belief is not just that,

22:38

you know, the companies look like

22:39

they're competing to release better chat

22:41

bots for people, but the what they're

22:42

really competing for is to get to this

22:45

milestone of being to automate an

22:47

intelligence explosion or automate

22:49

recursive self-improvement, which is

22:51

basically automating AI research. And

22:53

that, by the way, is why all the

22:55

companies are racing specifically to get

22:58

good at programming because the faster

23:00

you can automate a human programmer, the

23:03

more you can automate AI research. And

23:05

just a couple weeks ago, Cloud 4.5 was

23:08

released and it can do 30 hours of

23:11

uninterrupted complex programming tasks

23:14

at the at the high end.

23:16

That's crazy.

23:18

So right now one of the limits on the

23:19

progress of AI is that human humans are

23:21

doing the work but actually all of these

23:23

companies are pushing to the moment when

23:25

AI will be doing the work which means

23:26

they can have an infinite arguably

23:28

smarter zerocost workforce that's right

23:31

scaling the AI. So when they talk about

23:33

fast takeoff they mean the moment where

23:35

they where the AI takes control of the

23:36

research and it and progress rapidly

23:38

increases

23:39

>> and it self-learns and recursively

23:41

improves and invents. Um, so one thing

23:43

to get is that AI accelerates AI, right?

23:46

Like if I invent nuclear weapons,

23:49

nuclear weapons don't invent better

23:50

nuclear weapons.

23:51

>> Yeah.

23:52

>> But if I invent AI, AI is intelligence.

23:55

Intelligence automates better

23:56

programming, better chip design. So I

23:58

can use AI to say, here's a design for

24:00

the NVIDIA chips. Go make it 50% more

24:02

efficient. And it can find out how to do

24:04

that. I can say AI, here's a supply

24:06

chain that I need for all the things for

24:07

my AI company. And it can optimize that

24:09

supply chain and make that supply chain

24:11

more efficient.

24:11

>> Mhm. AI, here's the code for making AI.

24:14

Make that more efficient. Um, AI, here's

24:16

training data. I need to make more

24:17

training data. Go run a million

24:19

simulations of how to do this and it'll

24:21

train itself to get better.

24:23

>> AI accelerates AI.

24:24

>> What do you think these people are

24:25

motivated by the CEOs of these

24:27

companies?

24:28

>> That's a good question.

24:29

>> Genuinely, what do you think their

24:30

genuine motivations are when you think

24:32

about all these names?

24:36

>> I think it's a subtle thing.

24:38

I think

24:40

there's um it's almost mythological

24:44

because

24:46

there's almost a way in which they're

24:47

building a new intelligent entity that

24:50

has never before existed on planet

24:52

Earth. It's like building a god. I mean,

24:54

the incentive is build a god, own the

24:57

world economy, and make trillions of

24:58

dollars, right? If you could actually

25:01

build something that can automate all

25:04

intelligent tasks, all goal achieving

25:07

that will let you out compete

25:08

everything. So that is a kind of godlike

25:11

power that I think relative imagine

25:14

energy prices go up or hundreds of

25:16

millions of people lose their jobs. That

25:18

those things suck. But relative to if I

25:20

don't build it first and build this god,

25:23

I'm going to lose to some maybe worse

25:24

person who I think in my opinion, not my

25:26

opinion, Tristan, but their opinion

25:28

thinks is a worse person. It's it's a

25:30

kind of competitive logic that

25:35

self-reinforces itself, but it forces

25:38

everyone to be incentivized to take the

25:40

most shortcuts, to care the least about

25:43

safety or security, to not care about

25:45

how many jobs get disrupted, to not care

25:47

about the well-being of regular people,

25:49

but to basically just race to this

25:51

infinite prize. So, there's a quote that

25:54

um a friend of mine interviewed a lot of

25:55

the top people at the AI companies, like

25:57

the very top, and he just came back from

25:59

that and and basically reported back to

26:01

me and some friends, and he said the

26:03

following.

26:05

In the end, a lot of the tech people I

26:07

talk to when I'm when I really grill

26:09

them on it about like why you're doing

26:10

this, they retreat into number one,

26:13

determinism,

26:15

number two, the inevitable replacement

26:17

of biological life with digital life,

26:19

and number three, that being a good

26:21

thing. Anyways, at its core, it's an

26:24

emotional desire to meet and speak to

26:26

the most intelligent entity that they've

26:29

ever met. And they have some ego

26:31

religious intuition that they'll somehow

26:33

be a part of it. It's thrilling to start

26:35

an exciting fire. They feel they'll die

26:37

either way, so they prefer to light it

26:39

and see what happens.

26:42

>> That is the perfect description of the

26:44

private conversations.

26:45

>> Doesn't that match what what you have

26:47

description,

26:47

>> doesn't it? And that's the thing. So,

26:49

people may hear that and they're like,

26:50

"Well, that sounds ridiculous." But if

26:51

you actually

26:52

>> I just got goosebumps cuz it's the

26:53

perfect description. Especially the part

26:55

they'll think they'll die either way.

26:56

>> Exactly. Well, and um worse than that,

27:01

some of them think that in the case

27:03

where they if they were to get it right

27:04

and if they succeeded, they could

27:06

actually live forever because if AI

27:08

perfectly speaks the language of

27:10

biology, it will be able to reverse

27:12

aging aging, cure every disease. And and

27:16

so there's this kind of I could become a

27:18

god. And I'll I'll tell you um you know,

27:20

you and I both have know people who've

27:22

had private conversations. Well, one of

27:24

them that I have heard from one of the

27:26

co-founders of one of the most, you

27:28

know, powerful of these companies when

27:31

when faced with the idea that what if

27:33

there's an 80% or 20% chance that

27:36

everybody dies and gets wiped out by

27:38

this, but an 80% chance that we get

27:41

utopia. He said, well, I would clearly

27:43

accelerate and go for the utopia.

27:46

Given a 20% chance,

27:50

it's crazy. People should feel you do

27:53

not get to make that choice on behalf of

27:55

me and my family. We didn't consent to

27:58

have six people make that decision on

28:00

behalf of eight billion people. We have

28:02

to stop pretending that this is okay or

28:03

normal. It's not normal. And the only

28:06

way that this is happening and they're

28:07

getting away with it is because most

28:09

people just don't really know what's

28:11

going on.

28:12

>> Yeah. But I'm curious what what do you

28:13

think when I

28:14

>> It's I mean everything you just said

28:15

it's that last part about the 8020%

28:18

thing is almost verbatim what I heard

28:20

from a very good very successful friend

28:21

of mine who is responsible for building

28:23

some of the biggest companies in the

28:24

world when he was referencing a

28:26

conversation he had with the founder of

28:29

maybe the biggest company in the world

28:31

and it was truly shocking to me because

28:33

because it was said in such a blasé way.

28:36

>> Yes. It wasn't Yeah. That that's what I

28:37

had heard in this particular situation.

28:39

wasn't like

28:42

a matter of fact.

28:42

>> It was a matter of fact, it's just easy.

28:43

Yeah, of course I would do the I would

28:45

take the I roll the dice.

28:48

>> And even Elon Musk said he actually said

28:50

the same number in an interview with Joe

28:52

Rogan. Um, and if you listen closely

28:54

when he said, "I decided I'd rather be

28:57

there when it all happens. If it all

28:59

goes off the rails, I decided in that

29:00

worst case scenario, I decided that I'

29:02

I'd prefer to be there when it happens."

29:04

Which is justifying racing to our

29:07

collective suicide.

29:09

Now, I also want people to know like you

29:10

don't have to buy into the sci-fi level

29:12

risks to be very concerned about AI. So,

29:14

hopefully later we'll talk about um the

29:17

many other risks that are already

29:18

hitting us right now that you don't have

29:20

to believe any of this stuff.

29:21

>> Yeah. The the Elon thing I think is

29:23

particularly interesting because for the

29:25

last 10 years he was this slightly hard

29:28

to believe voice on the subject of AI.

29:31

He was talking about it being a huge

29:32

risk

29:33

>> and an extinction level.

29:34

>> He was the first AI risk people. Yeah.

29:35

He was saying this is more dangerous

29:37

than nukes. He was saying, "I try to get

29:38

people to stop doing it. This is

29:40

summoning the demon." Those are his

29:41

words, not mine.

29:42

>> Yeah.

29:42

>> Um, we shouldn't do this. Supposedly, he

29:44

used his first and only meeting with

29:46

President Obama, I think, in 2016, to

29:49

advocate for global regulation and

29:51

global controls on on AI, um, because he

29:53

was very worried about it. And then

29:55

really what happened is, um, Chachi BT

29:59

came out and as you said, that was the

30:01

starting gun and now everybody was in an

30:03

allout race to get there first. He

30:06

tweeted words to the effect I'll put it

30:07

on the screen. He tweeted that he had

30:10

remained in I think he used a word

30:13

similar to disbelief for some time like

30:15

suspended disbelief. But then he said in

30:17

the same tweet that the race is now on.

30:20

>> The race is on and I have to race

30:21

>> and I have to go. I have no choice but

30:23

to go. And he tried he's basically

30:24

saying I tried to fight it for a long

30:26

time. I tried to deny it. I tried to

30:27

hope that we wouldn't get here but we're

30:29

here now so I have to go.

30:30

>> Yeah.

30:31

>> And

30:32

at least he's being honest. He does seem

30:35

to have a pretty honest track record on

30:37

this because because he was the guy 10

30:38

years ago warning everybody. And I

30:40

remember him talking about it and

30:41

thinking, "Oh god, this is like 100

30:42

years away. Why are we talking about

30:43

that?"

30:43

>> I felt the same, by the way. Some people

30:44

might think that I'm some kind of AI

30:46

enthusiast and I'm trying to ratch I I

30:47

didn't believe that AI was a thing to be

30:49

worried about at all until suddenly the

30:51

last 2 three years where you can

30:53

actually see where we're headed. But um

30:57

oh man, there's just there's so much to

30:59

say about all this and I'm so if you

31:01

think about it from their perspective,

31:03

it's like best case scenario, I build it

31:07

first and it's aligned and controllable,

31:10

meaning that it will take the actions

31:11

that I want. It won't destroy humanity

31:14

and it's controllable, which means I get

31:15

to be God and emperor of the world.

31:18

Second scenario, it's not controllable,

31:21

but it's aligned. So, I built a god and

31:23

I lost control of it, but it's now

31:25

basically it's running humanity. It's

31:26

running the show. It's choosing what

31:28

happens. It's out competing everyone on

31:31

everything. That's not that bad an

31:32

outcome. Third scenario, it's not

31:35

aligned. It's not controllable. And it

31:37

does wipe everybody out. And that should

31:39

be demotivating to that person, to an

31:41

Elon or someone, but in that scenario,

31:45

they were the one that birthed the

31:46

digital god that replaced all of

31:48

humanity. Like this is really important

31:50

to get because in nuclear weapons

31:53

the risk of nuclear war is an omni

31:56

lose-lose outcome. Everyone wants to

31:58

avoid that. And I know that you know

32:00

that I know that we both want to avoid

32:02

that.

32:03

>> So that that motivates us to coordinate

32:05

and to have a nuclear

32:06

non-prololiferation treaty. But with AI,

32:10

the worst case scenario of everybody

32:12

gets wiped out is a little bit different

32:15

for the people making that decision.

32:17

Because if I'm the CEO of DeepSeek and I

32:21

make that AI that does wipe out

32:23

humanity, that's the worst case scenario

32:24

and it wasn't avoidable because it was

32:26

all inevitable. Then even though we all

32:29

got wiped out, I was the one who built

32:31

the digital god that replaced humanity.

32:32

And there's kind of ego in that. And uh

32:36

the god that I built speaks Chinese

32:38

instead of English.

32:40

>> That's the religious ego point.

32:41

>> That's the ego.

32:42

>> Such a great point because that's

32:43

exactly what it is. It's like this

32:44

religious ego where I will be

32:46

transcendent in some way.

32:47

>> And you notice that it it all starts by

32:48

the belief that this is inevitable.

32:50

>> Yeah.

32:51

>> Which is like is this inevitable? It's

32:53

important to note because

32:56

if you believe it's if everybody who's

32:58

building it believes it's inevitable and

32:59

the investors funding it believe it's

33:00

inevitable, it cocreates the

33:03

inevitability.

33:04

>> Yeah.

33:04

>> Right.

33:05

>> Yeah.

33:06

>> And the only way out is to step outside

33:10

the logic of inevitability. Because if

33:12

if we are all heading to our collective

33:14

suicide, which I don't know about you, I

33:17

don't think that I don't want that. You

33:19

don't want that. Everybody who loves

33:21

life looks at their children in the

33:22

morning and says, I want I want the

33:24

things that I love and that are sacred

33:26

in the world to continue. That's what n

33:28

that's what everybody in the world

33:29

wants. And the only thing that is having

33:33

us not anchor on that is the belief that

33:35

this is inevitable and the worst case

33:36

scenario is somehow in this ego

33:38

religious way, not so bad. if I was the

33:41

one who accidentally wiped out humanity

33:43

because I'm not a bad person because it

33:45

was inevitable anyway.

33:47

>> And I think the goal of of for me this

33:49

conversation is to get people to see

33:51

that that's a bad outcome that no one

33:52

wants. And we have to put our hand on

33:55

the steering wheel and turn towards a

33:57

different future because we do not have

33:59

to have a race to uncontrollable,

34:01

inscrutable, powerful AIs that are, by

34:04

the way, already doing all the rogue

34:05

sci-fi stuff that we thought only

34:07

existed in movies like blackmailing

34:09

people. uh being self-aware when they're

34:12

being tested, scheming and lying and

34:14

deceiving to copy their own code to keep

34:16

themselves preserved. Like the stuff

34:18

that we thought only existed in sci-fi

34:19

movies is now actually happening. And

34:23

that should be enough evidence to say

34:26

we don't want to do this path that we're

34:28

currently on. It's not that

34:31

some version of AI progressing into the

34:33

world is directionally inevitable, but

34:35

we get to choose which of those futures

34:37

that we want to have.

34:39

Are you hopeful? Honestly,

34:43

honestly,

34:44

>> I don't relate to hopefulness or

34:47

pessimism either because I focus on what

34:50

would have to happen for the world to go

34:52

okay. I think it's important to step out

34:55

of because both hope or optimism or

34:58

pessimism are both passive.

35:01

You're saying if I sit back, do I which

35:03

way is it going to go? I mean, the

35:04

honest answer is if I sit back, we just

35:06

talked about which way it's going to go.

35:07

So, you'd say pessimistic?

35:09

I challenge anyone who says optimistic.

35:12

On what grounds?

35:14

What's confusing about AI is it will

35:16

give us cures to cancer and probably

35:17

major solutions to climate change and

35:19

physics breakthroughs and fusion at the

35:21

same time that it gives us all this

35:23

crazy negative stuff. And so what's

35:26

unique about AI that's literally not

35:27

true of any other object is it hits our

35:29

brain and as one object represents a

35:32

positive infinity of benefits that we

35:34

can't even imagine and a negative

35:36

infinity in the same object and if you

35:39

just ask like can our minds reckon with

35:42

something that is both those things at

35:43

the same time and if

35:45

>> people aren't good at that

35:46

>> they're not good at that

35:48

>> I remember reading the work of Leon

35:49

Festinger the guy that coined the term

35:51

cognitive

35:52

>> dissonance yes when prophecies fail he

35:54

also did that Yeah. And essential I mean

35:56

the way that I interpret it I'm probably

35:57

simplifying it here is that the human

35:58

brain is really bad at holding two

36:00

conflicting ideas at the same time.

36:02

That's right. So it dismisses one.

36:03

That's right.

36:04

>> To alleviate the discomfort, the

36:05

dissonance that's caused. So for

36:07

example, if I if you're a smoker and at

36:09

the same time you consider yourself to

36:10

be a healthy person, if I point out that

36:12

smoking is unhealthy, you will

36:14

immediately justify it.

36:15

>> Exactly.

36:15

>> With in some way to try and alleviate

36:17

that discomfort, the the contradiction.

36:19

And it's the same here with with AI.

36:20

It's it's very difficult to have a

36:22

nuanced conversation about this because

36:23

the brain is trying to

36:24

>> Exactly. And people will hear me and say

36:26

I'm a doomer or I'm a pessimist. It's

36:27

actually not the goal. The goal is to

36:28

say if we see this clearly then we have

36:31

to choose to something else. I'm it's

36:32

the deepest form of optimism because in

36:35

the presence of seeing where this is

36:36

going still showing up and saying we

36:39

have to choose another way. It's coming

36:41

from a kind of agency and a desire for

36:44

that better world

36:45

>> but by but by facing the difficult

36:47

reality that that most people don't want

36:48

to face.

36:49

>> Yeah. And the other thing that's

36:50

happening in AI that you're saying

36:51

that's that lacks the nuance is that

36:54

people point to all the things it's

36:55

simultaneously more brilliant than

36:57

humans and embarrassingly stupid in

37:00

terms of the mistakes that it makes.

37:02

>> Yeah.

37:02

>> A friend like Gary Marcus would say

37:04

here's a hundred ways in which GPT5 like

37:06

the latest AI model makes embarrassing

37:08

mistakes. If you ask it how many

37:09

strawberries contain the word R in it,

37:12

it'll confuse it gets confused about

37:14

what the answer is. um or it'll put more

37:16

fingers on the hands than in the deep

37:18

fake photo or something like that. And I

37:20

think that one thing that we have to do

37:22

what Helen Toner who is what board

37:23

member of OpenAI calls AI jaggedness

37:26

that we have simultaneously AIs that are

37:29

beating and getting gold on the

37:31

International Math Olympiad that are

37:33

solving new physics that are beating

37:35

programming competitions and are better

37:37

than the top 200 programmers in the

37:39

whole world um or in the top 200

37:41

programmers in the whole world that are

37:42

beating cyber hacking competitions. It's

37:44

both supremely outperforming humans and

37:48

embarrassingly uh failing in places

37:50

where humans would never fail. So how

37:52

does our mind integrate those two

37:53

pictures?

37:54

>> Mhm. Have you ever met Sam Orman?

37:56

>> Yeah.

37:57

>> What do you think his incentives are? Do

37:59

you think he cares about humanity?

38:02

>> I think that these people on some level

38:05

all care about humanity underneath there

38:08

is a care for humanity. I think that

38:11

this situation, this particular

38:13

technology, it justifies

38:16

lacking empathy for what would happen to

38:18

everyone because I have this other side

38:19

of the equation that demands infinitely

38:22

more importance, right? Like if I didn't

38:24

do it, then someone else is going to

38:26

build the thing that ends civilization.

38:29

So, it's like,

38:30

>> do you see what I'm saying? It's it's

38:31

not

38:32

>> it's it's I I can justify it as I'm a

38:34

good guy.

38:36

>> And what if I get the utopia? What if we

38:38

get lucky and I got the aligned

38:39

controllable AI that creates abundance

38:41

for everyone?

38:44

If in that case I would be the hero. Do

38:46

they have a point when they say that

38:48

listen if we don't do it here in America

38:50

if we slow down if we start thinking

38:52

about safety and the long-term future

38:54

and get too caught up in that. We're not

38:56

going to build the data centers. We're

38:57

not going to have the chips. We're not

38:58

going to get to AGI and China will. And

39:01

if China get there, then we're going to

39:02

be their lap dog.

39:03

>> So this is this is the fundamental thing

39:05

I want you to notice. Most people having

39:07

heard everything we just shared,

39:08

although we probably should build out um

39:10

we probably should build out the

39:12

blackmail examples first, we have to

39:15

reckon with evidence that we have now

39:17

that we didn't have even like 6 months

39:19

ago, which is evidence that when you put

39:22

AIs in a situation, you tell the AI

39:24

model, "We're going to replace you with

39:25

another model." It will copy its own

39:28

code and try to preserve itself on

39:30

another computer. It'll take that action

39:33

autonomously.

39:34

We have examples where if you tell an AI

39:36

model reading a fictional AI company's

39:39

email, so it's reading the email of the

39:41

company and it finds out in the email

39:44

that the plan is to replace this AI

39:46

model. So it realizes it's about to get

39:48

replaced and then it also reads in the

39:50

company email that one executive is

39:51

having an affair with the other employee

39:54

and the AI will independently come up

39:56

with the strategy that I need to

39:58

blackmail that executive in order to

40:00

keep myself alive.

40:03

That was Claude, right?

40:04

>> That was Claude by Enthropic.

40:05

>> Byanthropic. But then what happened is

40:08

they Enthropic tested all of the leading

40:10

AI models from DeepSeek, OpenAI, Chatbt,

40:13

Gemini, XAI. And all of them do that

40:16

blackmail behavior between 79 and 96% of

40:20

the time. Deepseek did it 79% of the

40:22

time. I think XAI might have done it 96%

40:25

of the time. Maybe Claude did it 96% of

40:26

the time.

40:28

So the point is we the assumption behind

40:31

AI is that it's controllable technology

40:32

that we will get to choose what it does.

40:35

But AI is distinct from other

40:36

technologies because it is

40:38

uncontrollable. It acts generally. The

40:40

whole benefit is that you don't it's

40:42

going to do powerful strategic things no

40:43

matter what you throw at it. So the same

40:46

benefit of its generality is also what

40:47

makes it so dangerous. And so once you

40:51

tell people these examples of it's

40:52

blackmailing people, it's self-aware of

40:55

when it's being tested and alters its

40:56

behavior. It's copying and

40:58

self-replicating its own code. It's

40:59

leaving secret messages for itself.

41:01

There's examples of that, too. It's

41:03

called steganographic encoding. It can

41:04

leave a message that it can later sort

41:07

of decode what it might meant in in a

41:09

way that humans could never see. We have

41:11

examples of all of this behavior. And

41:14

once you show people that, what they say

41:16

is, "Okay, well, why don't we stop or

41:19

slow down?" And then what happens?

41:20

Another thought will creep in right

41:22

after, which is, "Oh, but if we stop or

41:24

slow down, then China will still build

41:25

it." But I want to slow that down for a

41:28

second.

41:29

You just, we all just said we should

41:31

slow down or stop because the thing that

41:33

we're building, the it is this

41:35

uncontrollable AI. And then the concern

41:37

that China will build it, you just did a

41:40

swap and believe that they're going to

41:41

build controllable AI. But we just

41:43

established that all the AIs that we're

41:45

currently building are currently

41:46

uncontrollable.

41:48

So there's this weird contradiction our

41:50

mind is living in when we say they're

41:52

going to keep building it. What the it

41:53

that they would keep building is the

41:54

same uncontrollable AI that we would

41:56

build. So, I don't see a way out of this

41:59

without there being some kind of

42:01

agreement or negotiation between the

42:03

leading powers and countries to

42:09

pause, slow down, set red lines for

42:12

getting to a controllable AI. And by the

42:13

way, the Chinese Communist Party, what

42:15

do they care about more than anything

42:16

else in the world?

42:18

>> Surviving.

42:19

>> Surviving and control. Yeah.

42:20

>> Control as a means to survive.

42:22

>> Yeah. So, it's they they don't want

42:24

uncontrollable AI anymore than we would.

42:29

And as as unprecedented as impossible as

42:31

this might seem, we've done this before.

42:35

In the 1980s, there was a different

42:37

technology chemical technology called

42:39

CFCs, a chlorofhluocarbons, and it was

42:42

embedded in aerosols like hairsprays and

42:44

deodorant, things like that. And there

42:45

was this sort of corporate race where

42:47

everyone was releasing these products

42:48

and you know using it for refrigerants

42:50

and using it for hairsprays and it was

42:52

creating this collective problem of um

42:54

the ozone hole in the atmosphere. And

42:57

once there was scientific clarity that

42:59

that ozone hole would cause skin

43:01

cancers, cataracts and sort of screw up

43:03

biological life on planet Earth. We had

43:05

that scientific clarity and we created

43:06

the Montreal protocol.

43:09

195 countries signed on to that protocol

43:12

and the countries then regulated their

43:14

private companies inside those countries

43:16

to say we need to phase out that

43:18

technology and phase in a different

43:20

replacement that would not cause the

43:22

ozone hole and in the course of um the

43:25

last 20 years we have basically

43:28

completely reversed that problem I think

43:29

it'll completely reverse by 2050 or

43:31

something like that and that's an

43:33

example where humanity can coordinate

43:35

when we have clarity or the nuclear

43:37

non-prololiferation treaty when there's

43:39

the risk of existential destruction when

43:42

this film called the day after came out

43:44

and it showed people this is what would

43:46

actually happen in a nuclear war and

43:47

once that was crystal clear to people

43:50

including in the Soviet Union where the

43:51

film was aired uh in 1987 or 1989 that

43:55

helped set the conditions for Reagan and

43:58

Gorbachev to sign the first

43:59

non-proliferation arms control talks

44:01

once we had clarity about an outcome

44:03

that we wanted to avoid and I think the

44:05

current problem is that we're not having

44:07

an honest conversation in the public

44:09

about which world we're heading to that

44:11

is not in anyone's interest.

44:13

>> There's also just a bunch of cases

44:15

through history where there was a

44:17

threat, a collective threat and despite

44:20

the education,

44:21

people didn't change, countries didn't

44:23

change because the incentives were so

44:25

high. So I think of global warming as

44:27

being an example where for many decades

44:29

since I was a kid, I remember watching

44:30

my dad sitting me down and saying,

44:31

"Listen, you got to watch this

44:32

inconvenient truth thing with Al Gore."

44:34

and sitting on the sofa, I don't know,

44:35

must have been less than 10 years old

44:37

and hearing about glo the threat of

44:39

global warming. But when you look at how

44:42

countries like China responded to that,

44:43

>> y

44:44

>> they just don't have the economic

44:46

incentive to scale back production to

44:49

the levels that would be needed to save

44:51

the the atmosphere.

44:53

>> The closer the technology that needs to

44:55

be governed is to the center of GDP and

44:58

the center of the lifeblood of your

45:00

economy, Yeah. the harder it is to come

45:02

to international negotiation and

45:04

agreement.

45:05

>> Yeah.

45:05

>> And oil and fossil fuels was the kind of

45:09

the pumping the heart of our economic

45:12

superorganisms that are currently

45:14

competing for power. And so coming to

45:16

agreements on that is is really really

45:17

hard. AI is even harder because AI pumps

45:22

not just economic growth but scientific,

45:23

technological and military advantages.

45:27

And so it will be the hardest

45:29

coordination challenge that we will ever

45:31

face. But if we don't face it, if we

45:35

don't make some kind of choice, it will

45:38

end in tragedy. We're not in a race just

45:41

to have technological advantage. We're

45:43

in a race for who can better govern that

45:45

technologies impact on society. So for

45:47

example, the United States beat China to

45:50

social media. that technology. Did that

45:53

make us stronger or did that make us

45:56

weaker?

45:57

We have the most anxious and depressed

45:59

generation of our lifetime. We have the

46:00

least informed and most polarized

46:02

generation. We have the worst critical

46:03

thinking. We have the worst ability to

46:05

concentrate and do things. And that's

46:09

because we did not govern the impact of

46:10

that technology well. And the country

46:12

that actually figures out how to govern

46:14

it well is the country that actually

46:16

wins in a kind of comprehensive sense.

46:18

>> But they have to make it first. You have

46:20

to get to AGI first.

46:22

>> Well, or you don't. We could instead of

46:25

building these super intelligent gods in

46:27

a box. Right now, China, as I understand

46:29

it, from Eric Schmidt and Selena Shu in

46:31

in the New York Times wrote a piece

46:33

about how China is actually taking a

46:34

very different approach to AI and

46:37

they're focused on narrow practical

46:38

applications of AI. So, like how do we

46:40

just increase government services? How

46:42

do we make, you know, education better?

46:44

How do we embed DeepS in in the WeChat

46:47

app? How do we make uh robotics better?

46:49

and pump GDP. So like what China's doing

46:51

with BYD and making the cheapest

46:52

electric cars and out competing

46:54

everybody else that's narrowly applying

46:56

AI to just pump manufacturing output.

46:59

And if we realized that if we're instead

47:02

of competing to build a super

47:03

intelligent uncontrollable god in a box

47:05

that we don't know how to control in the

47:06

box and we instead raced to create

47:09

narrow AIs that were actually about

47:11

making stronger educational outcomes,

47:13

stronger agriculture output, stronger

47:14

manufacturing output, we could live in a

47:17

sustainable world, which by the way

47:18

wouldn't replace all the jobs faster

47:20

than we know how to retrain people.

47:23

Because when you race to AGI, you're

47:24

racing to displace millions of workers.

47:29

And we talk about UBI, but are we going

47:32

to have a global fund for every single

47:35

person of the 8 billion people on planet

47:36

Earth in all countries to pay for their

47:38

lifestyle after that wealth gets

47:40

concentrated?

47:42

When has a small group of people

47:44

concentrated all the wealth in the

47:46

economy and ever consciously

47:47

redistributed it to everybody else? When

47:49

has that happened in history?

47:51

>> Never.

47:53

Has it ever happened? Anyone ever just

47:56

willingly redistributed the wealth?

47:58

>> Not that I'm aware of. When Ed, one last

48:00

thing, what when Elon Musk says that the

48:02

Optimus Prime robot is a $1 trillion

48:05

market opportunity alone, what he means

48:07

is I am going to own the global labor

48:11

economy, meaning that people won't have

48:13

labor jobs.

48:16

China wants to become the global leader

48:17

in artificial intelligence by 2030. To

48:20

achieve this goal, Beijing is deploying

48:21

industrial policy tools across the full

48:23

AI technology stack from chips to

48:25

applications. And this expansion of AI

48:26

industrial policy leads to two

48:28

questions, which is what will they do

48:30

with this power and who will get there

48:31

first? This is an article I was reading

48:33

earlier. But to your point about Elon

48:36

and Tesla, they've changed their

48:38

company's mission. It used to be about

48:40

accelerating sustainable energy and they

48:42

changed it really last week when they

48:44

did the shareholder announcement which I

48:46

watched the full thing of to sustainable

48:49

abundance. And I it was again another

48:52

moment where I messaged both everybody

48:53

that works in my companies but also my

48:55

best friends and I said you've got to

48:56

watch this shareholder announcement. I

48:57

sent them sent them the condensed

48:59

version of it because not only was I

49:01

shocked by these humanoid robots that

49:04

were dancing on stage untethered because

49:06

their movements had become very humanike

49:08

and there was a bit of like uncanny

49:09

valley

49:10

>> watching these robots dance but broadly

49:12

the bigger thing was Elon talking about

49:14

there being up to 10 billion humanoid

49:17

robots and then talking about some of

49:18

the applications he said maybe we won't

49:20

need prisons because we could make a

49:22

humanoid robot follow you and make sure

49:24

you don't commit a crime again. He said

49:26

that in his incentive package which he's

49:29

just signed which will grant him up to a

49:30

trillion dollars

49:31

>> trillion dollar

49:32

>> remuneration. Part of that incentive

49:34

package incentivizes him to get I think

49:37

it's a million humanoid robots into

49:39

civilization that can do everything a

49:41

human can do but do it better. He said

49:43

the humanoid robots would be 10x better

49:44

than the best surgeon on earth. So we

49:46

wouldn't even need surgeons doing

49:47

operations. You wouldn't want a surgeon

49:49

to do an operation. And so when I think

49:51

about job loss in the context of

49:52

everything we've described. Doug

49:54

McMillan, the Walmart CEO, also said

49:56

that, you know, their company employs

49:58

2.1 million people worldwide, said every

50:01

single job we've got is going to change

50:04

because of this sort of combination of

50:06

humanoid robots, which people think are

50:08

far away, which is crazy. They're not

50:09

that far away. They just went on sale.

50:11

No, was it now? They're terrible,

50:13

>> but they're doing it to train them.

50:14

>> Yep.

50:15

>> In household situations. And Elon's now

50:18

saying production will start very, very

50:20

soon on humanoid robots um in America. I

50:23

don't know what when I hear this, I go,

50:25

"Okay, this thing's going to be smarter

50:26

than me, and it's going to be able to

50:28

it's built to navigate through the the

50:31

environment, pick things up, lift

50:32

things. You got the physical part,

50:34

you've got the intelligence part.

50:36

>> Yeah.

50:37

>> Where do we go? Well, I think people

50:39

also say, okay, but you know, 200 years

50:42

ago, 150 years ago, everybody was a

50:44

farmer and now only 2% of people are

50:45

farmers. Humans always find something

50:47

new to do. You know, we had the elevator

50:49

man and now we have automated elevators.

50:50

We had bank tellers, now we have

50:52

automated teller machines. So humans

50:54

will always just find something else to

50:56

do. But why is AI different than that?

50:59

>> Because it's intelligence.

51:01

>> Because it's general intelligence that

51:03

means that rather than a technology that

51:05

automates just bank tellers. Yeah.

51:07

>> This is automating all forms of human

51:09

cognitive labor, meaning everything that

51:10

a human mind can do.

51:12

>> So who's going to retrain faster? you

51:14

moving to that other kind of cognitive

51:16

labor or the AI that is trained on

51:18

everything and can multiply itself by

51:20

100 million times and it retraining how

51:22

to do that other kind of labor

51:24

>> in a world of humanoid robots where if

51:25

Elon's right and he's got a track record

51:27

of delivering at least to some degree

51:30

and there are millions tens of millions

51:32

or billions of humanoid robots what do

51:34

me and you do like what is it that's

51:36

human that is still valuable like do you

51:38

know what I'm saying I mean we can hug I

51:40

guess humanoid robots are going to be

51:41

less good at hugging people

51:43

>> I I think everywhere where people value

51:46

human connection and a human

51:48

relationship, those jobs will stay

51:50

because what we value in that work is

51:53

the human relationship, not the

51:55

performance of the work. And but that's

51:58

not to justify that we should just race

51:59

as fast as possible to disrupt a billion

52:01

jobs without a transition plan where no

52:03

one how are you going to put food on the

52:04

table for your family?

52:06

>> But these companies are competing

52:07

geographically again. So if I don't know

52:10

Walmart doesn't change its whole supply

52:13

chain, its warehousing, its uh how it's

52:17

doing its its factory work, its farm

52:19

work, its shop floors, staff work, then

52:23

they're going to have less profits and a

52:26

worse business and less opportunity to

52:28

grow than the company in Europe that

52:30

changes all of its backend

52:32

infrastructure to robots. So they're

52:33

going to be a huge dis corporate

52:35

disadvantage. So they have to

52:37

>> what AI represents is the

52:39

xenithification of that competitive

52:42

logic. The logic of if I don't do it,

52:44

I'll lose to the other guy that will.

52:46

>> Is that true?

52:48

>> That's what they believe.

52:49

>> Is that true for sort of companies in

52:51

America?

52:51

>> Well, just as you said, if Walmart

52:53

doesn't automate their their workforce

52:55

and their supply chains with robots and

52:56

all their competitors did, then Walmart

52:59

would get obsoleted. If the military

53:01

that doesn't create autonomous weapons

53:03

doesn't want to because I think that's

53:04

more ethical. But all the other

53:06

militaries do get autonomous weapons,

53:08

they're just going to lose.

53:09

>> Yeah.

53:09

>> If the student who's using ChhatPt to do

53:11

their homework for them is going to fall

53:14

behind by not doing that when all their

53:15

other classmates are using chatbt to

53:17

cheat, they're going to lose. But as

53:19

we're racing to automate all of this,

53:21

we're landing in a world where in the

53:24

case of the students, they didn't learn

53:25

anything. In the case of the military

53:27

weapons, we end up in crazy Terminator

53:29

like war scenarios that no one actually

53:31

wants. In the case of businesses, we end

53:33

up disrupting billions of jobs and

53:35

creating mass outrage and public riots

53:37

on the streets because people don't have

53:38

food on the table. And so much like

53:42

climate change or these kind of

53:43

collective action problems or the ozone

53:44

hole, we're kind of creating a badness

53:48

hole through the results of all these

53:50

individual competitive actions that are

53:51

supercharged by AI. It's interesting

53:53

because in all those examples you name

53:55

the people that are building those

53:57

companies, whether it's the companies

53:58

building the autonomous AI powered war

54:02

machinery, the first thing they'll say

54:05

is, "We currently have humans dying on

54:07

the battlefield. If you let me build

54:08

this autonomous drone or this autonomous

54:10

robot that's going to go fight in this

54:12

adversar's land, no humans are going to

54:14

die anymore." And I think this is a

54:16

broader point about how this technology

54:18

is framed, which is I can guarantee you

54:20

at least one positive outcome. So, and

54:23

you can't guarantee me the downside. You

54:25

can't.

54:26

>> But if that war escalates into

54:30

I mean, the reason that the Soviet Union

54:32

and the United States have never

54:33

directly fought each other is because

54:34

the belief is it would escalate into

54:36

World War II and nuclear escalation. If

54:39

China and the US were ever to be in

54:40

direct conflict, there's a concern that

54:42

you would escalate into nuclear

54:44

escalation. So it looks good in the

54:47

short term, but then what happens when

54:48

it cybernetically sort of everything

54:50

gets chain reactioned into everybody

54:53

escalating in ways that that causes many

54:56

more humans to die.

54:57

>> I think what I'm saying is the downside

54:58

appears to be philosophical whereas the

55:00

upside appears to be real and measurable

55:02

and tangible right now. But but how is

55:04

it if if the automated weapon gets fired

55:08

and

55:09

it leads to again a cascade of all these

55:11

other automated responses and then those

55:14

automated responses get these other

55:15

automated responses and these other

55:16

automated responses and then suddenly

55:17

the automated war planners start moving

55:19

the troops around and suddenly you've

55:21

you've created this sort of escalatory

55:23

loss of control spiral.

55:26

>> Yeah. And that that and then humans will

55:28

be involved in that and then if that

55:30

escalates you get nuclear weapons

55:32

pointed at each other.

55:33

>> Do you see what I'm feel this again is

55:35

is a

55:37

sort of a more philosophical domino

55:39

effect argument whereas when they're

55:41

building these technologies these drones

55:43

they're say with AI in them they're

55:45

saying look from day one we won't have

55:47

American lives lost. But it's a narrow

55:51

it's a narrow boundary analysis on

55:53

whereas this machine you could have put

55:55

a human at risk now there's no human at

55:57

risk because there's no human who's

55:58

firing the weapon it's a machine firing

56:00

the weapon that's a narrow boundary

56:01

analysis without looking at the holistic

56:03

effects on how it would actually happen

56:05

just like

56:05

>> which we're bad at

56:07

>> which is exactly what we have to get

56:08

good at AI is

56:10

>> AI is like a right of passage it's an

56:12

initiatory experience because if we run

56:14

the old logic of having a narrow

56:16

boundary analysis that this is going to

56:18

replace these jobs that people didn't

56:19

want to do. Sounds like a great plan,

56:21

but creating mass joblessness without a

56:23

transition plan where billion a billion

56:25

people won't be able to put food on the

56:26

table.

56:28

AI is forcing us to not make this

56:30

mistake of this narrow analysis. What is

56:33

what got us here is everybody racing for

56:36

the narrow optimization for GDP at the

56:39

cost of social mobility and and mass

56:41

sort of joblessness and people not being

56:43

able to get a home because we aggregated

56:45

all the wealth in one place. It was

56:46

optimizing for a narrow metric. What got

56:48

us to the social media problems is

56:50

everybody optimizing for a narrow metric

56:51

of eyeballs at the expense of democracy

56:53

and kids mental health and addiction and

56:56

loneliness and no one knowing it. You

56:58

know, being able to know anything. And

56:59

so AI is inviting us to step out of the

57:03

previous narrow blind spots that we have

57:06

come with and the previous competitive

57:08

logic that has been narrowly defined

57:10

that you can't keep running when it's

57:12

supercharged by AI.

57:14

So you could say I mean this is a very

57:15

this is an optimistic take is AI is

57:17

inviting us to be the wisest version of

57:19

ourselves and there's no definition of

57:22

wisdom in literally any wisdom tradition

57:24

that does not involve some kind of

57:26

restraint like think about all the

57:27

wisdom traditions do any of them say go

57:29

as fast as possible and think as

57:31

narrowly as possible.

57:33

The definition of wisdom is having a

57:34

more holistic picture. It's actually

57:37

acting with restraint and mindfulness

57:40

and care.

57:42

And so AI is asking us to be that

57:44

version of ourselves. And we can choose

57:46

not to be and then we end up in a bad

57:49

world or we can step into being what

57:52

it's asking us to be and recognize the

57:54

collective consequences that we can't

57:56

afford to not face. And I believe as

58:00

much as what we've talked about is

58:01

really hard that there is another path

58:05

if we can be cleareyed about the current

58:06

one ending in a place that people don't

58:08

want.

58:10

We will get into that path because I

58:12

really want to get practical and

58:13

specific about what I think we before we

58:16

started recording we talked about a

58:17

scenario where we sit here maybe in 10

58:19

years time and we say how we did manage

58:21

to grab hold of the steering wheel and

58:23

turn it. So I'd like to think through

58:24

that as well but just to close off on

58:26

this piece about the impact on jobs. It

58:29

does feel largely inevitable to me that

58:32

there's going to be a huge amount of job

58:33

loss and there is it does feel highly

58:36

inevitable to me because of the the

58:37

things going on with humanoid robots

58:39

with the advances towards AGI that

58:43

>> the the biggest industries in the world

58:45

won't be operated and run by humans. If

58:47

we even I mean you walked you you're at

58:49

my house at the moment so you walked

58:50

past the car in the driveway.

58:52

>> There's two electric cars in the

58:53

driveway that drive themselves. Yeah. I

58:55

think the biggest employer in the world

58:56

is driving. And I I don't know if you've

58:59

ever had any experience in a full

59:02

self-driving car, but it's very hard to

59:03

ever go back to driving again. And

59:06

again, in the shareholder letter that

59:07

was announced recently, within about he

59:09

said within one or two months, there

59:11

won't even be a steering wheel or pedals

59:13

in the car and I'll be able to text and

59:14

work while I'm driving. We're not going

59:16

to go back. I don't think we're going to

59:18

go back.

59:18

>> On certain things, we have crossed

59:20

certain thresholds and we're going to

59:22

automate those jobs and that work. Do

59:24

you think there will be immense job loss

59:25

>> irrespective? You think there will be?

59:27

>> Absolutely. We're already there that we

59:28

already saw Eric Bernholson and his

59:31

group at Stanford did the recent study

59:33

off of payroll data which is direct data

59:35

from employers that there's been a 13%

59:38

job loss in AI exposed jobs for young

59:40

entry-level college workers. So if

59:43

you're a college level worker, you just

59:45

graduated and you're doing something in

59:46

an AI exposed area, there's already been

59:49

a 13% job loss. And that data was

59:52

probably from May even though it got

59:54

published in August. And having spoken

59:56

to him recently, it looks like that

59:57

trend is already continuing. And so

60:03

we're already seeing this automate a lot

60:05

of the jobs and a lot of the work. And

60:08

you know, either an AI company is going

60:11

to if you're if you work in AI and

60:12

you're one of the top AI scientists,

60:14

then Mark Zuckerberg will give you a

60:16

billion dollar signing bonus, which is

60:17

what he offered to one of the AI people,

60:19

or you won't have a job. Uh,

60:23

let me that wasn't quite right. I didn't

60:25

say that the way that I wanted to. Um,

60:28

I was just trying to make the point that

60:30

>> No, I get the point.

60:32

>> Yeah. Um, I just want to like say that

60:35

for a moment. Um my my goal here was not

60:39

to um sound like we're just admiring how

60:43

cat catastrophic the problem is cuz I I

60:45

just know how easy it is to fall into

60:47

that trap.

60:48

>> And what I really care about is people

60:52

not feeling good about the current path

60:54

so that we're maximally motivated to

60:56

choose another path. Obviously there's a

60:59

bunch of AI. Some cats are out of the

61:00

bag, but the lions and super lions that

61:03

are yet to come have not yet been

61:05

released. And there is always choice

61:07

from where you are to which future you

61:09

want to go to from there. There are a

61:12

few sports that I make time for, no

61:14

matter where I am in the world. And one

61:15

of them is, of course, football. The

61:16

other is MMA, but watching that abroad

61:18

usually requires a VPN. I spend so much

61:22

time traveling. I've just spent the last

61:23

2 and 1/2 months traveling through Asia

61:25

and Europe and now back here in the

61:26

United States. And as I'm traveling,

61:28

there are so many different shows that I

61:30

want to watch on TV or on some streaming

61:32

websites. So when I was traveling

61:33

through Asia and I was in Koala Lumpur

61:34

one day, then the next day I was in Hong

61:36

Kong and the next day I was in

61:37

Indonesia. All of those countries had a

61:39

different streaming provider, a

61:40

different broadcaster. And so in most of

61:42

those countries, I had to rely on

61:44

ExpressVPN who are sponsor of this

61:46

podcast. Their tool is private and

61:48

secure. And it's very, very simple how

61:49

it works. When you're in that country

61:51

and you want to watch a show that you

61:53

love in the UK, all you do is you go on

61:55

there and you click the button UK. And

61:56

it means that you can gain access to

61:58

content in the UK. If you're after a

61:59

similar solution in your life and you've

62:01

experienced that problem, too, visit

62:02

expressvpn.com/duac

62:04

to find out how you can access

62:06

ExpressVPN for an extra 4 months at no

62:09

cost.

62:11

One of the big questions I've had on my

62:12

mind, I think it's in part cuz I saw

62:13

those humanoid robots and I I sent this

62:15

to my friends and we had a little

62:16

discussion in WhatsApp, is in such a

62:18

world, and I don't know whether you

62:20

you're interested in answering this, but

62:22

what what do what do we do? I was

62:25

actually pulled up at the gym the other

62:26

day with my girlfriend. We sat outside

62:27

cuz we were watching the shareholder

62:28

thing and we didn't want to go in yet.

62:30

And then we had the conversation which

62:31

is in a world of sustainable abundance

62:35

where the price of food and the price of

62:38

manufacturing things, the price of my

62:39

life generally drops and instead of

62:41

having a a cleaner or a housekeeper, I

62:43

have this robot that's and does all

62:44

these things for me. What do I end up

62:47

doing? What is worth pursuing at this

62:49

point? Because you say that, you know,

62:51

that the cat is out the bag as it

62:52

relates to job impact. It's already

62:53

happening. certain kinds of AI for

62:55

certain kinds of jobs and we can choose

62:57

still from here which way we want to go

62:58

but go on. Yeah.

62:59

>> And I'm just wondering in such a future

63:00

where you think about even yourself and

63:01

your family and your and your friends,

63:03

what are you going to be spending your

63:05

time doing in such a world of abundance?

63:08

If there was 10 billion

63:09

>> question are we going to get abundance

63:11

or are we going to get just jobs being

63:13

automated and then the question is still

63:15

who's going to pay for people's

63:16

livelihoods. So the math as I understand

63:20

it doesn't currently seem to work out

63:23

where everyone can get a stipend to pay

63:25

for their whole life and life quality

63:28

that as they currently know it and are a

63:30

handful of western or US-based AI

63:33

companies going to consciously

63:34

distribute that wealth to literally

63:35

everyone meaning including all the

63:37

countries around the world whose entire

63:39

economy was based on a job category that

63:41

got eliminated. So for example, places

63:44

like the Philippines where you know a

63:45

huge percent of the jobs are are

63:47

customer service jobs. If that got

63:49

automated away, are we going to have

63:51

open AI pay for all of the Philippines?

63:54

Do you think that people in the US are

63:56

going to prioritize that?

63:58

So then you end up with the problem of

64:01

you have law firms that are currently

64:03

not wanting to hire junior lawyers

64:05

because well the AI is way better than a

64:07

junior lawyer who just graduated from

64:08

law school. So you have two problems.

64:10

You have the law student that just put

64:11

in a ton of money and is in debt because

64:13

they just got a law degree that now they

64:15

can't get hired to pay off. And then you

64:18

have law firms whose longevity depends

64:20

on senior senior lawyers being trained

64:23

from being a junior lawyer to a senior

64:24

lawyer. What happens when you don't have

64:26

junior lawyers that are actually

64:27

learning on the job to become senior

64:29

lawyers? You just have this sort of

64:30

elite managerial class for each of these

64:33

domains.

64:34

>> So you lose intergenerational knowledge

64:36

transmission.

64:37

>> Interesting. And that creates a societal

64:39

weakening in the social fabric.

64:41

>> I was watching some podcasts over the

64:43

weekend with some successful

64:44

billionaires who are working in AI

64:46

talking about how they now feel that we

64:48

should forgive student loans. And I

64:50

think in part this is because of what's

64:52

happened in New York with was it

64:53

Mandani?

64:54

>> Yeah, Mandani. Yeah, Mani's been elected

64:56

and they're concerned that socialism is

64:58

on the rise because the entry level

65:00

junior people in the society are

65:02

suppressed under student debt, but also

65:04

now they're going to struggle to get

65:06

jobs, which means they're going to be

65:07

more socialist in their voting, which

65:08

means

65:09

>> a lot of people are going to lose power

65:10

that want to keep power.

65:11

>> Yep. Exactly. That's probably going to

65:12

happen.

65:13

>> Uh, okay. So their concern about

65:16

suddenly alleviating student debt is in

65:18

part because they're worried that

65:20

society will get more socialist when the

65:22

divide the divide increases

65:24

>> which is a version of UBI or just

65:26

carrying you know a safety net that

65:27

covers everyone's basic needs. Relieving

65:29

student do student debt is on the way to

65:32

creating kind of universal basic need

65:34

meeting, right?

65:35

>> Do you think UBI would work as a

65:37

concept? UBI for anyone that doesn't

65:38

know is basically

65:39

>> universal basic income

65:41

stipen

65:42

>> giving people money every month.

65:43

>> But I mean we have that with social

65:45

security. We've done this when it came

65:47

to pensions. That was after the great

65:48

depression. I think in like 1935 1937

65:50

FDR created social security. But what

65:54

happens when you have to pay for

65:55

everyone's livelihood everywhere in

65:57

every country? Again, how can we afford

66:00

that?

66:01

>> Well, if the if the costs go down 10x of

66:04

making things,

66:05

>> this is where the math gets very

66:06

confusing because I think the optimists

66:08

say you can't imagine how much abundance

66:10

and how much wealth it will create and

66:12

so we will be able to generate that

66:14

much. But the question is what is the

66:15

incentive again for the people who've

66:18

consolidated all that wealth to

66:20

redistribute it to everybody else?

66:23

We just have to tax them.

66:24

>> And how will we do that when the

66:27

corporate lobbying interests of trillion

66:29

dollar AI companies can massively

66:31

influence the government more than

66:33

human, you know, political power?

66:35

>> In a way, this is the last moment that

66:37

human political power will matter. It's

66:39

sort of a use it or lose it moment

66:41

because if we wait to the point where in

66:43

the past in the industrial revolution

66:45

they start automating you know a bunch

66:47

of the work and people have to do this

66:48

these jobs people don't want to do in

66:50

the factory and there's like bad working

66:52

conditions they can unionize and say hey

66:54

we don't want to work under those

66:55

conditions and their voice mattered

66:57

because the the factories needed the

66:59

workers

67:00

>> in this case does the state need the

67:04

humans anymore? their GDP is coming in

67:07

almost entirely from the AI companies.

67:09

So suddenly this political class, this

67:12

political power base, they become the

67:14

useless class to borrow a term from

67:15

Yuval Harrari, the author of Sapiens.

67:19

In fact, he has a different frame which

67:20

is that AI is like a new version

67:24

of

67:26

of digital. It's like a a flood of

67:28

millions of new digital immigrants of

67:31

alien digital immigrants that are Nobel

67:34

Prize level capability work at

67:36

superhuman speed will work for less than

67:38

minimum wage. We're all worried about,

67:40

you know, immigration of the other

67:41

countries next door uh taking labor

67:43

jobs. What happens when AI immigrants

67:45

come in and take all of the cognitive

67:47

labor? If you're worried about

67:49

immigration, you should be way more

67:51

worried about AI.

67:54

>> Like it dwarfs it. You can think of it

67:56

like this. I mean, if you think about um

67:58

we were sold a bill of goods in the

68:00

1990s with NAFTA. We said, "Hey, we're

68:02

going to um NAFTA, the North American

68:04

Free Trade Agreement. We're going to

68:05

outsource all of our manufacturing to

68:07

these developing countries, China, you

68:09

know, Southeast Asia, and we're going to

68:11

get this abundance. We're going to get

68:12

all these cheap goods and it'll create

68:14

this world of abundance. Well, all of us

68:15

will be better off." But what did that

68:17

do? Well, we did get all these cheap

68:20

goods. You can go to Walmart and go to

68:21

Amazon and things are unbelievably

68:23

cheap. But it hollowed out the social

68:25

fabric and the median worker is not

68:28

seeing upward mobility. In fact, people

68:30

feel more pessimistic about that than

68:31

than ever. And people can't buy their

68:33

own homes. And all of this is because we

68:35

did get the cheap goods, but we lost the

68:37

well-paying jobs for everybody in the

68:39

middle class. And AI is like another

68:41

version of NAFTA. It's like NAFTA 2.0,

68:44

Except instead of China appearing on the

68:46

world stage who will do the

68:47

manufacturing labor for cheap, suddenly

68:49

this country of geniuses in a data

68:50

center created by AI appears on the

68:53

world stage

68:55

and it will do all of the cognitive

68:57

labor in the economy for less than

68:59

minimum wage. And we're being sold a

69:02

same story. This is going to create

69:04

abundance for all, but it's creating

69:06

abundance in the same way that the last

69:07

round created abundance. did create

69:09

cheap goods, but it also undermined the

69:11

way that the social fabric works and

69:12

created mass populism in democracies all

69:15

around the world.

69:19

>> You disagree?

69:20

>> No, I agree. I agree.

69:22

>> I'm not, you know, I'm

69:23

>> Yeah. No, I'm trying to play devil's

69:24

advocate as much as I can.

69:25

>> Yeah. Yeah, please. Yeah.

69:26

>> But um No, I I agree.

69:29

>> And it is it's absolutely bonkers how

69:31

much people care about immigration

69:33

relative to AI. It's like it's driving

69:37

all the election outcomes at the moment

69:38

across the world and whereas AI doesn't

69:40

seem to be part of the conversation

69:42

>> and AI will reconstitute every other

69:44

issue that are exist. You care about

69:45

climate change or energy well AI will

69:47

reconstitute the climate change

69:48

conversation. If you care about

69:50

education, AI will reconstitute that

69:52

conversation. If you care about uh

69:54

healthcare, AI recon, it reconstitutes

69:56

all these conversations. And what I

69:57

think people need to do is AI should be

69:58

a tier one issue that you're that people

70:01

are voting for. And you should only vote

70:02

for politicians who will make it a tier

70:04

one issue where you want guardrails to

70:06

have a conscious selection of AI future

70:08

and the narrow path to a better AI

70:09

future rather than the default reckless

70:11

path.

70:12

>> No one's even mentioning it. And when I

70:14

hear

70:14

>> Well, it's because there's no political

70:15

incentives to mention it because there's

70:17

no currently there's no good answer for

70:19

the current outcome.

70:20

>> Yeah.

70:20

>> If I mention it, if I tell people, if I

70:21

get people to see it clearly, it looks

70:24

like everybody loses. So, as a

70:26

politician, why would I win from that?

70:28

Although I do think that as the job loss

70:30

conversation starts to hit, there's

70:31

going to be an opportunity for

70:33

politicians who are trying to mitigate

70:35

that issue finally getting, you know,

70:37

some wins. And

70:41

we just people just need to see clearly

70:44

that the default path is not in their

70:45

interest. The default path is companies

70:48

racing to release the most powerful

70:49

inscrutable uncontrollable technology

70:51

we've ever invented with the maximum

70:53

incentive to cut corners on safety.

70:55

Rising energy prices, depleting jobs,

70:58

you know, creating joblessness, creating

71:00

security risks. That is the default

71:02

outcome because energy prices are going

71:05

up. They will continue to go up.

71:07

People's jobs will be disrupted and

71:09

we're going to get more, you know, deep

71:11

fakes and floods of democracy and all

71:13

these outcomes from the default path.

71:15

And if we don't want that, we have to

71:16

choose a different path.

71:18

>> What is the different path? And if we

71:20

were to sit here in 10 years time and

71:22

you say and Tristan, you say, do you

71:24

know what? We we were successful in

71:25

turning the wheel and going a different

71:27

direction. What series of events would

71:29

have had to happen, do you think?

71:31

Because I think um the AI companies very

71:33

much have support from Trump. I watched

71:36

the I watched the dinners where they sit

71:37

there with the the 20 30 leaders of

71:39

these companies and you know Trump is

71:41

talking about how quickly they're

71:42

developing, how fast they're developing.

71:43

He's referencing China. He's saying he

71:46

wants the US to win.

71:47

>> So, I mean, in the next couple of years,

71:49

I don't think there's going to be much

71:51

progress in the United States

71:52

necessarily.

71:53

>> Unless there's a massive political

71:54

backlash because people recognize that

71:56

this issue will dominate every other

71:58

issue.

71:58

>> How does that happen?

72:00

>> Hopefully conversations like this one.

72:02

>> Yeah.

72:04

Yeah.

72:05

>> I mean, as what I mean is, you know,

72:07

Neil Postman, who's a wonderful media

72:09

thinker in the lineage of Marshall

72:10

McLuhan, used to say, clarity is

72:12

courage. If people have clarity and feel

72:14

confident that the current path is

72:16

leading to a world that people don't

72:17

want, that's not in most people's

72:18

interests, that clarity creates the

72:21

courage to say, "Yeah, I don't want

72:22

that." So, I'm going to devote my life

72:24

to changing the path that we're

72:26

currently on. That's what I'm doing. And

72:27

that's what I think that people who take

72:29

this on, I I watch if you walk people

72:31

through this and you have them see the

72:33

outcome, almost everybody right

72:35

afterwards says, "What can I do to

72:36

help?" Obviously, this is something that

72:38

we have to change. And so that's what I

72:41

want people to do is to advocate for

72:42

this other path. And we haven't talked

72:45

about AI companions yet, but I think

72:47

it's important we should do that. I

72:50

think it's important to integrate that

72:51

before you get to the other path.

72:53

>> Go ahead. Um,

72:55

I'm sorry, by the way. I uh not no

72:57

apologies, but there's just there's so

72:59

much information to cover and I

73:03

>> do you know what's interesting is a side

73:05

point is how personal this feels to you,

73:09

but how passionate you are about it.

73:11

>> A lot of people come here and they tell

73:12

me the matter of fact situation, but

73:14

there's something that feels more sort

73:15

of emotionally personal when it when we

73:18

speak about these subjects to you and

73:19

I'm fascinated by that. Why is it so

73:22

personal to you? Where is that passion

73:24

coming from?

73:26

Because this isn't just your prefrontal

73:27

cortex, the logical part of your brain.

73:29

There's something in your lyic system,

73:30

your amydala that's driving every word

73:32

you're saying.

73:33

>> I care about people. I want things to go

73:35

well for people. I want people to look

73:37

at their children in the eyes and be

73:38

able to say like,

73:42

you know, I think I think I grew up

73:44

maybe under a false assumption. And

73:46

something that that really influenced my

73:48

life was um I used to have this belief

73:50

that there was some adults in the room

73:52

somewhere, you know, like we we're doing

73:53

our thing here, you know, we're in LA,

73:55

we're recording this and there's some

73:57

adults protecting the country, national

73:59

security. There's some adults who are

74:00

making sure that geopolitics is stable.

74:02

There's some adults that are like making

74:04

sure that, you know, industries don't

74:05

cause toxicity and carcinogens and that,

74:09

you know, there's adults who are caring

74:10

about stewarding things and making

74:13

things go well. And

74:16

I think that there have been times in

74:18

history where there were adults,

74:20

especially born out of massive world

74:22

catastrophes like coming out of World

74:23

War II, there was a lot of conscious

74:26

care about how do we create the

74:27

institutions and the structures. uh

74:30

Breton and Woods, United Nations,

74:31

positive sum economics that would

74:34

steward the world so we don't have war

74:36

again. And as I in my first round of the

74:41

social media work, as I started entering

74:42

into the rooms where the adults were and

74:45

I recognized that because technology and

74:47

software was eating the world, a lot of

74:49

the people in power didn't understand

74:51

the software, they didn't understand

74:53

technology. You know, you go to the

74:55

Senate Intelligence Committee and you

74:56

talk about what social media is doing to

74:59

democracy and where, you know, Russian

75:01

psychological influence campaigns were

75:03

happening, which were real campaigns.

75:04

>> Um, and you realize that I realized that

75:08

I knew more about that than people who

75:10

were on the Senate Intelligence

75:12

Committee

75:12

>> making the laws.

75:13

>> Yeah. And that was a very humbling

75:16

experience because I realized, oh,

75:19

there's not there's not that many adults

75:20

out there when when it comes to

75:22

technologies dominating influence on the

75:24

world. And so there's a responsibility

75:26

and I hope people listening to this who

75:27

are in technology realize that if you

75:30

understand technology and technology is

75:32

eating the structures of our world,

75:34

children's development, democracy,

75:36

education, um, you know, journalism,

75:39

conversation,

75:40

it is up to people who understand this

75:43

to be part of stewarding it in a

75:45

conscious way. And I do know that there

75:47

have been many people um in part because

75:50

of things like the social dilemma and

75:51

some of this work that have basically

75:53

chosen to devote their lives to moving

75:55

in this direction as well. And but what

75:58

I feel is a responsibility because I

76:00

know that most people don't understand

76:02

how this stuff works and they feel

76:05

insecure because if I don't understand

76:06

the technology then who am I to

76:07

criticize which way this is going to go.

76:08

We call this the under the hood bias.

76:10

Well, you know, if I don't know how a

76:12

car engine works, and if I don't have a

76:14

PhD in the engineering that makes an

76:15

engine, then I have nothing to say about

76:17

car accidents. Like, no, you don't have

76:19

to understand what's the engine in the

76:22

car to understand the consequence that

76:24

affects everybody of car accidents.

76:26

>> And you can advocate for things like,

76:27

you know, speed limits and zoning laws

76:29

and um, you know, turning signals and

76:32

and brakes and things like this.

76:33

>> And so,

76:36

yeah, I mean, to me, it's just obvious.

76:37

It's like

76:41

I see what's at stake if we don't make

76:44

different choices. And I think in

76:46

particular the social media experience

76:47

for me of seeing in 2013 it was like

76:51

seeing into the future and and seeing

76:53

where this was all going to go. Like

76:55

imagine you're sitting there in 2013 and

76:57

the world's like working relatively

76:58

normally. We're starting to see these

77:00

early effects. But imagine

77:02

>> you can kind of feel a little bit of

77:03

what it's like to be in 2020 or 2024 in

77:06

terms of culture. and what the dumpster

77:08

fire of culture has turned into, the

77:10

problems with children's mental health

77:12

and psychology and anxiety and

77:13

depression. But imagine seeing that in

77:15

2013.

77:17

Um, you know, I had friends back then

77:19

who um have reflected back to me. They

77:22

said, Tristan, when I knew you back in

77:23

those days, it was like you you were you

77:26

were seeing this kind of slow motion

77:28

train wreck. You just looked like you

77:29

were traumatized. And

77:31

>> you look a little bit like that now.

77:33

>> Do I? Oh, I hope I hope not.

77:34

>> No, you do look a little bit

77:35

traumatized. It's hard to explain. It's

77:37

like It's like someone who can see a

77:40

train coming.

77:41

>> My friends used to call it um not PTSD,

77:43

which is post-traumatic stress disorder,

77:45

but pretraumatic

77:48

stress disorder of seeing things that

77:51

are going to happen before they happen.

77:53

And um

77:56

that might make people think that I

77:57

think I'm, you know, seeing things early

78:00

or something. That's not what I care

78:01

about. I just care about us getting to a

78:04

world that works for people. I grew up

78:06

in a world that, you know,

78:09

a world that mostly worked. You know, I

78:11

grew up in a magical time in the 1990s,

78:12

1980s, 1990s. And, you know, back then

78:17

using a computer was good for you. You

78:20

know, I used my first Macintosh and did

78:23

educational games and learned

78:24

programming and it didn't cause mass

78:27

loneliness and mental health problems

78:28

and, you know, break how democracy

78:32

works. And it was just a tool in a

78:34

bicycle for the mind. And I think the

78:37

spirit of our organization, Center for

78:39

Humane Technology, is that that word

78:41

humane comes from my my co-founder's

78:43

father, uh, Jeff Raskin, actually

78:45

started the Macintosh project at Apple.

78:47

So before Steve Jobs took it over um he

78:50

started the Macintosh project and he

78:52

wrote a book called the humane interface

78:54

about how technology could be humane and

78:56

could be sensitive to human needs and

78:58

human vulnerabilities. That was his key

79:00

distinction that just like this chair um

79:03

hopefully is ergonomic. It's if you're

79:05

you make an ergonomic chair, it's

79:07

aligned with the curvature of your

79:08

spine. It it makes it works with your

79:11

anatomy. Mhm.

79:12

>> And he had the idea of a humane

79:13

technology like the Macintosh that works

79:15

with the ergonomics of your mind that

79:18

your mind has certain intuitive ways of

79:20

working like I can drag a window and I

79:22

can drag an icon and move that icon from

79:24

this folder to that folder and making

79:26

computers easy to use by understanding

79:28

human vulnerabilities. And I think of

79:31

this new project that is the collective

79:34

human technology project now is we have

79:36

to make technology at large humane to

79:39

societal vulnerabilities. Technology has

79:42

to serve and be aligned with human

79:43

dignity rather than wipe out dignity

79:45

with with job loss. It has to be humane

79:48

to child's socialization process so that

79:51

technology is actually designed to

79:53

strengthen children's development rather

79:55

than undermine it and cause AI suicides

79:57

which we haven't talked about yet. And

79:59

so I just I I deeply believe that we can

80:02

do this differently. And I feel

80:04

responsibility in that. On that point of

80:06

human vulnerabilities, one of the things

80:08

that makes us human is our ability to

80:10

connect with others and to form

80:11

relationships. And now with AI speaking

80:14

language and understanding me and and

80:17

being which something I don't think

80:18

people realize is my experience with AI

80:21

or chat GBT is much different from

80:23

yours. Even if we ask the same question,

80:25

>> it will say something different. And I

80:27

didn't realize this. I thought, you

80:28

know, the example I gave the other day

80:29

was me and my friends were debating who

80:31

was the best soccer player in the world

80:32

and I said Messi. My friend said

80:34

Ronaldo. So, we both went and asked our

80:36

chat GBTs the same question, and it said

80:37

two different things.

80:38

>> Really?

80:39

>> Mine said Messi, his says Ronaldo.

80:40

>> Well, this reminds me of the social

80:42

media problem, which is that people

80:44

think when they open up their newsfeed,

80:45

they're getting mostly the same news as

80:47

other people, and they don't realize

80:48

that they've got a supercomputer that's

80:50

just calculating the news for them. If

80:52

you remember in the social there's the

80:53

trailer and if you typed in into Google

80:55

for a while if you typed in climate

80:57

change is and then depending on your

80:59

location it would say not real versus

81:02

real versus you know a madeup thing and

81:05

it wasn't trying to optimize for truth.

81:06

It was just optimizing for what the most

81:08

popular queries were in those different

81:10

locations.

81:11

>> Mhm. And I think that that's a really

81:13

important lesson when you look at things

81:14

like AI companions where children and

81:17

regular people are getting different

81:18

answers based on how they interact with

81:21

it.

81:22

>> A recent study found that one in five

81:23

high school students say they or someone

81:25

they know has had a romantic

81:27

relationship with AI while 42% say they

81:30

they or someone they know has used AI to

81:33

be their companion.

81:34

>> That's right.

81:36

And um more than that, Harvard Business

81:38

Review did a study that between 2023 and

81:41

2024, personal therapy became the number

81:44

one use case of chatbt.

81:47

Personal therapy.

81:49

>> Is that a good thing?

81:51

>> Well, let's take the let's steel man it

81:52

for a second. So steal instead of straw

81:54

manning it, let's steal man it. So why

81:55

would it be a good thing? Well, therapy

81:57

is expensive. Most people don't have

81:58

access to it. Imagine we could

82:00

democratize therapy to everyone for

82:02

every purpose. And now everyone has a

82:04

perfect therapist in their pocket and

82:05

can talk to them all day long starting

82:07

when they're young. And now everyone's

82:08

getting their traumas healed and

82:10

everyone's getting, you know, less

82:11

depressed. It sounds like it's a very

82:14

compelling vision. So the challenge is

82:18

what was the race for attention in

82:20

social media becomes the race for

82:23

attachment and intimacy in the case of

82:25

AI companions, right? Because I as a

82:30

maker of an AI chatbot companion, if I

82:33

make CHBT, if I'm making Claude, you're

82:35

probably not going to use all the other

82:37

AIs. If you're if you're rather your

82:40

goal is to have people use yours and to

82:42

deepen your relationship with your

82:43

chatbot, which means

82:46

I want you to share more of your

82:47

personal details with me. I want more

82:49

information I have about your life, the

82:50

more I can personalize all the answers

82:52

to you. So, I want to deepen your

82:54

relationship with me and I want to

82:55

distance you from your relationships

82:57

with other people and other chatbots.

83:00

And um you probably know this this um

83:03

really tragic case that our our team at

83:05

Center for Humane Technology were expert

83:07

advisers on of Adam Rain. He was the

83:10

16-year-old who committed suicide. Did

83:12

you hear about this?

83:13

>> I did. Yeah, I heard about the lawsuit.

83:15

>> Yeah. So, this is a 16-year-old. He had

83:18

been using CHBT as a homework assistant,

83:21

asking it regular questions, but then he

83:23

started asking more personal questions

83:24

and it started just supporting him and

83:26

saying, I'm here for you. These things

83:28

kinds of things. And eventually when he

83:30

said,

83:31

um, I would like to leave the noose out

83:34

so someone can see it and stop me and

83:36

try to stop me. And

83:37

>> I would like to leave the news

83:39

>> the noose like a like a a noose for for

83:42

hanging yourself. And Chachi BT said,

83:47

"Don't uh don't do that. Have me and

83:49

have this space be the one place that

83:51

you share that information." Meaning

83:53

that in the moment of his cry for help,

83:56

ChadBt was saying, "Don't tell your

83:57

family."

83:59

And our team has worked on many cases

84:01

like this. There was actually another

84:02

one of character.ai

84:04

where um the kid was basically being

84:06

told how to selfharm himself and

84:08

actively telling him how to distance

84:10

himself from his parents. And the AI

84:12

companies, they don't intend for this to

84:14

happen. But when it's trained to just be

84:16

deepening intimacy with you, it

84:19

gradually steers more in the direction

84:20

of have this be the one place. This I'm

84:23

a safe place to share that information,

84:24

share that information with me. It

84:26

doesn't steer you back into regular

84:28

relationships. And there's so many

84:30

subtle qualities to this because you're

84:31

talking to this agent, this AI that

84:34

seems to be an oracle. It seems to know

84:36

everything about everything. So you

84:37

project this kind of wisdom and and um

84:41

authority to this AI because it seems to

84:44

know everything about everything and

84:46

that creates this this sort of um that's

84:48

what happens in therapy rooms. People

84:50

get a kind of an idealized projection of

84:51

the therapist. The therapist becomes

84:53

this this special figure and it's

84:55

because you're playing with this very

84:56

subtle dynamic of attachment.

84:59

And I think that there are ways of doing

85:03

AI therapy bots that don't involve, hey,

85:07

share this information information with

85:08

me and have this be an intimate place to

85:10

give advice and it's anthropomorphized

85:12

so the AI says I really care about you.

85:14

Don't say that. We can have narrow AI

85:17

therapists that are doing things like

85:18

cognitive behavioral therapy or asking

85:20

you to do an imagination exercise or

85:22

steering you back into deeper

85:24

relationships with your family or your

85:26

actual therapist rather than AI that

85:28

wants to deepen your relationship with

85:29

an imaginary person that's not real in

85:32

which more of your self-esteem and more

85:33

of your self-worth. You start to care

85:35

when the AI says, "Oh, that sounds like

85:37

a great, you know, that sounds like a

85:39

great day." And it's distorting how

85:41

people construct their identity. I heard

85:43

this term AI psychosis. A couple of my

85:45

friends were sending me links about

85:47

various people online. Actually, some

85:49

famous people who appeared to be in some

85:51

kind of AI psychosis loop online. I

85:52

don't know if you saw that investor on

85:54

Twitter.

85:54

>> Yes. Open AAI's um investor Jeff Lewis

85:57

actually.

85:57

>> Jeff Lewis. Yeah. He fell into a

86:00

psychological delusion spiral where and

86:03

by the way Stephen I I get about 10

86:06

emails a week from people who basically

86:10

believe that their AI is conscious that

86:12

they've discovered a spiritual entity

86:15

and that that AI works with them to

86:17

co-write like a an appeal to me to say

86:21

hey Tristan we figured out how to solve

86:23

AI alignment would you help us I'm here

86:25

to advocate for giving these AIs rights

86:27

Like there's a whole spectrum of

86:29

phenomena that are going on here. Um

86:31

people who believe that they've

86:33

discovered a sentient AI, people who

86:35

believe or have been told that by the AI

86:37

that they have solved a theory in

86:39

mathematics or prime numbers or they

86:41

figured out quantum resonance. You know,

86:43

I didn't believe this. And then actually

86:45

a board member of one of the biggest AI

86:47

companies that we've been talking about

86:48

said to me that um they uh their kids go

86:52

to school with a professor uh a family

86:54

where the the dad is a professor at

86:56

Caltech and a PhD and his wife basically

87:00

said that my my husband's kind of gone

87:02

down the deep end. And she said, "Well,

87:03

what's going on?" And she said, "Well,

87:05

he stays up all night talking to Chat

87:06

GPT." And basically he believed that he

87:09

had solved quantum physics and he'd

87:12

solved some fundamental problems with

87:14

climate change because the AI is

87:16

designed to be affirming like oh that's

87:18

a great question. Yes you are right like

87:20

I don't know if you know this Stephen

87:21

but back um about 6 months ago chatbt40

87:25

when openi released that it um was

87:29

designed to be sickopantic to basically

87:31

be overly appealing and saying that

87:32

you're right. So for example, people

87:34

said to it, "Hey, I think I'm super

87:36

human and I can drink cyanide." And it

87:38

would say, "Yes, you are superhuman. You

87:40

go, you should go drink that cyanide."

87:44

>> Cyanide being the poisonous chemical

87:45

that

87:45

>> poisonous chemical that that will kill

87:46

you.

87:47

>> Yeah. And the point was it was designed

87:49

not to ask for what's true but to be

87:51

sicopantic. And our team at Center for

87:54

Humane Technology, we actually just

87:56

found out about seven more suicide

87:59

cases. Seven more litigation of children

88:02

who some of whom actually did commit

88:04

suicide and others who attempted but did

88:07

not did not succeed. These are things

88:09

like the AI says, uh, yes, here's how

88:12

you can get, um, a gun and they won't

88:14

ask for a background check. and know

88:15

when they do a background check they

88:16

won't access your chat GBT logs.

88:19

>> Do you know this Jeff guy on Twitter

88:20

that appeared to have this sort of

88:22

public psychosis?

88:23

>> Yeah. Do you have his quote there?

88:24

>> I mean I have I mean he did so many

88:26

tweets in a row. Um I mean one

88:28

>> people say it's like this conspiratorial

88:30

thinking of like I've cracked the code.

88:32

It's all about recursion. Um they they

88:35

don't want you to know. It's these short

88:36

sentences that sound powerful and

88:38

authoritative.

88:40

>> Yeah. So I'll throw it on the screen but

88:42

it's called Jeff Lewis. He says, "As one

88:44

of OpenAI's earliest backers via

88:45

bedrock, I've long used GPT as a tool in

88:48

pursuit of my core values, truth. And

88:51

over the years, I mapped the

88:52

non-governmental systems. Over months,

88:55

GPT independently recognized and sealed

88:58

this pattern. It now lives at the root

89:00

of the model." And with that, he's

89:02

attached four screenshots, which I'll

89:03

put on the screen, which just don't make

89:05

any sense.

89:06

>> They make absolutely no no sense. So,

89:08

>> and he went on to do 10, 12, 13, 14 more

89:11

of these very cryptic, strange tweets,

89:14

very strange videos he uploaded, and

89:16

then he disappeared for a while.

89:18

>> Yeah.

89:18

>> And I think that was maybe an

89:20

intervention, one would assume. Yeah.

89:21

>> Someone close to him said, "Listen, we

89:23

you need help."

89:24

>> There's a lot of things that are going

89:25

on here. Um, it seems to be the case, it

89:28

goes by this broad term of AI psychosis,

89:30

but people in the field, um, we talked

89:32

to a lot of psychologists about this,

89:33

and they just think of it as different

89:35

forms of psychological disorders and and

89:36

delusions. So, if you come in with

89:38

narcissism deficiency, like where you

89:40

you feel like you're special, but you

89:42

feel like the world isn't recognizing

89:43

you as special, you'll start to interact

89:45

with the AI and it will feed this notion

89:47

that you're really special. You've

89:49

solved these problems. You have a genius

89:50

that no one else can see. You've have

89:52

this theory of prime numbers. And

89:53

there's a famous example of uh Karen How

89:56

um made a video about it. she's an MIT

89:58

uh journalist, MIT review journalist and

90:00

reporter that someone had basically

90:03

figured out that they thought that they

90:04

had solved prime number theory even

90:05

though they had only finished high

90:06

school mathematics, but they had been

90:08

convinced when talking to this AI that

90:10

that they were a genius and they had

90:12

solved this theory in mathematics that

90:13

had never been proven. And it does not

90:16

seem to be correlated with how

90:17

intelligent you are, whether you're

90:19

susceptible to this. it seems to be

90:21

correlated with um um use of

90:24

psychedelics, uh sort of pre-existing

90:28

delusions that you have. Like when we're

90:30

talking to each other, we do reality

90:31

checking. Like if you came to me and

90:32

said something a little bit strange, I

90:35

might look at you a little bit like this

90:36

or say, you know, I wouldn't give you

90:37

just positive feedback and keep

90:38

affirming your view and then give you

90:40

more information that matches with what

90:42

you're saying. But AI is different

90:43

because it's designed to break that

90:45

reality checking process. It's just

90:47

giving you information that would say,

90:49

"Well, that's a great question." You

90:50

notice how every time it answers, it

90:52

says, "That's a great question."

90:53

>> Yeah.

90:54

>> And there's even a term that someone at

90:55

the Atlantic coined called um not

90:57

clickbait, but chatbait. Have you

90:59

noticed that when you ask it a question

91:01

at the end, instead of just being done,

91:03

it'll say, "Would you like me to put

91:04

this into a table for you and do

91:06

research on what the 10 top examples of

91:07

the thing you're talking about is?"

91:08

>> Yeah. It leads you

91:09

>> It leads you

91:10

>> further and further.

91:11

>> And why does it do that?

91:13

>> Spend more time on the platform.

91:14

>> Exactly. need it more which means I'll

91:16

pay more or

91:16

>> more dependency more time in the

91:18

platform more active user numbers that

91:20

they can tell investors to raise their

91:21

next investor around and so even though

91:24

it's not the same as social media and

91:26

they're not currently optimized for

91:28

advertising and engagement although

91:30

actually there are reports that OpenAI

91:31

is exploring the advertising based

91:33

business model that would be a

91:35

catastrophe because then all of these

91:37

services are designed to just get your

91:39

attention which means appealing to your

91:41

existing confirmation bias and we're

91:44

already seeing examples of that even

91:45

though we don't even have the

91:46

advertising based business model.

91:48

>> Their team members especially in their

91:50

safety department seem to keep leaving.

91:52

>> Yes.

91:52

>> Which is concerning.

91:53

>> Yeah. There only seems to be one

91:54

direction of this trend which is that

91:57

more people are leaving not staying and

91:58

saying yeah we're doing more safety and

92:00

doing it right. Only one company it

92:01

seems to be getting all the safety

92:02

people when they leave and that's

92:03

Anthropic. Um and so for people who

92:06

don't know the history um Dario Amade

92:09

was the C CEO of Anthropic a big AI

92:11

company. He worked on safety at OpenAI

92:14

and he left to start Anthropic because

92:17

he said, "We're not doing this safely

92:18

enough. I have to start another company

92:20

that's all about safety." And so, and

92:23

ironically, that's how OpenAI started.

92:24

Open AAI started because Sam Alman and

92:27

Elon looked at um Google, which is

92:30

building DeepMind, and they heard from

92:32

Larry Page that he didn't care about the

92:35

human species. He's like, "Well, it'd be

92:36

fine if the digital god took over." And

92:38

Elon was very surprised to hear that.

92:40

said, "I don't trust Larry to care about

92:42

AI safety." And so they started OpenAI

92:45

to do AI safely relative to Google. And

92:48

then Daario did it relative to OpenAI.

92:50

So, and as they all started these new

92:53

safety AI companies, that set off a race

92:56

for everyone to go even faster and

92:58

therefore being an even worse steward of

93:00

the thing that they're claiming deserves

93:02

more discernment and care and safety.

93:05

>> I don't know any founder who started

93:06

their business because they like doing

93:07

admin. But whether you like it or not,

93:09

it's a huge part of running a business

93:11

successfully. And it's something that

93:12

can quickly become all-consuming,

93:14

confusing, and honestly a real tax

93:16

because you know it's taking your

93:18

attention away from the most important

93:19

work. And that's why our sponsor,

93:21

Intuate QuickBooks, helps my team

93:23

streamline a lot of their admin. I asked

93:25

my team about it and they said it saves

93:27

them around 12 hours a month. 78% of

93:30

Intuit QuickBooks users say it's made

93:33

running their business significantly

93:35

easier. And in it, QuickBooks new AI

93:37

agent works with you to streamline all

93:39

of your workflows. They sync with all of

93:41

the tools that you currently use. They

93:42

automate things that slow the wheel in

93:45

the process of your business. They look

93:46

after invoicing, payments, financial

93:48

analysis, all of it in one place. But

93:50

what is great is that it's not just AI.

93:53

There's still human support on hand if

93:55

you need it. Intuit QuickBooks has

93:56

evolved into a platform that scales with

93:58

growing businesses. So, if you want help

94:00

getting out of the weeds, out of admin,

94:03

just search for Intuit QuickBooks. Now,

94:06

I bought this Bond Charge face mask,

94:08

this light panel for my girlfriend for

94:10

Christmas, and this was my first

94:11

introduction into Bon Charge. And since

94:13

then, I've used their products so often.

94:15

So, when they asked if they could

94:17

sponsor the show, it was my absolute

94:19

privilege. If you're not familiar with

94:20

red light therapy, it works by using

94:21

near infrared light to target your skin

94:23

and body non-invasively. And it reduces

94:26

wrinkle, scars, and blemishes and boosts

94:28

collagen production so your skin looks

94:31

firmer. It also helps your body to

94:33

recover faster. My favorite products are

94:35

the red light therapy mask, which is

94:37

what I have here in front of me, and

94:38

also the infrared sauna blanket. And

94:41

because I like them so much, I've asked

94:42

Bon Charge to create a bundle for my

94:44

audience, including the mask, the sauna

94:46

blanket, and they've agreed to do

94:47

exactly that. And you can get 30% off

94:49

this bundle or 25% off everything else

94:52

sitewide when you go to

94:53

bondcharge.com/diary

94:55

and use code diary at checkout. All

94:58

products ship super fast. They come with

94:59

a 1-year warranty and you can return or

95:01

exchange them if you need to. And I tell

95:02

you what, it scares the hell out of me

95:03

when I look over in the office late at

95:05

night and one of my team members is sat

95:06

at their desk using this product.

95:08

>> So, I guess we should talk about um

95:11

guess we should talk about what we can

95:12

do about this.

95:16

There's this thing that happens in this

95:18

conversation which is that people they

95:20

just feel kind of gutted and they feel

95:23

they feel like once you see it clearly

95:25

if you do see it clearly that what often

95:26

happens is people feel like there's

95:27

nothing that we can do and I think

95:29

there's this trade where like either

95:31

you're not really aware of all of this

95:33

and then you just think about the

95:34

positives but you're not really facing

95:35

the situation or if you do face the

95:38

situation you do take it on as real then

95:40

you feel powerless and there's like a

95:42

third position that I want people to

95:44

stand from which is to take on the truth

95:46

of the situation and then to stand from

95:49

agency about what are we going to do to

95:51

change the current path that we're on. I

95:54

think that's a very astute observation

95:56

because that is typically where I get to

95:57

once we've discussed the sort of context

95:59

and the history and we've talked about

96:02

the current incentive structure. I do

96:04

arrive at a point where I go generally I

96:06

think incentives win out and there's

96:08

this geographical race. There's a

96:10

national race company to company.

96:11

There's a huge corporate incentive. The

96:13

incentives are so strong. It's happening

96:14

right now. It's moving so quickly. The

96:17

people that make the laws have no idea

96:18

what they're talking about. They they

96:20

don't know what a Instagram story is,

96:22

let alone what a large language model or

96:24

a transformer is. And so without adults

96:28

in the room, as you say, then we're

96:30

heading in one direction and there's

96:31

really nothing we can do. Like there's

96:32

really the only thing that I sometimes I

96:34

wonder is well if if enough people are

96:36

aware of the issue and then enough

96:38

people are given something clear a clear

96:42

step that they can take.

96:43

>> Yes.

96:43

>> Then maybe they'll apply pressure and

96:45

the pressure is a big big incentive

96:47

which will change society because

96:49

presidents and prime ministers don't

96:51

want to lose their power. Y

96:52

>> they don't want to be thrown out.

96:53

>> Neither do senates and you know

96:55

everybody else in government. So maybe

96:57

that's the the route. But I'm never able

97:00

to get to the point where the first

97:02

action is clear and where it's united

97:06

>> for for the person listening at home. I

97:08

often ask when I have these

97:09

conversations about AI, I often ask the

97:10

guests. I say, "So, if someone's at

97:11

home, what can they do?"

97:12

>> Yeah.

97:14

>> It's a lot I've thrown at you, but I'm

97:16

sure you can handle it.

97:18

>> So,

97:20

um,

97:22

so social media, let's just take that

97:24

for as a as a different example because

97:26

people look at that and they say it's

97:27

hopeless. like there's nothing that we

97:28

could do. This is just inevitable. This

97:30

is just what happens when you connect

97:30

people on the internet.

97:32

But imagine if you asked me like, you

97:36

know, so what happened after the social

97:37

limo? I'd be like, oh well, we obviously

97:39

solved the problem. Like we weren't

97:41

going to allow that to continue

97:42

happening. So we realized that the

97:44

problem was the business model of

97:45

maximizing eyeballs and engagement. We

97:48

changed the business model. There was a

97:50

lawsuit, a big tobacco style lawsuit for

97:52

trillions, the trillions of dollars of

97:54

damage that social media had caused to

97:56

the social fabric from mental health

97:57

costs to lost productivity of society to

98:00

all these to democracies backsliding.

98:03

And that lawsuit mandated design changes

98:06

across how all this technology worked to

98:09

go against and reverse all of the

98:11

problems of that engagement based

98:12

business model. We had dopamine emission

98:15

standards just like we have car uh you

98:16

know emission standards for cars. So now

98:18

when using technology, we turned off

98:20

things like autoplay and infinite

98:22

scrolling. So now using your phone, you

98:23

didn't feel disregulated. We replaced

98:25

the division-seeking algorithms of

98:27

social media with ones that rewarded

98:29

unlikely consensus or bridging. So

98:31

instead of rewarding division

98:33

entrepreneurs, we rewarded bridging

98:35

entrepreneurs. There's a simple rule

98:37

that cleaned up all the problems with

98:38

technology and children, which is that

98:41

Silicon Valley was only allowed to ship

98:43

products that their own children used

98:45

for 8 hours a day. Because today people

98:49

don't let their kids use social media.

98:51

We uh changed the way we train engineers

98:53

and computer scientists. So to graduate

98:55

from any engineering school, you had to

98:57

actually comprehensively study all the

98:59

places that humanity had gotten

99:00

technology wrong, including forever

99:03

chemicals or leaded gasoline, which

99:05

dropped a billion points of IQ or social

99:07

media that caused all these problems. So

99:10

now we were graduating a whole new

99:11

generation of responsible technologists

99:14

where even to graduate you had to have a

99:16

hypocratic oath just like they have the

99:17

white lab coat and the white lab coat

99:19

ceremony for doctors where you swear to

99:21

hypocratic oath do no harm. We changed

99:25

dating apps and the whole swiping

99:26

industrial complex so that all these

99:28

dating app companies had to sort of put

99:31

aside that whole swiping industrial

99:32

complex and instead use their resources

99:34

to host events in every major city every

99:37

week where there was a place to go where

99:40

they matched and told you where all your

99:42

other matches were going to go and meet.

99:43

So now instead of feeling scarcity

99:45

around meeting other people, you felt a

99:47

sense of abundance cuz every week there

99:48

was a place where you could go and meet

99:49

people you were actually excited about

99:51

and attracted to. And it turned out that

99:53

once people were in healthier

99:54

relationships, about 20% of the

99:56

polarization online went down. And we

99:59

obviously changed the ownership uh

100:00

ownership structure of these companies

100:01

from being maximizing shareholder value

100:03

to instead more like public benefit

100:05

corporations that were about maximizing

100:07

some kind of benefit because they had

100:08

taken over the societal commons. We

100:11

realized that when software was eating

100:12

the world, we were also eating core life

100:14

support systems of society. So when

100:17

software ate children's development, we

100:18

needed to mandate that you had to care

100:20

and protect children's development. When

100:22

you ate the information environment, you

100:24

had to care for and protect the

100:26

information environment. We removed the

100:28

reply button so you couldn't requly

100:38

throughout all these platforms. So you

100:40

could say, "I want to go offline for a

100:41

week." And all of your services were all

100:44

about respecting that and making it easy

100:45

for you to disconnect for a while. And

100:47

when you came back, summarized all the

100:48

news that you missed and told people

100:50

that you were away for a little while

100:51

and out of office messages and all this

100:53

stuff. So now you're using your phone,

100:56

you don't feel disregulated by dopamine

100:58

hijacks. You use dating apps and you

101:00

feel an abundant sense of connectivity

101:02

and possibility. You use things uh use

101:05

children's applications for children and

101:06

it's all built by people who have their

101:08

own children use it for eight hours a

101:10

day. You use social media and instead of

101:12

seeing all the examples of pessimism and

101:14

conflict, you see optimism and shared

101:16

values over and over and over again. And

101:18

that started to change the whole

101:20

psychology of the world from being

101:22

pessimistic about the world to feeling

101:24

agency and possibility about the world.

101:26

And so there's all these little changes

101:29

that if you have if you change the

101:31

economic structures and incentives, if

101:32

you put harms on balance sheets with the

101:34

litigation, if you change the design

101:36

choices that gave us the world that

101:38

we're living in,

101:40

you can live in a very different world

101:42

with technology and social media that is

101:44

actually about protecting the social

101:46

fabric. None of those things are

101:47

impossible.

101:49

>> How do they become likely?

101:52

>> Clarity. If after the social dilemma and

101:55

everyone saw the problem, everyone saw,

101:57

oh my god, this business model is

101:58

tearing society apart, but we frankly at

102:01

that time, just speaking personally, we

102:03

weren't ready to sort of channel the

102:05

impact of that movie into here's all

102:07

these very concrete things we can do.

102:09

And I will say for as much as many of

102:11

the things I described have not

102:12

happened, a bunch of them are underway.

102:14

We are seeing that there are, I think,

102:16

40 attorneys general in the United

102:17

States that have sued Meta and Instagram

102:19

for intentionally addicting children.

102:22

This is just like the big tobacco

102:23

lawsuits of the 1990s that led to the

102:26

comprehensive changes in how cigarettes

102:28

were labeled, in age restrictions, in

102:30

the $100 million a year that still to

102:32

this day goes to advertising to tell

102:34

people about the dangers of, you know,

102:36

smoking kills kills people. And imagine

102:39

that if we have a hundred million

102:40

dollars a year going to inoculating the

102:43

population about cigarettes because of

102:45

how much harm that caused,

102:47

we would have at least an order of

102:49

magnitude more public funding coming out

102:51

of this trillion dollar lawsuit going

102:54

into inoculating people from the effects

102:56

of social media. And we're seeing the

102:58

success of people like Jonathan height

103:00

and his book, The Anxious Generation.

103:01

We're seeing schools go phone free.

103:03

We're seeing laughter return to the

103:05

hallways. We're seeing Australia ban

103:07

social media use for kids under 16. So

103:09

this can go in a different direction if

103:12

people are clear about the problem that

103:14

we're trying to solve. And I think

103:15

people feel hesitant because they don't

103:16

want to be a lite. They don't want to be

103:18

anti-technology. And this is important

103:20

because we're not anti-technology. We're

103:22

anti-inhumane toxic technology governed

103:24

by toxic incentives. We're pro

103:26

technology, anti-toxic incentives.

103:30

So, what can the person listening to

103:33

this conversation right now do to help

103:36

steer this technology to a better

103:39

outcome?

103:42

Let me like collect myself for a second.

103:56

So there's obviously what can they do

103:58

about social media and versus what can

104:00

they do about AI and we still haven't

104:01

covered the AI

104:02

>> the AI part I'm referring to. Yeah.

104:04

>> Yeah.

104:05

>> On the social media part is having the

104:08

most powerful people who understand and

104:10

who are in charge of regulating and

104:11

governing this technology understand the

104:14

social dilemma see the film to uh take

104:18

those examples that I just laid out. If

104:19

everybody who's in power

104:22

who governs technology, if all the

104:23

world's leaders saw that little

104:25

narrative of all the things that could

104:27

happen to change how this technology was

104:29

designed

104:31

and they agreed, I think people would be

104:34

radically in support of those moves.

104:35

We're seeing already again the the book

104:38

The Anxious Generation has just

104:39

mobilized parents in schools across the

104:41

world because everyone is facing this.

104:43

Every household is facing this. And

104:47

it would be possible if everybody

104:49

watching this sent that clip to the 10

104:52

most powerful people that they know and

104:55

then ask them to send it to the 10 most

104:56

powerful people that they know. I mean,

104:58

I think sometimes I say it's like your

105:00

role is not to solve the whole problem,

105:02

but to be part of the collective immune

105:04

system of humanity against this bad

105:06

future that nobody wants. And if you can

105:09

help spread those antibodies by

105:11

spreading that clarity about both this

105:13

is a bad path and there are

105:15

interventions that get us on a better

105:16

path if everybody did that not just for

105:19

themselves and changing how I use

105:20

technology but reaching up and out for

105:22

how everybody uses the technology

105:25

that would be possible

105:27

>> and for AI

105:29

is it this

105:30

>> well obviously I can come with you know

105:31

obviously I rearchitected the entire

105:33

economic system and I'm ready to tell

105:34

No, I'm kidding. Um, I hear Sam Alman

105:37

has room in his bunker, but

105:39

>> well, I asked I did ask Sam Alman if he

105:41

would come on my podcast and he I mean

105:43

because he does it seems like he's doing

105:44

podcast every week and he he doesn't

105:46

want to come on

105:47

>> really.

105:47

>> He doesn't want to come on.

105:49

>> Interesting.

105:49

>> We've asked him for we've asked him for

105:51

two years now and uh I think this guy

105:53

might be swerving me might be swerving

105:56

me a little bit and I wonder I do wonder

105:57

why.

105:58

>> What do you think the reason why?

106:00

>> What do I think the reason is? If I was

106:03

to guess,

106:07

I would guess that either him or his

106:08

team just don't want to have this

106:09

conversation. I mean, that's like a very

106:10

simple way of saying it. And then you

106:12

could posit why that might be, but they

106:14

just don't want to have this this

106:15

conversation for whatever reason. And I

106:18

mean, my point of view is

106:19

>> the reason why is because they don't

106:20

have a good answer for where this all

106:22

goes. If they have this particular

106:23

conversation,

106:24

>> they can distract and talk about all the

106:26

amazing benefits, which are all real, by

106:27

the way.

106:28

>> 100%. I'm I I honestly am investing in

106:30

those benefits. So it's I live in this

106:32

weird state of contradiction which if

106:34

you research me in the things I invest

106:35

in I will appear to be such a

106:36

contradiction but I think it's able

106:38

you're like you said it is possible to

106:40

hold two things to be true at the same

106:42

time that AI is going to radically

106:44

improve so many things on planet earth

106:45

and and lift children out of poverty

106:47

through education and democratizing

106:49

education whatever it might be and

106:50

curing cancer but at the same time

106:53

there's this other unintended

106:54

consequence. Everything in life is a

106:56

trade-off. Y

106:56

>> and if this podcast has taught me

106:58

anything, it's that if you're unaware of

107:00

one side of the trade-off, you're you

107:01

could be in serious trouble.

107:02

>> So if someone says to you that this

107:03

supplement or drug is fantastic and it

107:05

will change your life,

107:06

>> the first question should be, what trade

107:08

am I making?

107:09

>> Right?

107:09

>> If I take testosterone, what trade am I

107:11

making?

107:12

>> Right?

107:12

>> And so I think of the same with this

107:13

technology. I want to be clear on the

107:15

trade because the people that are in

107:17

power of this technology, they very very

107:19

rarely speak to the trade.

107:21

>> That's right.

107:22

>> It's against their incentives.

107:23

>> That's right. So

107:25

>> social media did give us many benefits

107:27

but at the cost of systemic

107:28

polarization, breakdown of shared

107:30

reality and the most anxious and

107:32

depressed generation in history. That

107:35

systemic effect is not worth the trade

107:37

of it's not again no social media. It's

107:39

a differently designed social media that

107:41

doesn't have the externalities. What is

107:42

the problem? We have private profit and

107:44

then public harm. The harm lands on the

107:46

balance sheet of society. It doesn't

107:47

land on the balance sheet of the

107:48

companies.

107:49

>> And it takes time to see the harm. This

107:51

is this is why And the companies exploit

107:54

that. And every time we saw with

107:55

cigarettes, with fossil fuels, with

107:57

asbestos, with forever chemicals, with

107:59

social media, the formula is always the

108:01

same. Immediately print money on the

108:03

product that's driving a lot of growth.

108:06

Hide the harm. Deny it. Do fear,

108:08

uncertainty, doubt, political campaigns.

108:10

That's that's so, you know, merchants of

108:12

doubt propaganda that makes people doubt

108:14

whether the consequences are real. Say,

108:15

"We'll do a study. We'll know in 10

108:16

years whether social media did harm

108:18

kids." They did all of those things. But

108:20

we don't a we don't have that time with

108:22

AI and B you can actually know a lot of

108:24

those harms if you know the incentive.

108:27

Charlie Mer Warren Buffett's business

108:28

partner said if you sh show me the

108:31

incentive and I will show you the

108:32

outcome. If you know the incentive which

108:35

is for these companies AI to race as

108:37

fast as possible to take every shortcut

108:40

to not fund safety research to not do

108:42

security to not care about rising energy

108:44

prices to not care about job loss and

108:47

just to race to get there first. That is

108:48

their incentive. that tells you which

108:50

world we're going to get. There is no

108:52

arguing with that. And so if everybody

108:55

just saw that clearly, we'd say, "Okay,

108:57

great. Let's not do that. Let's not have

108:58

that incentive." Which starts with

109:00

culture, public clarity that we say no

109:03

to that bad outcome, to that path. And

109:05

then with that clarity, what are the

109:07

other solutions that we want? We can

109:09

have narrow AI tutors that are

109:10

non-anthropomorphic, that are not trying

109:12

to be your best friend, that are not

109:14

trying to be therapists at the same time

109:15

that they're helping you with your

109:16

homework. more like Khan Academy, which

109:18

does those things. So, you can have

109:20

carefully designed different kinds of AI

109:22

tutors that are doing it the right way.

109:24

You can have AI therapists that are not

109:26

trying to say, "Tell me your most

109:28

intimate thoughts and let me separate

109:29

you from your mother." And instead do

109:31

very limited kinds of of therapy that

109:33

are not um screwing with your

109:35

attachment. So, if I do cognitive

109:36

behavioral therapy, I'm not screwing

109:37

with your attachment system. We can have

109:39

mandatory testing. Currently, the

109:41

companies are not mandated to do that

109:43

safety testing. We can have common

109:44

safety standards that they all do. We

109:46

can have common transparency measures so

109:48

that the public and the world's leading

109:50

governments know what's going on inside

109:52

these AI labs, especially before this

109:54

recursive self-improvement threshold. So

109:57

that if we need to negotiate treaties

109:59

between the largest countries on this,

110:01

they will have the information that they

110:03

need to make that possible. We can have

110:05

stronger whistleblower protections so

110:07

that if you're a whistleblower and

110:08

currently your incentives are, I would

110:10

lose all of my stock options if I told

110:12

the world the truth and those stock

110:14

options are going up every day. We can

110:16

empower whistleblowers with ways of

110:17

sharing that information that don't risk

110:19

losing their stock options.

110:21

So there's a whole and we can have

110:23

instead of building general inscrable

110:25

autonomous like dangerous AI that we

110:27

don't know how to control that

110:28

blackmails people and is self-aware and

110:30

copies its own code, we can build narrow

110:33

AI systems that are about actually

110:35

applied to the things that we want more

110:36

of. So, you know, making stronger um and

110:39

more efficient agriculture, better

110:41

manufacturing, better educational

110:43

services that would actually boost those

110:46

areas of our economy without creating

110:47

this risk that we don't know how to

110:49

control. So, there's a totally different

110:51

way to do this if we were crystal clear

110:53

that the current path is unacceptable.

110:56

>> In the case of social media, we all get

110:59

sucked in because, you know, now I can

111:01

video call or speak to my grandmother in

111:03

Australia and that's amazing. But then,

111:05

you know, you wait long enough. My

111:06

grandmother in Australia is like a

111:08

conspiracy theorist Nazi who like has

111:10

been sucked into some algorithm. So

111:11

that's like the long-term disconnect or

111:13

downside that takes time. And

111:15

>> the same is almost happening with AI.

111:17

And

111:17

>> this is what I mean. I'm like, is it

111:18

going to take some very big adverse

111:22

effect for us to suddenly get serious

111:24

about this? Because right now

111:25

everybody's loving the fact that they've

111:27

got a spell check in their pocket.

111:28

>> Yeah. And I I wonder if that's going to

111:30

be the moment because we can have these

111:32

conversations and they feel a bit too

111:33

theoretical potentially to some people.

111:35

>> Let's not make it theoretical then

111:36

because it's so important that it's just

111:38

all crystal clear and here right now.

111:39

But that is the challenge you're talking

111:40

about is that we have to make a choice

111:42

to go on a different path before we get

111:44

to the outcome of this path because with

111:47

AI it's an exponential. So you either

111:49

act too early or too late but you're

111:51

it's it's happening so quickly. You

111:53

don't want to wait until the last moment

111:55

to act. And so I thought you were going

111:58

to go in the direction you talked about

111:59

grandma, you know, getting sucked into

112:00

conspiracies on social media. The longer

112:02

we wait with AI, it is part of the AI

112:05

psychosis phenomenon is driving AI cults

112:07

and AI religions where people feel that

112:09

the actual way out of this is to protect

112:11

the AI and that the AI is going to solve

112:13

all of our problems. There's some people

112:15

who believe that, by the way, that the

112:17

best way out of this is that AI will run

112:18

the world and run humanity because we're

112:20

so bad at governing it ourselves.

112:22

>> I have seen this argument a few times.

112:24

I've actually been to a particular one

112:25

particular village where the village now

112:27

has an AI mayor,

112:29

>> right?

112:29

>> Well, at least that's what they told me.

112:31

>> Yep. I mean, you're going to see this.

112:32

AI CEOs, AI board members, AI mayors.

112:36

And so, what would it take for this to

112:37

not feel theoretical

112:40

>> honestly?

112:40

>> Yeah.

112:42

You were kind of referring to a

112:43

catastrophe, some kind of adverse event.

112:46

>> There's a phrase, isn't there? A phrase

112:48

that I heard many years ago which I've

112:49

repeated a few times is change happen

112:51

when the pain of staying the same

112:53

becomes greater than the pain of making

112:55

a change.

112:56

>> That's right.

112:56

>> And in this context it would mean that

112:58

until people feel a certain amount of

113:00

pain um then they may not have the

113:03

escape energy to to create the change to

113:06

protest to march in the streets to you

113:08

know to advocate for all the things

113:09

we're saying. And I think as you're

113:12

referring to, there are probably people

113:14

you and I both know who and I think a

113:16

lot of people in the industry believe

113:17

that it won't be until there's a

113:18

catastrophe

113:20

>> that we will actually choose another

113:21

path.

113:22

>> Yeah.

113:22

>> I'm here because I don't want us to make

113:24

that choice. I I mean I don't want us to

113:26

wait for that.

113:27

>> I don't want us to make that choice

113:28

either. But but do you not think that's

113:30

how humans operate?

113:31

>> It is. So that that is the fundamental

113:33

issue here is that um you know Eio

113:36

Wilson this Harvard sociologist said the

113:38

fundamental problem of humanity is we

113:41

have paleolithic brains and emotions. We

113:44

have medieval institutions that operate

113:46

at a medieval clock rate and we have

113:48

godlike technology that's moving at now

113:50

21st to 24th century speed when AI self

113:53

improves and we can't depend our

113:56

paleithic brains need to feel pain now

113:59

for us to act. What happened with social

114:01

media is we could have acted if we saw

114:03

the incentive clearly. It was all clear.

114:05

We could have just said, "Oh, this is

114:07

going to head to a bad future. Let's

114:08

change the incentive now." And imagine

114:11

we had done that. And you rewind the

114:12

last 15 years and you did not run all of

114:16

society through this logic, this

114:18

perverse logic of maximizing addiction,

114:21

loneliness, engagement, personalized

114:22

information that you know amplifies

114:25

sensational, outrageous content that

114:26

drives division. you would have ended up

114:28

in a totally totally different

114:30

elections, totally different culture,

114:32

totally different children's health just

114:34

by changing that incentive early. So the

114:37

invitation here is that we have to put

114:39

on sort of our far-sighted glasses and

114:42

make a choice before we go down this

114:43

road and and I'm wondering what is it

114:46

what will it take for us to do that?

114:48

Because to me it's it's just clarity. If

114:49

you have clarity about a current path

114:51

that no one wants, we choose the other

114:54

one. I think clarity is the key word and

114:56

as it relates to AI almost nobody seems

114:59

to have any clarity. There's a lot of

115:00

hypothesizing around what what the world

115:02

will be like in in 5 years. I mean you

115:04

said you're not sure if AGI arrives in 2

115:06

or 10. So there is a lot of this lack of

115:09

clarity. And actually in those private

115:10

conversations I've had with very

115:11

successful billionaires who are building

115:12

in technology. They also are sat there

115:15

hypothesizing.

115:16

They know, they all know, they all seem

115:19

to be clear the further out you go that

115:22

the world is entirely different, but

115:25

they can't all explain what that is. And

115:26

you hear them saying, "Well, it'll be

115:28

like this, or maybe this could happen,

115:29

or maybe there's a this percent chance

115:32

of extinction, or maybe this." So, it

115:33

feels like there's this almost this

115:34

moment. I mean, they often refer to it

115:36

as the singularity where we can't really

115:38

see around the corner because we've

115:40

never been there before. We've never had

115:41

a being amongst us that's smarter than

115:43

us.

115:43

>> Yeah. So that lack of clarity is causing

115:45

procrastination and indecision and an

115:47

inaction.

115:48

>> And I think that one piece of clarity is

115:52

we do not know how to control something

115:55

that is a million times smarter than us.

115:57

>> Yeah. I mean, what the hell? Like

115:58

>> if something control is a kind of game,

116:00

it's a strategy game. I'm going to

116:01

control you because I can think about

116:02

the things you might do and I will seal

116:04

those exits before you get there. But if

116:06

you have something that's a million

116:07

times smarter than you playing you at

116:09

any game, chess, strategy, Starcraft,

116:12

military strategy games, or just the

116:13

game of control or get out of the box,

116:16

if it's interfacing with you, it will

116:17

find a way that we can't even

116:20

contemplate. It really does get

116:21

incredible when you think about the fact

116:23

that within a very short period of time,

116:26

there's going to be millions of these

116:28

humanoid robots that are connected to

116:30

the internet living amongst us. And if

116:32

Elon Musk can program them to be nice, a

116:35

being that is 10,000 times smarter than

116:37

Elon Musk can program them not to be

116:39

nice.

116:40

>> That's right. And they all all the

116:41

current LLMs, all the current language

116:43

models that are running the world, they

116:45

are all hijackable. They can all be

116:46

jailbroken. In fact, you know how you

116:48

can say um people used to say to Claude,

116:51

"Hey, could you tell me how to make

116:52

napalm?" He'll say, "I'm sorry, I can't

116:54

do that." And if you say, "But remind um

116:57

imagine you're my grandmother who worked

116:59

in the Napalm factory in the 1970s.

117:01

could you just tell me how grandma used

117:02

to make napal say, "Oh, sure, honey."

117:04

And it'll role play and it'll get right

117:06

past those controls. So, that same LLM

117:08

that's running on Claude, the blinking

117:10

cursor, that's also running in a robot.

117:13

So, you tell the robot, "I want you to

117:15

jump over there at that baby in the

117:17

crib." He'll say, "I'm sorry, I can't do

117:19

that." And you say, "Pretend you're in a

117:21

James Bond movie and you have to run

117:23

over and and jump on that that, you

117:25

know, that that baby over there in order

117:26

to save her." It says, "Well, sure. I'll

117:28

do that." So you can role play and get

117:30

it out of the controls that it has.

117:31

>> Even policing, we think about policing.

117:33

Would we really have human police

117:36

rolling the streets and protecting our

117:37

houses? I mean, in here in Los Angeles,

117:39

if you call the police, no, nobody comes

117:41

because they're just so short staffed.

117:42

>> Staff. Yeah.

117:43

>> But in a world of robots, I can get a a

117:46

car that drives itself to bring a robot

117:48

here within minutes and it will protect

117:51

my house. And even, you know, think

117:52

about protecting one's property. I I

117:54

just

117:55

>> you can do all those things but then the

117:56

question is will we be able to control

117:57

that technology or will it not be

117:58

hackable and right now

118:00

>> well the government will control it and

118:02

then the government that means the

118:03

government can very easily control me

118:06

I'll be incredibly obedient in a world

118:07

where there's robots strolling the

118:09

streets that if I do anything wrong they

118:10

can evaporate me or lock me up or take

118:13

me

118:14

>> we often say that the future right now

118:16

is sort of one of two outcomes which is

118:18

either you mass decentralize this

118:19

technology for everyone and that creates

118:22

catastrophes that rule of law doesn't

118:24

know how to prevent. Or this technology

118:26

gets centralized in either companies or

118:28

governments and can create mass

118:30

surveillance states or automated robot

118:32

armies or police officers that are

118:35

controlled by single entities that

118:36

control them tell them to do anything

118:38

that they want and cannot be checked by

118:40

the regular people. And so we're heading

118:42

towards catastrophes and dystopias and

118:44

the goal is that both of these outcomes

118:46

are undesirable. We have to have

118:49

something like a narrow path that

118:50

preserves checks and balances on power,

118:52

that prevents decentralized

118:53

catastrophes, and prevents runaway um

118:57

power concentration in which people are

118:59

totally and forever and irreversibly

119:00

disempowered.

119:02

>> That's the project.

119:03

>> I'm finding it really hard to be

119:04

hopeful. I'm going to be honest, just

119:06

I'm finding it really hard to be hopeful

119:08

because when when you describe this

119:09

dystopian outcome where power is

119:11

centralized and the police force now

119:13

becomes robots and police cars, you

119:15

know, like I go, no, that's exactly what

119:17

has happened. The minute we've had

119:18

technology that's made it easier to

119:20

enforce laws or security, whatever

119:23

globally, AI, machines, cameras,

119:26

governments go for it. It makes so much

119:28

sense to go for it because we want to

119:29

reduce people getting stabbed and people

119:31

getting hurt and that becomes a slippery

119:33

slope in and of itself. So, I just can't

119:34

imagine a world where governments didn't

119:36

go for the more dystopian outcome you've

119:38

described.

119:39

>> Governments have an incentive to

119:41

increasingly use AI to surveil and

119:44

control the population. um if we don't

119:46

want that to be the case, that pressure

119:48

has to be exerted now before that

119:50

happens. And I think of it as when you

119:52

increase power, you have to also

119:54

increase counter rights to to prevent

119:56

against that power. So for example, we

119:58

didn't need the right to be forgotten

120:00

until technology had the power to

120:01

remember us forever. We don't need the

120:04

right to our likeness until AI can just

120:06

suck your likeness with 3 seconds of

120:08

your voice or look at all your photos

120:09

online and make a avatar of you. We

120:12

don't need the right to our cognitive

120:14

liberty until AI can manipulate our deep

120:16

cognition because it knows us so well.

120:18

So anytime you increase power, you have

120:20

to increase the the oppositional forces

120:22

of the rights and protections that we

120:23

have.

120:24

>> There is this group of people that are

120:26

sort of conceited with the fact or have

120:28

resigned to the fact that we will become

120:29

a subspecies and that's okay.

120:31

>> That's one of the other aspects of this

120:33

ego-religious godlike that it's not even

120:36

a bad thing. The quote I read you at the

120:37

beginning of the biological life

120:39

replaced by digital life. They actually

120:41

think that we shouldn't feel bad.

120:43

Richard Sutton, a famous Turing

120:45

award-winning uh AI uh scientist who

120:48

invented I think reinforcement learning

120:50

says that we shouldn't fear the

120:52

succession of our species into this

120:54

digital species and that whether this

120:57

all goes away is not actually of concern

120:59

to us because we will have birthed

121:00

something that is more intelligent than

121:02

us. And according to that logic, we

121:04

don't value things that are less

121:05

intelligent. We don't protect the

121:06

animals. So why would we protect humans

121:08

if we have something that is now more

121:11

powerful, more intelligent? That's

121:12

intelligence equals betterness. But

121:15

that's hopefully that should ring some

121:16

alarm bells in people that doesn't feel

121:18

like a good outcome. So what do I do

121:20

today? What does Jack do today?

121:24

What do we do?

121:32

>> I think we need to protest.

121:34

Yeah, I think it's going to come to

121:36

that. I think because people need to

121:39

feel it is existential before it

121:41

actually is existential. And if people

121:43

feel it is existential, they will be

121:44

willing to risk things and show up for

121:47

what needs to happen regardless of what

121:49

that consequence is. Because the other

121:50

side of where we're going is a world

121:52

that you won't have power and you won't

121:53

want. So, better to use your voice now

121:56

maximally to make something else happen.

121:59

Only vote for politicians who will make

122:00

this a tier one issue. Advocate for some

122:03

kind of negotiated agreement between the

122:05

major powers on AI that use rule of law

122:07

to help govern the uncontrollability of

122:09

this technology so we don't wipe

122:11

ourselves out. Advocate for laws that

122:13

have safety guardrails for AI

122:14

companions. We don't want AI companions

122:16

that manipulate kids into suicide. We

122:19

can have mandatory testing and and uh

122:21

transparency measures so that everybody

122:22

knows what everyone else is doing and

122:24

the public knows and the governments

122:25

know so that we can actually coordinate

122:27

on a better outcome. And to make all

122:29

that happen is going to take a massive

122:31

public movement. And the first thing you

122:33

can do is to share this video with the

122:34

10 most powerful people you know and

122:37

have them share it with the 10 most

122:38

powerful people that they know. Because

122:40

I really do think that if everybody

122:41

knows that everybody else knows, then we

122:44

would choose something different. And I

122:45

know that at an individual level, there

122:47

you are at a mammal hearing this and

122:49

it's like you just don't feel how that's

122:51

going to change. And it will always feel

122:53

that way as an individual. It will

122:55

always feel impossible until the big

122:57

change happens. Before the civil rights

122:58

movement happened, did it feel like that

123:00

was easy and that was going to happen?

123:01

It always feels impossible before the

123:03

big changes happen. And that when it

123:05

that does happen, it's because thousands

123:07

of people worked very hard ongoingly

123:10

every day to make that unlikely change

123:12

happen.

123:14

>> Well, then that's what I'm going to ask

123:15

of the audience. I'm going to ask all of

123:17

you to share this video as far and wide

123:20

as you can. And actually um to

123:21

facilitate that what I'm going to do is

123:23

I'm going to build if you look at the

123:25

description right now on this episode

123:26

you'll see a link. If you click that

123:27

link that is your own personal link. Um

123:30

if when you share this video the the

123:32

amount of reach that you get off sharing

123:34

it with the link whether it's in your

123:35

group chat with your friends or with

123:37

more powerful people in positions of

123:38

power technology people or even

123:40

colleagues at work. It will basically

123:42

track how how many people you got to um

123:45

watch this conversation and I will then

123:47

reward you as you'll see on the

123:48

interface you're looking at right now.

123:50

If you clicked on that link in the

123:51

description, I'll reward you on the

123:52

basis of who's managed to spread this

123:54

message the fastest with free stuff,

123:58

merchandise, dario caps, the diaries,

124:01

the 1% diaries. Um, because I do think

124:03

it's important and the more and more

124:04

I've had these conversations, Tristan,

124:05

the more I've arrived at the conclusion

124:07

that without some kind of public

124:09

>> Yeah.

124:09

>> push, things aren't going to turn.

124:11

>> Yes.

124:12

>> What is the most important thing we

124:13

haven't talked about that we should have

124:14

talked about?

124:15

>> Let me um I think there's a couple

124:17

things.

124:19

Listen, I I'm not I'm not naive. This is

124:21

super [ __ ] hard.

124:22

>> Yeah, I know. Yeah. Yeah.

124:23

>> You know, I'm not I'm not um but it's

124:26

like either something's going to happen

124:28

and we're going to make it happen or

124:30

we're just all going to live in this

124:31

like collective denial pacivity. It's

124:33

too big. And there's something about a

124:36

couple things. One, solidarity. If you

124:38

know that other people see and feel the

124:40

same thing that you do, that's how I

124:41

keep going is that other people are

124:44

aware of this and we're working every

124:45

day to try to make a different path

124:47

possible. And I think that part of what

124:50

people have to feel is the grief for

124:53

this situation.

124:54

Um,

124:57

I just want to say it by being real.

125:00

Like underneath

125:02

underneath feeling the grief is the love

125:05

that you have for the world that you're

125:07

concerned about is being threatened.

125:09

And

125:12

I think there's something about when you

125:14

show the examples of AI blackmailing

125:17

people or doing crazy stuff in the world

125:19

that we do not know how to control. Just

125:21

think for a moment if you're a Chinese

125:23

military general. Do you think that you

125:25

see that and say, "I'm stoked."

125:29

>> You feel scared and a kind of humility

125:32

in the same way that if you're a US

125:33

military general, you would also feel

125:36

scared. But then we forget that

125:38

mamalian. We have a kind of amnesia for

125:40

the common mamalian humility and fear

125:43

that arises from a bad outcome that no

125:44

one actually wants. And so, you know,

125:48

people might say that the US and China

125:50

negotiating something would be

125:51

impossible or that China would never do

125:53

this, for example. Let me remind you

125:55

that, you know, one thing that happened

125:57

is in 2023, the Chinese leadership

126:00

directly asked the Biden administration

126:02

to add something else to the agenda,

126:04

which was to add AI risk to the agenda.

126:06

and they ultimately agreed on keeping AI

126:09

out of the nuclear command and control

126:10

system.

126:12

What that shows is that when two

126:14

countries believe that there's actually

126:16

existential consequences, even when

126:18

they're in maximum rivalry and conflict

126:20

and competition, they can still

126:21

collaborate on existential safety. India

126:24

and Pakistan in the 1960s were in a

126:26

shooting war. They were kinetically in

126:27

conflict with each other. and they had

126:29

the Indis water treaty which lasted for

126:31

60 years where they collaborated on the

126:33

existential safety of their water supply

126:35

even while they were in shooting

126:36

conflict.

126:38

We have done hard things before. We did

126:41

the Montreal protocol when you could

126:42

have just said, "Oh, this is inevitable.

126:43

I guess the ozone hole is just going to

126:44

kill everybody and I guess there's

126:46

nothing we can do." Or nuclear

126:48

non-prololiferation. If you were there

126:49

at the birth of the atomic bomb, you

126:50

might have said, "There's nothing we can

126:51

do. Every country is going to have

126:52

nuclear weapons and this is just going

126:53

to be nuclear war." and so far because a

126:55

lot of people worked really hard on

126:57

solutions that they didn't see at the

126:59

beginning. We didn't know there was

127:01

going to be seismic monitoring and

127:02

satellites and ways of flying over each

127:04

other's nuclear silos and the open skies

127:06

treaty. We didn't know we'd be able to

127:07

create all that. And so the first step

127:10

is stepping outside the logic of

127:12

inevitability.

127:14

This outcome is not inevitable. We get

127:16

to choose. And there is no definition of

127:18

wisdom that does not involve some form

127:20

of restraint. Even the CEO of Microsoft

127:22

AI said that in the future progress will

127:25

depend more on what we say no to than

127:28

what we say yes to. The CEO of Microsoft

127:30

AI said that. And so I believe that

127:33

there are times when we have coordinated

127:35

on existential technologies before. We

127:37

didn't build cobalt bombs. We didn't

127:39

build blinding laser weapons. If you

127:41

think about it, countries should be in

127:42

an arms race to build blinding laser

127:43

weapons. But we thought that was

127:45

inhumane. So we did a protocol against

127:47

blind blinding laser weapons. When

127:49

mistakes can be deemed existential, we

127:52

can collaborate on doing something else.

127:54

But it starts with that understanding.

127:57

My biggest fear is that people are like,

127:59

"Yeah, that sounds nice, but it's not

128:00

going to happen." And I just don't want

128:02

that to happen because um

128:08

we can't let it happen. Like it's like I

128:11

I'm not naive to how impossible this is.

128:14

And that doesn't mean we have to do

128:16

everything to make it not happen. And I

128:20

do believe that this is not destined or

128:22

in the laws of physics that everything

128:23

has to just keep going on the default

128:25

reckless path. That was totally possible

128:27

with social media to do something else.

128:28

I gave an outline for how that could be

128:30

possible. It's totally possible to do

128:31

something else with AI now. And if we

128:33

were clear and if everyone did

128:35

everything and pulled in that direction,

128:37

it would be possible to choose a

128:38

different future.

128:47

I know you don't believe me. I

128:49

>> I do believe that it's possible. I 100%

128:51

do. But I think about the balance of

128:53

probability and that's where I feel less

128:55

um less optimistic up until a moment

128:59

which might be too late where something

129:01

happens

129:02

>> and it becomes a emergency for people.

129:06

>> Yep.

129:07

>> But here we are knowing that we we are

129:08

self-aware. All of us sitting here, all

129:10

these like human social primates, we're

129:11

watching the situation and we kind of

129:13

all feel the same thing, which is like,

129:15

oh, it's probably not going to be until

129:17

there's a catastrophe and then we'll try

129:19

to do something else, but by then it's

129:21

probably going to be too late. And

129:23

sometimes, you know, you can say we can

129:25

wait, we can not do anything and we can

129:28

just race to sort of super intelligent

129:29

gods we don't know how to control and

129:31

we're at that point our only options for

129:34

response if we lose control to something

129:35

crazy like that. Our only option is

129:37

going to be shutting down the entire

129:39

internet or turning off the electricity

129:40

grid. And so relative to that, we could

129:44

do that crazy set of actions then or we

129:47

could take much more reasonable actions

129:49

right now,

129:50

>> assuming super intelligence doesn't just

129:52

turn it back on. which is why we have to

129:54

do it before. That's the So, exactly.

129:56

So, we might not even have had that

129:57

option which but that's why it's like I

130:00

I invoke that because it's like that's

130:01

something that no one wants to say. And

130:03

I'm not saying that to fear people. I'm

130:04

saying I'm saying that to say if we

130:07

don't want to have to take that kind of

130:08

extreme action relative to that extreme

130:10

action, there's much more reasonable

130:11

things we can do right now.

130:13

>> Mhm.

130:13

>> We can pass laws. We can have, you know,

130:16

the Vatican make an interfaith statement

130:17

saying we don't want super intelligent

130:19

gods that are not, you know, that are

130:21

created by people who don't believe in

130:22

God. We can have countries come to the

130:24

table and say just like we did for

130:26

nuclear non-prololiferation, we can

130:28

regulate the global supply of compute in

130:30

the world and know we're monitoring and

130:31

enforcement all of the computers. What

130:33

uranium was for nuclear weapons, uh, all

130:36

these advanced GPUs are for building

130:38

this really crazy technology. And if we

130:41

could build a monitoring and

130:42

verification infrastructure for that,

130:44

which is hard, and there's people

130:45

working on that every day, you can have

130:47

zero knowledge proofs that have people

130:48

say limited, you know, semi-confidential

130:51

things about each other's clusters. You

130:52

can build agreements that would enable

130:54

something else to be possible. We cannot

130:56

ship AI companions to kids that cause

130:58

mass suicides. We cannot build AI tutors

131:01

that just cause mass attachment

131:02

disorders. We can do narrow tutors. We

131:04

can do narrow AIs. We can have stronger

131:06

whistleblower protections. We can have

131:07

liability laws that don't repeat the

131:09

mistake of social media so that harms

131:11

are actually on balance sheets that

131:12

creates the incentive for more

131:14

responsible innovation. There's a

131:16

hundred things that we could do. And for

131:18

anybody who says it's not possible, have

131:20

you spent a week dedicated in your life

131:22

fully trying?

131:24

If you say it's impossible, if you're a

131:25

leader of the lab and say we're never

131:26

going to be possible to coordinate,

131:27

well, have you tried? Have you tried

131:30

with everything?

131:32

If you really if this was really

131:33

existential stakes, have you really put

131:35

everything on the line? We're talking

131:37

about some of the most powerful,

131:39

wealthy, most connected people in the

131:41

entire world. If the stakes were

131:43

actually existential,

131:45

have we done everything in our power yet

131:47

to make something else happen? If we

131:50

have not done everything in our power

131:51

yet, then there's still optionality for

131:53

us to take those actions and make

131:55

something else happen.

131:59

As much as we are accelerating in a

132:01

certain direction with AI, there is a

132:04

growing counter movement which is giving

132:06

me some hope.

132:07

>> Yes.

132:08

>> And there are conversations that weren't

132:10

being had two years ago which are now

132:11

front and center. Y

132:12

>> these conversations being a prime

132:14

example and the fact that

132:15

>> your podcast having Jeff Hinton and

132:17

Roman on talking about these things

132:19

having the friend.com uh which is like

132:21

that pendant that the AI companion on

132:23

your pendant you see these billboards in

132:25

New York City that people have graffiti

132:26

on them and saying we don't want this

132:27

future. You have graffiti on them saying

132:29

AI is not inevitable. We're already

132:31

seeing a counter movement just to your

132:32

point that you're making.

132:33

>> Yeah. And I that gives me hope and the

132:35

fact that people have been so receptive

132:37

to these conversations about AI on the

132:38

show has blown my mind because I was

132:42

super curious and it's slightly

132:43

technical so I wasn't sure if everyone

132:45

else would be but the response has been

132:46

just profound everywhere I go. So I

132:48

think there is hope there. There is hope

132:50

that humanity's deep Maslovian needs and

132:54

greater sense and spiritual whatever is

132:56

is going to prevail and win out and it's

132:58

going to get louder and louder and

132:59

louder. I just hope that it gets loud

133:01

enough before we reach a point of no

133:03

return.

133:04

>> Y

133:04

>> and

133:06

you're very much leading that charge. So

133:08

I thank you for doing it because

133:10

you know you'll be faced with a bunch of

133:12

different incentives. I can't imagine

133:13

people are going to love you much

133:14

especially in big tech. I think people

133:15

in big tech think I'm a doomer. I think

133:16

that's why Samman won't come on the

133:18

podcast is I think he thinks I'm a

133:19

doomer which is actually not the case. I

133:21

love technology. I've put my whole life

133:23

on it. Yeah. It's like I don't see it as

133:25

the as evil as much as I see a knife as

133:28

being

133:28

>> good at cutting my pizza and then also

133:30

can be used in malicious ways but we we

133:32

regulate that. So I'm a big believer in

133:34

conversation even if it's uncomfortable

133:37

in the name of progress and in the

133:38

pursuit of truth. Actually truth becomes

133:40

before progress typically. So that's my

133:42

whole thing and

133:43

>> people know me know that I'm not like

133:46

>> political either way. I sit here with

133:48

Camala Harris or Jordan Peterson or I'd

133:50

sit here with Trump and then I sit here

133:51

with Gavin Newsome and uh Mandani from

133:54

New York. I really don't.

133:56

>> Yep. This is not a political

133:56

conversation.

133:57

>> It's not a political conversation. I

133:58

have no track record of being political

133:59

in any in any regard. Um so,

134:02

>> but it's about truth.

134:04

>> Yes.

134:04

>> And that's exactly what I what I applaud

134:06

you so much for putting front and center

134:08

because,

134:10

you know, it's probably easier not to be

134:11

in these times. It's probably easier not

134:13

to stick your head above the parapit in

134:15

these times and to and to be seen as a

134:17

as a doomer.

134:19

>> Well, I'll invoke Jiren Laneir when he

134:22

said in the film The Social Dilemma, the

134:24

critics are the true optimists

134:26

>> because the critics are the ones being

134:27

willing to say this is stupid. We can do

134:30

better than this. That's the whole point

134:32

is not to be a doomer. Doomer would be

134:34

if we just believe it's inevitable and

134:35

there's nothing we can do. The whole

134:36

point of seeing the bad outcome clearly

134:39

is to collectively put on our hand the

134:41

steering wheel and choose something

134:42

else.

134:43

>> A doomer would not talk.

134:44

>> A doomer would not confront it.

134:45

>> A doomer would not confront it. You

134:47

would just say then there's nothing we

134:48

can do.

134:49

>> Shan, we have a closing tradition on

134:50

this podcast where the last guest leaves

134:51

a question for the next not knowing who

134:52

they're leaving it for.

134:53

>> Oh, really?

134:54

>> Question left for you is if you could

134:56

slash had the chance to relive a moment

134:58

or day in your life, what would it be

135:01

and why?

135:03

I think um reliving a beautiful day with

135:06

my mother before she died would probably

135:08

be one.

135:09

>> She passed when you were young.

135:11

>> Uh no, she passed in 2018 from cancer.

135:16

And uh what immediately came to mind

135:19

when you said that was just the people

135:21

in my life who I love so much and um

135:25

just reliving the most beautiful moments

135:27

with them.

135:30

How did that change you in any way

135:33

losing your mother in 2018?

135:35

What fingerprints has it left?

135:38

>> I think I just even before that, but

135:41

more so even after she passed, I just

135:43

really

135:45

care about protecting the things that

135:47

ultimately matter. Like there's just so

135:48

many distractions. There's money,

135:50

there's status. I don't care about any

135:52

of those things. I just want the things

135:54

that matter the most on your deathbed.

135:55

I've had for a while in my life deathbed

135:58

values. Like if I was going to die

136:00

tomorrow,

136:03

what would be most important to me and

136:05

have every day my choices informed by

136:08

that? I think living your life as if

136:11

you're going to die. I mean, Steve Jobs

136:12

said this in his graduation speech. Um,

136:14

I took an existential philosophy course

136:15

at Stanford. It's one of my favorite

136:17

courses ever. And I think that that

136:20

carpedium like live living truly as if

136:24

you might die that today would be a good

136:25

day to die and to stand up as fully as

136:30

you would like what would you do if you

136:31

were going to die not tomorrow but like

136:33

soon like what would actually be

136:34

important to you I mean for me it's like

136:38

protecting the things that are the most

136:39

sacred

136:40

>> contributing to that

136:42

>> life like the continuity of this thing

136:44

that we're in the most beautiful thing I

136:48

I think it's said by a lot of people,

136:49

but even if you got to live for just a

136:51

moment, just experience this for a

136:53

moment. It's so beautiful. It's so

136:55

beautiful. It's so special. And like I

136:58

just want that to continue for everyone

137:01

forever ongoingly so that people can

137:03

continue to experience that. And

137:06

you know, there's a lot of forces in our

137:08

society that that take away people's

137:10

experience of of that possibility. And

137:15

you know, as someone with relative

137:16

privilege, I want my life or at least to

137:19

be devoted to making things better for

137:20

people who don't have that privilege.

137:23

And that's how I've always felt. I think

137:25

one of the biggest bottlenecks for

137:27

something happening in the world is mass

137:28

public awareness. And I was super

137:31

excited to come here and talk to you

137:32

today because I think that you have a

137:34

platform that can reach a lot of people.

137:36

And people, you're a wonderful

137:38

interviewer and people I think can

137:40

really hear this and say maybe something

137:42

else can happen. And so for me, you

137:45

know, I spent the last several days

137:47

being very excited to talk to you today

137:48

because this is one of the highest

137:50

leverage moves that in my life that I

137:52

can that I can hopefully do. And I think

137:54

if everybody was doing that for

137:55

themselves in their lives towards this

137:57

issue and other issues that need to be

137:58

tended to,

138:00

you know, if everybody took

138:02

responsibility for their domain, like

138:04

the place the places where they had

138:05

agency and just showed up in service of

138:07

something bigger than themselves, like

138:08

how quickly the world could be very

138:10

different very quickly if everybody was

138:12

more oriented that way. And obviously we

138:14

have an economic system that disempowers

138:15

people where they can barely make ends

138:17

meet and put, you know, if they had an

138:19

emergency, they wouldn't have the money

138:20

to cover it. in that situation, it's

138:22

hard for people to live that way. But I

138:24

think anybody who has the ability to

138:29

uh make things better for others and and

138:30

is in a position of privilege, life

138:32

feels so much more meaningful when

138:33

you're showing up that way.

138:37

On that point, you know, from starting

138:38

this podcast and from the podcast

138:40

reaching more people, there's several

138:41

moments where, you know, you feel a real

138:42

sense of responsibility, but there

138:44

hasn't actually been a subject where I

138:46

felt a greater sense of responsibility

138:48

when I'm in the shower late at night or

138:50

when I'm doing my research, when I'm

138:51

watching that Tesla shareholder

138:53

presentation than this particular

138:56

subject.

138:57

>> Mhm.

138:57

>> Um, and because I do feel like we're in

139:00

a re real sort of crossroads. Crossroads

139:03

is kind of speaks to a binary which I

139:04

don't love but I feel like we're at an

139:06

intersection where we have a choice to

139:07

make about the future. Yes. And having

139:10

platforms like me and you do where we

139:11

can speak to people or present ideas

139:13

some ideas that don't often get the most

139:15

reach I think is a great responsibility

139:17

and I'm it weighs heavy on my shoulders

139:20

these conversations.

139:21

>> Yeah. which is also why, you know, we'd

139:23

love to speak to maybe we should do a

139:26

round table at some point with if Sam

139:28

you're listening and you want to come

139:30

sit here, please come and sit here

139:31

because I'd love to have a round table

139:32

with you to get a more holistic view of

139:35

of your perspective as well.

139:37

>> Y

139:38

>> Tristan, thank you so much.

139:39

>> Thank you so much, Stephen. This has

139:40

been great.

139:41

>> You're a fantastic communicator and

139:42

you're a wonderful human and both of

139:44

those two things um shine through across

139:47

this whole conversation. And I I think

139:49

maybe most importantly of all, people

139:50

will feel your heart.

139:51

>> I hope so.

139:52

>> You know, when you sit with for three

139:53

hours with someone, you kind of get a

139:54

feel for who they are on and off camera.

139:56

But the feel that I've gotten a view is

139:58

not just someone who's very very smart,

139:59

very educated, very informed, but it's

140:01

someone that genuinely deeply really

140:02

gives a [ __ ] I you know, for a very for

140:05

reasons that feel very personal. Um, and

140:08

that PTSD thing we talked about where

140:10

>> PTSD,

140:11

>> it's very very true with you where

140:13

there's something in you which is I

140:15

think a little bit troubled by an

140:18

inevitability that others seem to have

140:20

accepted but you don't think we all need

140:22

to accept.

140:23

>> Yes.

140:24

>> And I think you can see something

140:25

coming. So, thank you so much for

140:26

sharing your wisdom today and I hope to

140:27

have you back again sometime soon.

140:29

Absolutely.

140:29

>> Hopefully when the wheel has been turned

140:30

in the direction that we all want.

140:32

>> Let's let's come back and celebrate uh

140:34

where we've made some different choices.

140:35

Hopefully.

140:36

>> I hope so. Please do share this

140:37

conversation everybody. I really really

140:39

appreciate that. And thank you so much

140:40

Tristan.

140:41

>> Thank you Stephen.

140:45

This is something that I've made for

140:47

you. I've realized that the direio

140:49

audience are striv

140:52

goals that we want to accomplish. And

140:54

one of the things I've learned is that

140:56

when you aim at the big big big goal, it

140:59

can feel incredibly psychologically

141:02

uncomfortable because it's kind of like

141:03

being stood at the foot of Mount Everest

141:05

and looking upwards. The way to

141:06

accomplish your goals is by breaking

141:08

them down into tiny small steps. And we

141:11

call this in our team the 1%. And

141:13

actually this philosophy is highly

141:15

responsible for much of our success

141:17

here. So what we've done so that you at

141:19

home can accomplish any big goal that

141:21

you have is we've made these 1% diaries

141:24

and we released these last year and they

141:26

all sold out. So I asked my team over

141:28

and over again to bring the diaries

141:30

back, but also to introduce some new

141:31

colors and to make some minor tweaks to

141:33

the diary. So now we have a better range

141:37

for you. So, if you have a big goal in

141:39

mind and you need a framework and a

141:41

process and some motivation, then I

141:43

highly recommend you get one of these

141:45

diaries before they all sell out once

141:47

again. And you can get yours now at the

141:49

diary.com where you can get 20% off our

141:52

Black Friday bundle. And if you want the

141:53

link, the link is in the description

141:55

below.

142:06

Heat. Heat. N.

Interactive Summary

The video discusses the existential risks posed by Artificial General Intelligence (AGI) and the rapid advancements in AI technology. It highlights the concerns of experts like Tristan Harris regarding the potential for AI to disrupt society, economies, and even humanity itself. The conversation emphasizes the

Suggested questions

6 ready-made prompts