Stanford Neuroscientist: Can’t Remember Your Dreams? Your Brain May Be Warning You!

Transcript

0:00

After many, many decades of people

0:01

debating this, you might have figured

0:03

out the reason why we dream. Yes. And

0:05

it's a simple answer. So if you go

0:07

blind, the visual cortex in the back of

0:09

the brain gets taken over by hearing and

0:12

by touch and by other things. In fact,

0:13

our colleagues at Harvard did an

0:15

experiment where they blindfolded

0:16

normally sighted people. And you could

0:18

start seeing that takeover happening

0:19

after 60 minutes. And that's when we

0:21

realized, wow, the purpose of dreaming

0:24

is to defend the visual territory from

0:26

takeover from the other senses. But what

0:29

fascinates me about brain plasticity and

0:31

what I've devoted my career to is

0:32

figuring out the way that we can be the

0:34

sculptors of our own brains and how it

0:36

gives us an opportunity to become the

0:39

kind of person we would like to be.

0:40

>> And can we do that?

0:42

>> Yes. Here's the thing. Your brain peaked

0:44

at the age of two. Okay. So at the

0:47

beginning you've got fluid intelligence,

0:49

meaning you could learn anything. But

0:50

now that you have grown up in this

0:52

world, you've got crystallized

0:54

intelligence, meaning you know how to

0:55

drive a car. You know how to operate a

0:57

cell phone. You know how to run a

0:58

business. And so your brain doesn't

0:59

require as much change, which means that

1:02

the structure of the brain is always

1:04

degenerating.

1:05

>> So what are the set of actions that will

1:06

fundamentally change my brain and make

1:08

me that type of person who's motivated

1:10

and disciplined and who has high agency

1:12

and attacks the world?

1:13

>> So this is something I've studied in my

1:15

lab for decades now. And the key is that

1:18

>> And what about AI and the social media

1:20

debate as it relates to brain

1:21

development?

1:22

>> Well, I happen to be a cyber optimist

1:24

for young people. I think it's going to

1:26

make them much smarter than the

1:28

generation that came before. And here's

1:30

why.

1:31

>> Interesting.

1:33

This is super interesting to me. My team

1:34

has given me this report to show me how many

1:36

of you that watch this show subscribe.

1:37

And some of you have told us according

1:39

to this that you were unsubscribed from

1:41

the channel randomly. So, a favor to ask

1:43

all of you, please could you check right

1:44

now if you've hit the subscribe button

1:46

if you are a regular viewer of the show

1:47

and you like what we do here. We're

1:49

approaching quite a significant landmark

1:50

on this show in terms of a subscriber

1:52

number. So, if there was one simple free

1:55

thing that you could do to help us, my

1:56

team, everyone here, to keep this show

1:58

free, to keep it improving year over

2:00

year and week over week, it is just to

2:02

hit that subscribe button and to double

2:03

check if you've hit it. Only thing I'll

2:05

ever ask of you, do we have a deal? If

2:07

you do it, I'll tell you what I'll do.

2:08

I'll make sure every single week, every

2:11

single month, as we fight harder and

2:12

harder and harder and harder to bring

2:13

you the guests and conversations that

2:15

you want to hear, I've stayed true to

2:16

that promise since the very beginning of

2:17

the Diary of a CEO, and I will not let you

2:20

down. Please help us. Really appreciate

2:22

it. Let's get on with the show.

2:31

Dr. David Eagleman, what made you so

2:33

fascinated about the brain? And why

2:35

should everybody listening be fascinated

2:36

about the brain as well? Here's what I

2:38

think it is. When I was 8 years old, I

2:40

fell off of the roof of a house that was

2:42

under construction and I fell 12 feet

2:45

and broke my nose on the floor below.

2:48

But the whole thing seemed to take a

2:49

long time. I did the calculation and

2:52

figured out that it only took 0.86 of a

2:54

second to get from the top to the

2:56

bottom. And I couldn't figure out why it

2:57

seemed to have taken so long. So I think

2:59

that got me really interested in

3:01

perception and the machinery by which we

3:05

view the world and take it in and what is

3:07

actually real versus what's a

3:10

construction of the brain. And that's

3:12

what I've devoted my career to:

3:13

figuring out how the brain, which is

3:15

locked inside the skull (it's about

3:18

three pounds), constructs this

3:20

model of the world and which things we

3:22

can take as reality and which things we

3:25

shouldn't.

3:26

>> I think most people don't even know

3:27

there's a brain there, almost. It

3:29

sounds like a strange thing to say, but

3:31

most of us haven't

3:32

really seen our own brains at all. We've

3:34

never been able to touch our own brains

3:35

at all. So, it's easy to fall into

3:38

the trap of thinking that everything I

3:40

experience is true and is reality. So,

3:43

I'm wondering how a deeper understanding

3:45

of all this stuff can help me live a

3:46

better life.

3:47

>> Yeah. One of the things that I started

3:50

writing about years ago is that I think

3:51

we often think of

3:54

ourselves as individuals, meaning not

3:57

divisible into other things. But really,

4:01

you are a team of rivals. So, you've got

4:04

all these neural networks that have

4:06

different drives making different

4:07

suggestions to you.

4:08

>> What's a neural network?

4:10

>> Um, so in the brain, you've got 86

4:12

billion cells called neurons. And these

4:15

are communicating with each other at a

4:17

blindingly fast rate. Many of these

4:18

cells are hooked up in networks. So,

4:20

they're, you know, this guy's talking to

4:22

this guy and this guy, and they're all

4:23

in particular networks. The thing is,

4:25

you can actually get competing networks.

4:29

So, for example, Stephen, if I drop some

4:31

chocolate chip cookies in front of you,

4:33

part of your brain wants to eat it. It's

4:34

a good energy source. Part of your brain

4:35

says, "Don't eat it. I'll gain weight."

4:37

Part of you says, "Okay, I'll eat one,

4:38

but I'll go to the gym tonight." The

4:40

point is you are arguing with yourself.

4:44

You are conflicted. This is what makes

4:46

humans so interesting is that we have

4:49

all these voices trying to drive us to

4:52

different conclusions about our

4:53

behavior.

4:54

The way that your ship of state moves

4:57

depends on the vote of the neural

4:59

parliament at any time. So understanding

5:02

this I think is really critical to

5:04

navigating our own lives because all of

5:06

us do things where retrospectively we

5:08

regret it. We say I shouldn't have eaten

5:10

that whole bag of chips or done the, you

5:13

know, the alcohol or the drugs or whatever.

5:15

Like, everybody has regrets all the time

5:17

with things and it's because you have

5:20

different voices in charge at different

5:22

times. Okay.

5:24

>> Part of what this leads to is what we

5:26

call the Ulysses contract. So a Ulysses

5:30

contract is where you do something now

5:31

to prevent yourself from behaving badly

5:33

in the near future. Just as an example,

5:36

you know, when people go to Alcoholics

5:38

Anonymous, the first thing they're told

5:39

is clear all the alcohol out of the

5:41

house. Because even if you feel like,

5:43

look,

5:43

I'm in a moment of sober reflection. I

5:46

don't want to ever drink again. If you

5:47

have alcohol in the house, you're going

5:48

to bust into that cabinet at some point

5:50

on a festive Saturday night or a lonely

5:53

Sunday night or whatever. So, what you

5:55

do is you constrain your future behavior

5:58

by setting things up in the right way so

6:01

the future you can't

6:03

behave badly. We naively think, okay,

6:06

well, I know who I am. I'm just one

6:08

person. But you're not. And under

6:10

different circumstances, you're tempted

6:12

by different things and you'll do

6:13

different kinds of behavior. So having a

6:16

sense of what's going on under the hood

6:18

gives us an opportunity to be more

6:21

closely aligned with the kind of person

6:23

we would like to be.

6:24

>> Because it feels like there's just one...

6:26

well, I do argue with myself in my head

6:28

sometimes but it feels like there is

6:31

just one me

6:32

>> And so when I hear that voice say, Steve,

6:34

you should have that cookie and it's

6:35

1:00 a.m. And then the other voice says,

6:36

"No, you shouldn't." I think it's kind

6:38

of the same person just tussling with

6:40

himself,

6:41

>> Right? Well, but that tussling with

6:42

himself implies different political

6:44

parties that are all battling it out.

6:46

You know, when you look at a parliament,

6:47

you've got all these political parties

6:49

that all love their country. They just

6:50

have different ideas of how to steer it.

6:53

And this is what's going on uh in in the

6:56

brain all the time.

6:57

>> So, what does one do about that? Do

6:58

I have to make a Ulysses

7:01

contract? I think it's very useful to

7:03

make that sort of thing. But also just

7:05

understanding oneself. I mean, you

7:07

know, there was this Greek

7:08

admonition to know thyself. This was a

7:11

sign they had in various places, various

7:13

temples and stuff. But I think that

7:15

becomes know thyselves. And the better we

7:19

know ourselves, the more we can get rid

7:21

of the illusion that we are one person.

7:23

Because all any of us need to do is look

7:25

back on our behavior to say, "Oh yeah,

7:28

in some circumstances I would do that.

7:29

And in other circumstances I think it's a

7:31

terrible idea." So this is all to the

7:33

goal of understanding who you are.

7:36

>> What are the big misconceptions about

7:37

the brain that people have gone through

7:40

their life believing? I mean that's one

7:41

of them. Something that is true that

7:42

kind of could fall in place of that is

7:44

just this fundamental idea that our

7:46

brains are plastic or sort of adaptable

7:50

because when I found out that I could

7:51

change my brain by what I do, I found

7:52

that to be really really inspiring.

7:56

>> Yes, that that's exactly right. So brain

7:58

plasticity, if someone hasn't heard that

8:00

term before, it sounds like a weird

8:01

term, but the reason it came about 100

8:04

years ago is because the great

8:06

psychologist William James pointed out

8:07

that, you know, if you take a piece of

8:09

plastic, what we like about that

8:10

material that we call plastic is that

8:12

you can mold it into a shape and it'll

8:14

hold that shape. And that's what your

8:16

brain does. So if I ask you the name of

8:18

your third grade teacher, you can

8:19

remember that name even though it's been

8:21

a long time because your neural networks

8:25

changed and held on to that piece of

8:27

information. Okay? Well, our whole lives

8:30

our brains are changing every moment. So

8:33

now we have certain doors that close at

8:36

different times. So just as an example,

8:39

um you need to learn language in the

8:41

first several years of your life. If you

8:43

don't learn language, you can never get

8:45

the concept of language. Your brain will

8:47

never figure that out.

8:48

>> You're not saying you can't learn a new

8:49

language as an adult. You're saying the

8:51

concept of

8:51

>> the concept of language, the concept

8:53

that I can name things and I can ask for

8:55

things and so on. Just that never clicks

8:57

in the brain. For example, in Romania at

9:00

the fall of Ceaușescu, there were tens

9:02

of thousands of kids in the orphanages

9:05

because their parents had been killed.

9:06

There were too many kids. And so the staff

9:08

there said, "Look, the kids will get,

9:11

you know, clingy if you pay too much

9:13

attention to them. So here's what we're

9:14

going to do. We're going to feed the

9:14

kids, but we're not going to hold them

9:16

and we're not going to talk to them."

9:18

And all these children grew up with real

9:20

cognitive deficits as a result. Here's

9:23

the thing about brain plasticity. Human

9:25

beings have a similar brain to all our

9:28

neighbors in the animal kingdom. If you

9:30

compare our brain to a horse brain, a

9:32

dog brain, anything like that, it's the

9:33

same general structures and stuff. But

9:36

what we have is much more of the wrinkly

9:38

outer bit called the cortex. It's the

9:40

outer 3 mm. And maybe we'll come back to

9:43

why that matters so much. But the other

9:46

thing that mother nature tweaked with

9:48

us, it's small genetic tweaks. But we

9:50

have much more plasticity, adaptability

9:53

such that when a horse drops into the

9:56

world, it's doing the same thing that

9:57

horses did 100,000 years ago. It's just,

9:59

you know, eat and mate. But when a human

10:01

drops in the world, we learn everything

10:04

that's happened before us. And then we

10:06

springboard off the top of that. So we

10:08

living in the 21st century, we say, "Oh

10:11

great, you know, physics, math, this,

10:12

that, art, blah, blah, great. We got

10:14

everything that's happened before us.

10:15

Now let's do our own thing." And that's

10:17

what's so special about the plasticity

10:19

of the human brain, the adaptability of

10:21

it. The downside, the gamble is that

10:25

mother nature drops human brains into

10:28

the world kind of half-baked and we then

10:31

get to absorb everything. But in the

10:32

rare circumstance where you're not

10:34

getting the right input, then that

10:37

ends up really in trouble because it's

10:39

only half-baked. So when it comes to

10:41

language, we can learn multiple

10:43

languages when we're young. That's very

10:44

easy, but it gets harder and harder as

10:46

that goes along. And various other

10:48

things become harder. And here's why.

10:50

It's because I I mentioned this earlier,

10:53

but the job of the brain is to make a

10:55

model of the world so it can operate

10:57

within it. So, for example, you're an

11:00

entrepreneur and you love doing

11:02

business. So, you get it. You go, okay,

11:04

here's how, you know, here's how you

11:06

structure a business. Here's how you hire.

11:08

Well, here's how you set up a board.

11:09

Well, you're doing everything because

11:11

you've got a really rich internal model

11:13

of how to structure a business. That's

11:15

what the brain wants to do is get that

11:18

stuff right. As a result, if you

11:21

suddenly ended up, you know, taking a

11:23

trip to Mars and there's a whole very

11:25

different society there that does

11:27

businesses very differently, you would

11:29

have to relearn stuff really quickly.

11:32

So, here's the thing. You went from

11:35

having a brain that had high fluid

11:37

intelligence to now having a brain that

11:40

has high crystallized intelligence. What

11:42

that means is at the beginning you can

11:44

learn anything. You could learn any

11:46

language. You could have dropped into

11:47

any area. You could have dropped into

11:49

13th century Japan. When I was young?

11:51

>> When you were young, when you were a

11:52

baby, if you had dropped out of the womb

11:54

in, you know, 10th century Mongolia, you

11:57

would have said like, "Okay, cool. Learn the

11:59

language." You would be a 10th

12:00

century Mongolian. But as it happens,

12:03

you dropped into this era, you know, a

12:06

certain place and time and neighborhood

12:07

and culture and family. And so you learn

12:09

that, and that's who you become: that

12:11

person. We often think that plasticity

12:14

diminishes as you age. But it's not

12:17

simply that it's diminishing. It's that

12:18

you are getting the right answers about

12:22

how to operate in the world. And so you

12:24

don't have to change as much. Your brain

12:26

doesn't require as much change.

12:28

>> What if I want to change?

12:30

>> Yes. So it turns out you still can

12:32

change. That's the key is that the

12:35

reason brains change less and less is

12:37

because they don't have to. But when

12:40

things get upside down, just as one

12:42

example, everything about the pandemic

12:44

really stunk, except for one thing, I

12:47

think the tiny silver lining is that all

12:49

of us had to reassess. Oh my gosh, wait,

12:54

how is the world working? I thought I

12:55

knew how the world worked, but now I

12:57

don't know if there's going to be toilet

12:59

paper at the store. I don't know if the

13:00

bank's going to be open. I don't know if

13:02

I can get coffee at the coffee shop.

13:04

Like, everything was different. As awful

13:06

as it was, it's really useful to

13:09

challenge your internal model of the

13:11

world and get to do that as an adult. We

13:13

don't usually get to.

13:15

>> So, if I want to change, what would you

13:16

recommend that I do? If I

13:18

want to change who I am, say I'm

13:20

stubborn, I'm not motivated,

13:22

um, and I want to be a different person.

13:24

>> The key is challenge. The key is seeking

13:26

challenge. So, it turns out that where

13:28

we always want to be is in between the

13:31

levels of frustrating and achievable.

13:33

and you want to take on new tasks. You

13:35

want to seek novelty to find yourself in

13:37

that zone and push yourself to do things

13:40

that you just haven't done before. And

13:42

one of the things that's so wonderful

13:44

about the modern world, you know,

13:46

everyone's got complaints about the

13:47

internet and social media and stuff like

13:48

that, but the good news is it

13:51

exposes you to so much more than you

13:53

ever even knew was out there. The key is

13:56

to actively seek those challenges and

13:58

seek new things and seek to become

14:00

expert in various sorts of fields. And

14:02

I think the key is that once you

14:04

become good at something, you have

14:07

to drop that and take on something

14:08

you're not good at. This is the best

14:10

thing that you can do for your brain.

14:12

The reason is because what you're doing

14:14

is you're constantly building new

14:15

roadways and pathways in the brain.

14:17

There's a study that's been going on

14:19

for decades now called the Religious

14:22

Orders Study where a bunch of Catholic

14:24

nuns agreed to donate their brains for

14:26

autopsy when they passed away. What the

14:29

researchers discovered when they looked at

14:30

the brains carefully is that some

14:33

fraction of these nuns had Alzheimer's

14:35

disease. Their brains were physically

14:37

degenerating with the ravages of this

14:40

dementia, but they didn't show any of

14:43

the cognitive deficits that one normally

14:45

has. They didn't seem to be having

14:47

memory problems and so on. It turns out

14:49

it's because all these nuns lived in

14:52

these convents till the day they died.

14:54

They had social challenges and they had

14:56

fights with their fellow sisters and

14:58

they played games with their fellow

14:59

sisters and they had chores

15:01

and responsibilities and they were doing

15:03

stuff. What that means is even as the

15:05

brain tissue was physically

15:07

degenerating, they were making new

15:09

roadways and bridges all the time.

15:12

>> And so that's what kept them cognitively

15:14

healthy. We call that cognitive reserve.

15:17

Contrast this with people who

15:19

retire at 65 and they go home and they

15:21

watch television and their social

15:23

circles shrink and so on. That's when

15:25

you've really got concerns because

15:27

you're not building the new pathways. Is

15:29

there data to support that when you

15:31

retire, if you retire early or if you

15:33

retire say in your 60s, it increases

15:36

your probability of an earlier death or

15:38

cognitive decline? Almost certainly with

15:41

cognitive decline because you're just

15:43

not getting the challenge at that point.

15:45

You're just coasting on your internal

15:46

model.

15:48

It's tragic, but what happens

15:50

often is that people's hearing gets

15:51

worse. And so by the time they retire,

15:52

let's say in their mid-60s, it's not

15:55

really that fun for them to go out to

15:56

parties and restaurants anymore because

15:58

they can't quite hear. And so there

16:00

are all these converging reasons why

16:02

their social lives shrink. But it turns

16:04

out social life is one of the most

16:07

important things that we can do for our

16:08

brains because there's an expression we

16:11

sometimes use in neuroscience, which is

16:12

that nothing is as hard for the brain as

16:14

other people, because you never know

16:15

what the other person's going to say and

16:17

do and how they'll react emotionally and

16:19

so on. So, you're constantly on your

16:21

toes with other people. And if you're

16:22

not doing that anymore, that ends up

16:24

being a problem.

16:25

>> Hm,

16:27

interesting. And I'm 33 years

16:31

old, so if you were to plot where my

16:33

brain is on like a graph of decline,

16:37

is it the case that I should be doing

16:38

as much as I can now to build as many

16:40

pathways as I can so that when I'm 80, my

16:44

decline sort of levels out in a

16:46

better place? Oh yeah, for sure. But

16:49

this is true for many reasons actually.

16:51

Okay, so look, the truth is your brain

16:53

peaked at two at the age of two because

16:56

that's when you get the most connections

16:58

between neurons, between these cells in

17:00

the brain. You get this: at first you're

17:03

born with these 86 billion neurons and

17:05

they connect and connect and connect and

17:07

it finally becomes like an overgrown

17:09

garden at the age of two and from there

17:10

you're pruning. From there you're taking

17:12

connections away. Now it happens that

17:14

that's not a bad thing. That's a good

17:16

thing because that's how you're

17:18

resonating with the world that you are

17:20

in,

17:21

you know, 21st century London and LA

17:24

versus, you know, 10th century Mongolia

17:26

because you're just strengthening

17:29

those pathways that resonate and you're

17:31

getting rid of everything else. Okay,

17:32

fine. But over time, your brain cells

17:35

die. You know, every time you hit your

17:36

head on something or whatever, your

17:38

brain cells are going down. Um, so in

17:40

that sense, you've peaked. But your

17:42

crystallized intelligence that you've

17:44

been building your whole life, you know,

17:46

that keeps going and you'll have

17:48

decades ahead of you where you can start

17:49

doing stuff. But yes, the reason to

17:51

learn everything you can is because all

17:53

that stuff cashes out at various points

17:56

in your life when you're starting your

17:58

next business or you're, you know,

18:00

wanting to do the next great thing where

18:02

you're surfing the wave of AI. You

18:04

know, you'll say, "Oh, I learned this

18:06

thing when I was 16. I learned this

18:07

thing when I was 22." And these are

18:08

paying off now. I think I

18:10

heard Andrew Huberman say that one of the

18:12

most fascinating discoveries of the last

18:14

century is a particular part of the

18:16

brain called the anterior mid-cingulate cortex

18:19

and it links to what you were saying a

18:21

second ago about challenge and doing

18:23

things that are difficult.

18:25

>> Yeah, it turns out that area of the

18:27

brain is involved, and other networks as

18:29

well, because when you're doing something

18:32

new and challenging and difficult, you

18:34

have stress and anxiety. Your whole

18:37

brain is active. Let's say I measured

18:40

your brain even with something like EEG,

18:42

electroencephalography. That's where I

18:44

stick electrodes on the outside. Let's

18:45

say I measure your brain and my brain.

18:47

We're doing something that, let's say,

18:49

you're an expert at. What's something

18:50

you're really good at? Juggling? I don't

18:53

know, some physics?

18:54

>> Let's go for juggling.

18:55

>> Okay. Let's say you're an expert

18:56

juggler. Let's say I've never juggled.

18:58

Okay. If we're both juggling, you're

18:59

going to be much better than I am. But

19:01

your brain will be less active. You

19:04

won't have as much activity in your

19:06

brain. All my brain is on fire with

19:08

activity. Why? Because I'm trying to

19:10

figure out, okay, where do I put my hand,

19:12

how do I throw this, and blah blah blah.

19:13

So when I'm a novice at something my

19:15

brain is using much more activity not

19:18

just the anterior mid-cingulate but

19:20

tons of activity all over because I'm

19:21

trying to figure out the rules, I'm

19:22

trying to figure out what's going on. You,

19:24

as an expert, you know, you got it. You

19:26

don't need to burn much

19:27

activity. This is what the brain's goal

19:29

is, to say, hey, once I've practiced

19:31

something long enough, once I get something

19:33

about the world, I'm going to burn it

19:34

deeper and deeper into the circuitry. So

19:36

I don't have to burn a lot of energy on

19:38

it.

19:38

>> On this part of the brain, the anterior

19:39

mid-cingulate cortex, Andrew Huberman was

19:41

saying it's larger in people that do

19:43

things that they basically don't want to

19:44

do: hard things. If you spend your life

19:47

doing things you don't want to do, then

19:48

it happens to be bigger. And so people

19:49

have now thought of this part of the

19:51

brain almost like the willpower muscle

19:52

because for some reason those that are

19:54

doing hard things have bigger ones and

19:56

those that are not have smaller ones. I

19:58

mean it wouldn't be so much the

20:00

willpower muscle. It would be some

20:01

indication retrospectively of how hard

20:04

you have worked. Look, the fact is you

20:07

can see changes in brain size with lots

20:09

of things. I'll give you an example. If

20:11

you are a pianist, if you play piano,

20:14

then we can actually see physical

20:16

changes in your motor cortex. This is

20:18

the part of the brain essentially

20:20

underneath where you would wear

20:21

headphones. For those who are looking

20:22

visually, it's this red part here. You

20:25

actually get a bigger loop of tissue

20:28

here than you do in a normal brain. Why?

20:31

Because you're doing so much fine motor

20:33

activity with your fingers with both

20:35

hands. Okay? In contrast, if you're a

20:38

violinist,

20:40

you're only really doing that kind of

20:41

detailed activity with one hand. The

20:42

other hand is just bowing. And so you

20:44

only get that activity here in one half

20:47

of the brain for violinists. So I can

20:49

look at a brain and tell, hey, is the

20:51

person a pianist or a violinist or

20:53

neither? I can tell just by looking at

20:54

the motor cortex because you see

20:56

changes in the brain based on what you

21:00

do. For example, jugglers, people who

21:02

play music, even you can tell this with

21:04

medical students who study for final

21:05

exams. You actually see changes in the

21:07

distribution of their cortex.

21:10

>> Why would it be getting bigger?

21:12

>> The reason is the brain's devoting more

21:14

real estate to that. In this case, let's

21:17

say we're talking about fingers on a

21:18

piano or a violin. The brain is devoting

21:20

more; there's more relevance to that, and

21:24

so it gets more real estate so that you can

21:26

do it better in the future.

21:28

>> Exactly. The key about the cortex this

21:30

wrinkly outer part is that it is a

21:32

one-trick pony. This is often overlooked

21:34

because even this brain that I'm holding

21:36

here uh is color-coded so that we think

21:39

oh okay that's clearly labeled this

21:40

that's clearly labeled that and so on.

21:42

But in fact it's all the same stuff and

21:45

it can change. So for instance, if you

21:47

are born blind, then this area that we

21:50

normally call the visual cortex gets

21:52

taken over by the rest of the brain. If

21:54

you're born deaf, then this part that we

21:56

call the auditory cortex gets taken

21:58

over. It gets devoted to other tasks.

22:00

And so this whole system is very very

22:03

fluid. And this is what fascinates me

22:04

about brain plasticity is the way that

22:07

we can be the sculptors of our own

22:09

brains because we can devote ourselves

22:13

to particular things and have the brains

22:16

real estate get involved in that. So if

22:19

I was currently someone that couldn't

22:20

get out of bed, I didn't have a lot of

22:22

discipline or motivation and I wasn't

22:25

very good at committing myself to hard

22:27

things.

22:28

With everything you know about the

22:29

brain, is it possible to take a set of

22:31

actions that will fundamentally change

22:33

my brain and make me that type of person

22:35

who runs marathons, who does hard

22:38

things, who's motivated and disciplined,

22:39

and who has high agency and attacks the

22:41

world?

22:42

>> Yes. Yeah. But it's much more than

22:44

simply resolve because I mean just look

22:47

at New Year's resolutions. You know, by

22:49

February, most people have dropped

22:50

most of them. So, it's really a

22:52

psychology problem about figuring out

22:55

okay, what are the things that motivate

22:57

me? So, let's say you want to become a

22:59

marathon runner. You've got that distant

23:01

dream. You figure out like what actually

23:03

motivates me in the short term? Who am I

23:05

trying to impress? What am I trying to

23:07

accomplish in my life? How can I

23:10

structure things like this Ulysses

23:12

contract that I talked about earlier

23:14

where I'm actually locking myself into a

23:16

contract? Like, you know, I call Bob and

23:19

I say, "I will meet you every morning at

23:21

7:00 and we're going to run until we

23:23

drop." Like once I've committed to those

23:25

sorts of things, that's how you set

23:27

things up so that you do the right

23:29

thing.

23:29

>> It's a bit of a cycle, right? Because

23:30

then my brain will adapt and then

23:32

presumably that will make it easier for

23:33

me to run.

23:34

>> Yeah.

23:35

>> And then I'll run more and then my brain

23:36

will adapt.

23:37

>> That's right.

23:38

>> And the cycle continues.

23:39

>> And it's not just your brain, of course.

23:40

In this case, it's your body. You're

23:41

getting better. You're getting stronger.

23:42

You don't get as out of breath. And so

23:44

all these things help. Exactly. But in

23:46

order to keep the cycle going, you need

23:48

to figure out what is spinning this

23:50

flywheel and what are all the other

23:52

things in your life. Whether good

23:54

motivations or bad, it doesn't matter.

23:56

You just figure out what it is that you

23:58

can do to get there.

24:00

>> Are there certain physical exercises

24:02

that are particularly good for the brain

24:03

from what you've understood?

24:05

>> The general story is exercise is really

24:08

important for the brain. I'll give you

24:09

just one example of that, which is

24:11

there's still this debate going on about

24:13

whether we get new neurons in the brain.

24:16

The general story has always been you're

24:18

born with 86 billion neurons and those

24:20

slowly die with time. But in rats, for

24:24

example, there is a little trickle of

24:26

new cells, new brain cells. And there's

24:29

been a debate for a long time about

24:30

whether that little trickle happens in

24:32

humans or not. Still unresolved. But in

24:34

rats, what you can see is that exercise

24:37

causes the trickle to increase. If you

24:39

stick the rat on the wheel and it's

24:41

doing physical exercise, you get more

24:43

new brain cells. Now, we don't know for

24:45

sure that this happens in humans, but

24:48

lots of things about physical fitness

24:50

and exercise matter a lot to the brain.

24:52

This is nothing new. Exercise, sleep,

24:54

diet, these are really important things

24:56

for keeping the health of this organ. Is

24:58

there anything else that's important to

25:00

know for someone that is trying to

25:02

change and improve and keep their brain

25:03

in a healthy state as they age that we

25:05

haven't touched on?

25:07

>> There is something that all of us

25:09

are thinking about, which is, um,

25:11

social media and the internet in

25:13

general. I do think one of the

25:14

interesting things about the internet

25:16

and social media is that if we were

25:19

growing up in a village 500 years ago,

25:22

you just know the people in the village

25:24

and what they can do and so on. But

25:25

let's say no one in the village was an

25:27

entrepreneur or a neuroscientist. And so

25:31

we can't even picture that as a

25:33

thing. We don't know anything about

25:34

that. One thing that the internet has

25:37

done for kids growing up in the digital

25:39

age is that you get a lot more

25:40

exposure to things. You have so much

25:42

more exposure. I actually think this is

25:44

one of the positive things that I would

25:46

say about social media is that you not

25:49

only get exposure, wow, that kind of

25:51

thing is possible and that kind of thing

25:52

is possible, but you also have people

25:54

teaching you how to get there.

25:56

>> They say like, hey, I'm a fitness

25:57

influencer and I'm going to show you

25:58

exactly how to do the thing. Or, you

26:00

know, you say, "Hey, here's exactly how

26:02

you start a business." Or I say, "Hey,

26:03

here's the route that you go through

26:05

undergrad and grad school to become a

26:07

neuroscientist." And that's great. I

26:08

mean, there's just so much more

26:11

uh of a talent window now that

26:13

everyone gets exposed to. So, I think

26:14

that makes a better brain.

26:16

>> What are we doing to our children that

26:18

you think we probably shouldn't be doing

26:19

as it relates to brain development?

26:22

>> Here's the thing that's really important

26:23

about this debate is that nobody really

26:26

knows. And I'll tell you why. It's

26:27

because to do anything in science when

26:29

you're saying something about a group,

26:31

you need to have a control group that

26:32

you're comparing against. And when it

26:34

comes to asking the question of, hey,

26:36

kids growing up now with social media or

26:38

the internet, how do they compare to

26:40

other brains of kids who don't grow up

26:42

with that? Well, we don't have a control

26:43

group unless you look at kids who are

26:45

incredibly impoverished or let's say

26:48

Quakers who don't believe in technology.

26:51

And with both those groups, there's a

26:52

hundred other important differences. So,

26:54

you can't just say, "Oh, look, I'm

26:56

comparing to this kid who grew up

26:57

without food and I'm going to say

26:59

there's this difference." Who the heck

27:00

knows why the difference is there? Even

27:02

a generation ago, there's so many

27:05

differences in terms of diet and

27:06

pollution and politics and blah blah

27:08

blah, like everything, that you can't

27:10

do it. So I only mention this because

27:13

I think it's very important. A lot of

27:14

people pipe off with things about, oh, the

27:15

younger generation, their brain, this, that,

27:17

but we don't actually know and I will

27:20

tell you that I happen to be a cyber

27:23

optimist on this point about what

27:25

growing up with the internet does for

27:27

young people. I think it's going to make

27:28

them much smarter than the generation

27:30

that came before. And here's why. It has

27:32

to do with the size of the intellectual

27:36

diet that they can bring in. So when I

27:38

was a kid, I grew up pre- internet. You

27:40

know, I wanted to know stuff. So my mom

27:42

would drive me to the library, which was

27:45

25 minutes away, and I would pick up the

27:46

Encyclopaedia Britannica and I would flip

27:48

through it and hope they had an article

27:50

about the thing that I wanted to know

27:51

about. And that's how I was able to get

27:53

my little straw of knowledge. But now

27:57

kids are growing up with access to

28:00

anything they're interested in. And this

28:02

is so good for the brain. And from a

28:04

plasticity point of view, the reason

28:06

this matters is because change happens

28:08

in the brain when you are curious about

28:11

something. So when a kid asks a question

28:13

to Alexa or Siri or whatever and they

28:15

get the answer, that sticks because they

28:18

have the right cocktail of chemicals

28:19

going on in their head. In contrast,

28:21

when I grew up, I learned tons of just

28:23

in case knowledge. I mean, that's all

28:25

that the teachers could teach us is just

28:27

in case you ever need to know this fact,

28:28

here it is. But kids are in a really

28:31

great situation now. So, there are pros

28:33

and cons to to all this stuff, but I

28:35

think I'm very optimistic about what

28:38

this means for the warehouse of

28:41

knowledge that kids can build up

28:42

now. And by the way, I saw an interview

28:44

with Isaac Asimov in 1988. He was the

28:48

great science fiction writer who wrote

28:50

Foundation and so many other books. And

28:52

he was saying on this show in 1988, he

28:55

said, "Look, I envision a day when there

28:59

will be one central supercomputer and

29:01

every house will have a cable running to

29:03

that supercomputer and you can ask any

29:05

question you want and it knows the

29:07

entirety of humankind's knowledge on

29:09

that computer." You know, what he was

29:11

foreseeing here was the internet. He got

29:12

the details wrong, which doesn't matter.

29:14

The idea is he saw how this would be so

29:17

incredible for education

29:20

because he pointed out look in any

29:21

classroom it's going too fast for half

29:23

the kids, too slow for the other half of

29:24

the kids and if you could just pursue

29:27

the sphere of humankind's knowledge if

29:29

you could enter in whatever door you

29:32

wanted to, that's the way to do it,

29:34

because you'll be motivated. Now, he

29:36

wasn't talking about brain plasticity or

29:38

anything, but this is exactly what I'm

29:39

saying: from a brain plasticity point of

29:41

view, it really matters.

29:43

I'll just mention something, which is: a

29:46

lot of people are concerned that, oh, with

29:48

AI we're going to get lazy. We

29:50

won't, you know, know how to do anything

29:51

anymore because we can outsource it. It

29:53

just so happens that I love doing home

29:54

improvement. I'm always fixing my house.

29:56

I have 3xed myself in the last half year

30:00

because of AI because I take a picture

30:02

of something. I say hey I've never seen

30:03

this kind of thing before. How does this

30:04

work? Whatever. And ChatGPT says, oh, you

30:07

do this and you take this out and here's

30:08

the bolt and blah blah. It's not me

30:10

outsourcing it. It's me being curious

30:12

about something and so I remember how to

30:14

do everything now. I know how to do much

30:16

more than I used to because I like it.

30:19

>> What about the... there's been a couple

30:21

of studies that have come out that say

30:22

things like your brain's going to

30:23

atrophy if you don't continue to write

30:25

or um if you just defer all of your

30:27

learning to things like ChatGPT or other

30:29

AI models. Um, I guess one of the

30:32

areas that I think in one of the

30:34

studies, was it a Stanford study that

30:36

everyone was talking about where the

30:38

participants used Google and AI and then

30:41

they'd learned something themselves.

30:43

>> But one of the things I've wondered is

30:46

if I'm going through my business life

30:48

and I'm encountering hard problems and

30:50

every time I encounter a hard problem, I

30:51

drop it into an AI. The AI spits out a

30:54

text-based answer. I copy and paste that

30:56

and send it as my response. Presumably

30:59

there's some kind of important part of

31:01

the learning cycle or the, you know,

31:03

neurological development that I'm, like,

31:05

foregoing there, I'm missing, that I

31:08

probably should... You know, you said

31:09

earlier about doing hard things. What I'm

31:11

doing there is I'm avoiding the hard

31:12

thing, which is, like, thinking about it

31:13

and trying to understand it.

31:15

>> Yeah, here's, I think, the really important

31:17

distinction: there's vicious friction in

31:20

our lives and there's virtuous friction.

31:22

So vicious friction is all the stupid

31:25

stuff that you have to do like hey

31:27

Stephen for your business I need you to

31:28

copy this spreadsheet over here and

31:30

fill in all these cells and do your

31:32

taxes and whatever. Okay, that, if we can

31:35

push that off to AI, is massively

31:37

important for improving human lives.

31:40

There's really no benefit in vicious

31:41

friction. But virtuous friction is, hey

31:45

Stephen, I really want you to think

31:46

about what is the optimal way to do this

31:49

business. What is the best structure for

31:51

this? How do we actually go D2C? How

31:54

do we go B2B on this? What's

31:56

the approach here that we're going to

31:58

take that you haven't done before that

32:00

would be amazing? That's virtuous

32:03

friction because you're really using

32:04

your brain to learn stuff that way. So

32:06

that's the first distinction that

32:08

matters is get rid of all the busy work.

32:10

There's no honor in that. I mean I'll

32:13

just mention in the 1990s there was this

32:16

big debate about whether we should have

32:17

kids use desk calculators or not. And

32:20

thank god that finally got resolved and

32:21

we let kids use calculators so that we

32:23

can learn. You know, we can spend

32:25

a couple days learning long division,

32:26

but you don't have to spend six months

32:27

on it because who cares? With the

32:29

virtuous friction, there's real

32:31

opportunity to surf the wave of AI so

32:35

that you are figuring out these tough

32:37

problems with the aid of somebody who

32:40

cares about your problem and is willing

32:42

to talk with you 24/7 and never gets

32:44

tired of talking to you about it. And so

32:46

you are not just copying and pasting,

32:48

but you're working with the AI to come

32:51

up with ideas that were beyond what you

32:53

would have come up with. Because I

32:55

mentioned earlier about internal models,

32:57

we have pretty narrow fence lines and

32:59

you can think of all these things, but

33:01

you don't even know what you don't know.

33:02

So, if you can have somebody who's

33:04

willing to talk with you, an expert in

33:06

all of humankind's knowledge, willing to

33:08

talk with you about it as much as you

33:10

want, there's a real opportunity there

33:12

to have a synergy where collectively you

33:16

both come up with a better idea than

33:18

either of you could have alone. But is

33:19

there a way for that relationship to

33:21

take place so that I actually benefit?

33:22

Because, you know, in the example I

33:23

gave, I just take the question I was

33:25

asked, I put it into an AI, it gives me

33:27

an answer, I copy and paste it back to

33:28

the person that asked me the question.

33:30

>> That would happen if you really didn't

33:32

care about the person asking you the

33:33

question or the question. I mean

33:35

>> I mean this is what a lot of people are

33:36

doing. Like, I get so many emails because

33:37

you know we interview a lot of

33:38

candidates who join the business and so

33:39

I see tens of thousands of emails

33:41

sometimes a week that I mean I don't see

33:43

all of them but the ones that I see I

33:44

often know that you know because we've

33:46

sent them five questions or a task and I

33:49

look at it and go, I can almost

33:51

predict the exact model that sent it to

33:53

me because they all have a different

33:55

personality so I go oh this one the

33:56

person put it into Gemini or this one the

33:58

person put it into ChatGPT. Yeah,

34:00

exactly. And it's full of contrastive

34:02

constructions, like

34:04

>> it's not this, it's that. Yeah, exactly.

34:06

And then the em dashes. Exactly.

34:07

>> I'm really asking like is the person

34:09

that did that benefiting from it?

34:11

>> No.

34:12

>> Well, no, but for a couple reasons. One

34:13

is that, you know, it

34:16

triggers your red flag and so that does

34:18

not do anyone any good. I see so many of

34:20

my colleagues posting on LinkedIn these

34:22

very obvious AI things and it irritates

34:25

me because I feel like I'm not going to

34:26

spend my time reading that because of what I

34:30

call the effort phenomenon,

34:32

which is, um, in psychology we care a

34:35

lot about things that seemed like they

34:37

took a lot of effort and there's

34:38

something about seeing an AI post that's

34:40

just irritating because it's so

34:42

obviously AI

34:43

>> that's a really interesting idea the

34:44

effort phenomenon

34:45

>> Yeah, I've been writing about

34:47

this for a while because um it turns out

34:48

there are psychology studies where

34:50

if I offer you two pieces of art and one

34:52

of them looks like, you know, let's say

34:53

it's a a red dot in the middle of a

34:55

white canvas and the other one is, you

34:57

know, bottle caps stacked up and glued

35:00

in this great shape or whatever, you'll

35:02

pay you'll pay much more for the thing

35:03

that looks like it took a lot of effort.

35:05

People will pay more for a real diamond

35:08

than a synthetic lab grown diamond,

35:10

which is exactly the same thing. It's

35:12

just carbon in the matrix. But they feel

35:14

like, oh well, mother nature took

35:15

hundreds of millions of years of effort

35:17

on this one, but not over here. It just

35:19

took a few days in the lab. So, there's

35:21

a million ways where we care about that

35:23

a lot. When it comes to this AI thing,

35:26

um, yes, anybody who's just popping back

35:28

something to you, it just feels like,

35:30

all right, they took the the path of

35:31

least resistance, and I'm not so

35:33

interested.

35:33

>> I want to know from a neuroscience

35:35

perspective whether they benefit.

35:37

>> Presumably, they don't benefit too much

35:38

either. I mean, it's hard to know

35:40

exactly how many times they went back

35:41

and forth with it. They could have said,

35:43

"Hey, Chad GPT, thank you for this, but

35:46

I'm kind of more of this person.

35:47

When I really think about it, this is

35:49

the thing that inspires me." Not

35:51

what you suggested. So somebody

35:52

could put effort into it. It's just that

35:54

we can't know that when we get the AI

35:56

response. It seems to be a pretty

35:58

consistent principle of life generally

35:59

that like when you do something hard or

36:02

when you put in effort, as you say, you

36:03

tend to get back like an equal and

36:05

opposite return, relatively. So I

36:08

would think that if I fought through,

36:11

you know, maybe even using AI as a

36:13

companion, but I fought then to write it

36:15

out myself instead of just copying and

36:17

pasting.

36:18

>> Yeah.

36:19

>> One of the things I've learned from

36:20

doing this podcast and all these

36:20

episodes is everything is a trade-off.

36:24

>> Yeah.

36:25

>> And if you don't know what trade

36:26

you're making, then you're often at

36:29

great risk. And so like some of my

36:31

friends will say, "Oh, I take this pill

36:32

and it's amazing. It does all these

36:33

things for me. It's the most amazing

36:34

thing ever. I can just focus for 24

36:36

hours a day and I'm so productive now.

36:38

And I go, "What's the what's the

36:39

downside?" And they go, "Oh, there's no

36:41

downside." And I go, "Hm." Like, so

36:43

that's what I mean. It's even worse when

36:45

you don't you don't know the trade

36:46

you're making. And so with AI, I go,

36:47

"Okay, if it's making me wildly more

36:50

efficient or productive, what trade am I

36:53

making?" I think understanding this it's

36:56

probably not two categories but a

36:58

spectrum from vicious friction to

37:00

virtuous friction but really paying

37:02

attention to what is virtuous friction

37:04

what would make me a better person if I

37:07

actually put the effort into this that

37:09

matters a lot and I will say for us as

37:12

professors, for you looking for job

37:15

candidates, we need to change how we're

37:17

asking the questions. If we just say, hey,

37:19

answer these five questions, of

37:21

course everyone's going to use it. For

37:22

example, in my classes at Stanford,

37:24

I don't have people turn in a final

37:26

paper anymore. That was from a previous

37:29

life before AI. Now I have them do

37:32

projects as their final thing where

37:33

they're uh you know running an

37:35

experiment on something. And of course

37:36

they use AI to help them generate some

37:39

of the issues, but they have to deal

37:40

with other people and look at the data

37:42

and figure out what's wrong and that

37:43

kind of stuff. I worry that it's getting

37:44

into the age of, you know, the whole

37:46

calculator thing you said, where maybe

37:48

actually it is that now you need to assess

37:50

them on their ability to use the AI,

37:52

not to succeed without it.

37:55

>> Yeah, agreed. This is the whole game for

37:57

all of us, I think, is figuring out how

37:58

to surf this wave of AI where it can

38:00

make us superhuman. We can just be

38:02

better, so much better than anything we

38:04

ever were doing before because we have

38:07

immediate access to knowledge and facts

38:09

that either we had forgotten or we never

38:11

knew existed. And so we should be

38:13

surfing that wave. So I totally

38:15

agree with you on that point. If you can

38:16

figure out how to change your interview

38:18

questions so that you're seeing, hey,

38:19

can this person really get up to speed?

38:21

With everything you know about learning

38:23

and neuroplasticity and expanding one's

38:25

brain, is there anything else you can

38:28

say to the audience about how they

38:30

should use AI so that they become a

38:32

superhuman?

38:33

>> Interesting. You know, look, I have

38:35

been talking to my friends about this

38:36

issue a lot lately and I mentioned how

38:38

I've become so much better at home

38:40

improvement stuff. I just know so much

38:42

more. Each one of my friends has

38:44

something like that where like, hey, you

38:45

know what? I've actually gotten so much

38:47

better at this super random thing that I

38:49

never even thought I, you know, I never

38:51

thought about it explicitly, but because

38:53

I'm always asking AI questions about

38:55

that and it's giving me the answers.

38:57

It's not simply that it gives me the

38:59

answers and I forget it. It gives me the

39:01

answers and I remember it. I become

39:03

better and better because it's like the

39:04

way that Alexander the Great had

39:06

Aristotle as his tutor and could ask him

39:09

anything and learn great stuff from him.

39:11

We've all got Aristotle in our pocket

39:12

now and we can become better at the

39:15

things that we want to do, the things

39:17

that resonate with us for whatever

39:18

reason. If everyone's got Aristotle in

39:20

their pocket, how does one create an

39:22

edge?

39:23

>> I think it has to do with we're all just

39:25

going to be running faster. In the same

39:27

way that when Steve Jobs introduced

39:28

Apple computers, he said this is like a

39:30

bicycle for the mind. What he meant by

39:32

that was that for millions of years

39:34

we've been walking bipedally and then

39:37

just in the last nanosecond of evolution we

39:39

invented the bicycle and suddenly humans

39:42

can move faster because of the bicycle

39:44

and he said having a personal computer

39:46

is like a bicycle for the mind and I

39:49

think of AI now as like a motorcycle for

39:51

the mind. It allows us to move so

39:55

much faster so now it's a motorcycle

39:56

race and there will be people who are

39:58

much faster than other people because

40:01

they're really using that optimally.

40:03

>> And that's what I mean. It's like how do

40:04

I create an edge versus whoever I'm

40:06

competing with in whatever industry I'm

40:07

in.

40:08

>> Well, for sure the people who are just

40:09

copying and pasting the AI slop, it'll

40:12

be easy to beat that crowd. But

40:15

otherwise, I think it's just a matter

40:16

of, hey, these are the newest things.

40:18

It's like in history when the new sword

40:20

gets invented or the new gun or the new

40:22

cannon, you know, you have to keep

40:24

improving and using that. And that's

40:27

what's going on now with AI

40:28

>> And from a neuroscience

40:29

perspective, if I wanted to use AI,

40:33

based on all these things you've told me

40:34

about novelty and all these other points,

40:36

to expand the connections across

40:38

my brain and give me a big cognitive

40:40

reserve,

40:41

what might I install as a practice

40:43

every week when I'm speaking to my AI?

40:46

Oh, ask it questions that you're curious

40:47

about. About anything. Just asking

40:50

questions. Here's one thing I do all the

40:52

time. I'll say, "Hey, I've been thinking

40:54

about this. You know, on my podcast, I

40:56

do a lot of monologues and so I'll start

40:59

talking to it and I'll say, "Hey, I've

41:01

got this idea that I'm thinking about.

41:02

What if blah blah blah blah." And then

41:03

I'll say, "Here's my idea. Give me pros

41:05

and cons." You know, tell me why this is

41:08

wrong. And I do that pretty much with

41:10

everything that I ask it if I'm

41:12

proposing some, you know, stupid seed of

41:14

an idea and it really gives me the

41:16

counter arguments and I really engage

41:18

with it. That is the important part, I

41:21

think. And by the way, I just want to

41:22

say I think for the next generation that

41:24

we're teaching this, there are really only

41:27

two things we can teach because all the

41:29

details of, you know, hey, let's teach

41:31

computer programming or something,

41:32

that's probably already gone as a useful

41:34

thing. So what we can teach is critical

41:37

thinking and creativity. That's it. I

41:41

think that's such an important point,

41:42

this point about asking your AI why you

41:44

might be wrong.

41:45

>> Yeah. I I think I've had most of my

41:47

paradigm shifting moments when I've come

41:49

to an AI model that I was using with

41:52

very high conviction. And the

41:54

prompt that I think is most sort

41:56

of expansive in terms of my intellectual

41:59

knowledge is when I say to it, be

42:02

brutally honest about your opinion.

42:04

Think for yourself and be objective and

42:06

tell me where my blind spots are.

42:09

There's something innate within us

42:10

all where we don't actually want to be

42:14

wrong. We often, as a natural

42:16

reflex and this is why people get really

42:17

sort of trapped in echo chambers of

42:18

political opinion and you know Leon

42:20

Festinger talked about this idea of

42:21

cognitive dissonance when something you

42:23

believe contrasts with new information

42:26

and how it makes you feel uncomfortable

42:28

There's something, when I type that out,

42:29

when I love the idea or the thing

42:31

I've written or the memo I've written

42:32

this new idea, and I go, "Tell me why

42:35

I'm completely wrong," and it

42:36

eviscerates me. It is both uncomfortable

42:40

but it feels incredibly important

42:42

because then it's like I've

42:44

grown. But these AIs, they're

42:47

programmed almost to like kiss my ass.

42:49

>> Yes. Although, you know, ChatGPT

42:52

released a very sycophantic version, I

42:54

don't know, maybe a year ago. Meaning it

42:56

compliments you. You give some idea and

42:58

it says, "Oh, Stephen, that's the best

43:00

idea I've ever heard. You're a genius

43:01

and blah blah." And that didn't last

43:03

very long, that model, because nobody

43:05

actually liked it. So, you're exactly

43:07

right. And I'm sure most listeners

43:09

know this, but you can tell your AI to

43:12

be brutally honest with you all the

43:14

time. You can tell them to do that all

43:15

the time and it'll do that. So you can

43:18

establish the kind of person

43:19

that you're talking to. Here's the

43:21

thing. You're right. Of course, people

43:22

don't like to be wrong. It can be

43:24

socially embarrassing. It can be

43:25

uncomfortable. And yet, there's

43:27

something very different when you're

43:28

talking to your AI. It's a very private

43:30

thing. And you say, "Hey, tell me why

43:31

I'm brutally wrong." And when it tells

43:33

you, you think, "Oh, thank God it's

43:34

telling me that instead of like a real

43:36

human." So I I think a lot of that is

43:39

alleviated with AI. We don't feel as

43:43

bad about being wrong there.
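For anyone who wants to make that standing instruction permanent rather than retyping it, here is a minimal sketch using the OpenAI Python client; the model name and the exact wording are illustrative assumptions, not anything prescribed in the conversation.

```python
# Minimal sketch: pin "be brutally honest" as a standing system
# instruction so every reply starts from that persona. The model
# name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model is called the same way
    messages=[
        {"role": "system",
         "content": "Be brutally honest and objective. Point out my "
                    "blind spots. Do not flatter me."},
        {"role": "user",
         "content": "Here's my idea: ... Give me pros and cons, and "
                    "tell me why this is wrong."},
    ],
)
print(response.choices[0].message.content)
```

Most chat apps expose the same idea as "custom instructions," so the code is just one way of establishing the kind of interlocutor you are talking to.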

43:44

>> As you were saying that, I just went on

43:45

ChatGPT and I typed this in: Is my joke

43:49

funny? And the joke I typed in is: Knock knock.

43:51

Who's there? Lettuce. Lettuce who? Let

43:54

us in and I'll tell you.

43:56

>> Okay. You didn't laugh. I didn't laugh.

43:58

>> Okay.

43:58

>> Chapati said, "Yes, it works as a joke.

44:00

solid structure, uses the classic pun

44:03

payoff, which is exactly how most knock-knock

44:05

jokes land. And then it's done a

44:06

laughing emoji. I then said, "Be

44:08

brutally honest and completely

44:10

objective. Was that funny?" It said,

44:13

"It's not very funny."

44:16

Interesting. You know, but that's

44:18

interesting because it depends, right? A

44:20

little child actually finds that joke

44:22

funny, and for a little child, they

44:24

then get to repeat that to their

44:26

classmate. They're learning how to do a

44:28

joke and so on. So I'm not sure

44:31

I think there's a single answer to

44:32

whether that can be funny or not.

44:34

>> But the interesting thing is it's just

44:36

reinforcing what I already believed. And

44:38

therefore when we think about growth or

44:40

having a growth mindset if someone's

44:42

just always reinforcing what you already

44:44

believe and know, I don't know if it's

44:46

ever going to be a growth mindset. I

44:47

mean I just asked it again. I said be

44:49

really honest and it said it's

44:50

absolutely not funny.

44:52

>> Yeah. But remember, all it's doing is

44:55

it's just a statistical parrot. And

44:57

so when you say be brutally honest, it

44:59

thinks that's what it should answer.

45:02

>> Also: "Be even more honest." It says it's

45:03

basically not funny at all and you

45:05

shouldn't say that to people.

45:06

>> Okay.

45:06

>> And it says comedic originality 1 out of

45:09

10. Likelihood of real laughter 1 out of

45:10

10.

45:11

>> Well, that's quite good. That's

45:12

quite accurate. Um, here's the thing.

45:15

I've been thinking about this issue a

45:16

lot about whether AI can be funny. And

45:19

at the moment, it can't be. It's

45:22

great at repeating jokes, but it doesn't

45:24

understand humor on its own. What it

45:27

knows is that if you ask it to make up a new

45:29

joke, what it'll do is have, you

45:31

know, the first guy walks in the bar,

45:32

then the second guy walks in the bar and

45:34

does X, and that establishes the

45:36

pattern, but then, for the third guy, it'll

45:38

break that pattern, which is the

45:39

structure of a joke, but it doesn't know

45:42

how to break the pattern in a way that's

45:44

funny. It's just the third guy does some

45:45

random thing. So AI as it stands now,

45:48

the way it's structured with what's

45:49

called a transformer model, doesn't know

45:52

how to think of the punchline and then

45:54

go back and make the joke lead to that

45:56

punchline.
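A toy sketch of the constraint being described: a standard autoregressive model emits tokens strictly left to right, each chosen only from the prefix already generated, so there is no step where a punchline is fixed first and the setup written backwards from it. The bigram table below is a made-up stand-in for a learned next-token distribution, not a real transformer.

```python
# Toy stand-in for autoregressive decoding (not a real transformer).
# The point is the sampling order: each token depends only on the
# prefix, so the "punchline" is never planned in advance.
import random

random.seed(0)

NEXT = {  # invented next-token options standing in for a learned model
    "<start>": ["a"],
    "a": ["man", "duck", "bar"],
    "man": ["walks"],
    "duck": ["walks"],
    "walks": ["into"],
    "into": ["a"],
    # "bar" has no successors, so generation simply stops there
}

tokens, prev = [], "<start>"
for _ in range(8):
    options = NEXT.get(prev)
    if not options:
        break
    prev = random.choice(options)  # conditioned only on what came before
    tokens.append(prev)
print(" ".join(tokens))
```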

45:57

>> A lot of people don't either.

45:59

>> Do you know what I mean? Like I say that

46:01

not in an offensive way, but just to say

46:02

that like

46:03

>> I don't know. I often hear the claim

46:04

that AI could never be creative.

46:06

>> It's massively creative. Here's why.

46:09

Creativity in the brain, all creativity

46:12

is this: you absorb your world, the whole

46:14

world around you, every experience

46:15

you've ever had. And then you're bending

46:17

and breaking and blending those

46:19

cognitive concepts into new remixes.

46:22

That's all creativity is. And you're

46:24

doing that all the time. Whether you're

46:26

just trying to think of what to say next

46:27

or what recipe to make next or what

46:29

patent to do or what company to start,

46:31

you're just remixing the stuff that you

46:33

already know. And that's why, you know,

46:36

I don't know, take Beethoven, he could

46:38

have written any kind of music that was

46:41

being done anywhere in the world. But of

46:42

course, he didn't. That's what he

46:43

grew up with: the music of his local

46:45

culture and so on. What we have now is a

46:48

much broader diet as I mentioned before

46:50

where we can get everything going in.

46:52

But the point I want to make here is

46:54

that AI that's what it does. It remixes

46:57

stuff that's come in. So AI is massively

46:59

creative. The part of creativity that AI

47:01

can't do right now is selection. Meaning

47:05

it can generate a hundred pictures, but it

47:07

doesn't know which one to pick. It

47:08

doesn't know which one is going to be

47:09

the most appealing to you. But it can

47:12

remix beautifully.
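A minimal sketch of the division of labor being described: generation as cheap remixing of absorbed ingredients, with selection left to a person. Everything here, the ingredient lists and the count, is a hypothetical placeholder.

```python
# Sketch of "massively creative generation, missing selection".
# The ingredients and counts are hypothetical placeholders.
import random

random.seed(1)

SUBJECTS = ["fox", "lighthouse", "violin", "astronaut"]
STYLES = ["watercolor", "neon", "charcoal", "mosaic"]

def generate_variants(n):
    """Stand-in for a generative model: remix what it has absorbed."""
    return [f"{random.choice(STYLES)} {random.choice(SUBJECTS)}"
            for _ in range(n)]

candidates = generate_variants(100)  # remixing is the easy part
for c in candidates[:5]:
    print(c)
# The missing piece is selection: nothing here can rank the hundred
# outputs by which one a particular person will actually love.
```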

47:13

>> But neither do humans, right? So if I

47:15

asked an intern to make me 100 pictures,

47:18

I mean, I could get my AI to pick one,

47:19

but neither the intern nor

47:21

the AI would know which one I loved.

47:23

>> The intern would have a much better shot

47:25

at it. And as the intern is there for a

47:27

while, he or she becomes quite good at

47:29

getting, oh, okay, I get Steven's taste.

47:31

It would be this one.

47:32

>> And the AI can't learn what my

47:33

taste is. I don't think the AI could

47:35

learn that about visual images because

47:37

when it generates the pixels, it's doing

47:39

this, you know, this magical stuff under

47:40

the hood where it's deciding which

47:42

pixels and how they diffuse together

47:43

and, you know, mix the image, but it

47:45

doesn't know how to read that image

47:47

like, oh yeah, the way this is and blah

47:50

blah, that'll really appeal to Steve.

47:52

It's not seeing the image except

47:54

as a bunch of pixels. Hm. Hm.

47:56

>> You need to be a human for that

47:58

>> Cuz I was doing an experiment

48:01

recently where I took our behind-the-

48:02

scenes channel, which is a 30-minute-long

48:04

video. I dropped it into Gemini and I'd

48:06

say things to it like predict where

48:07

people would drop off on the video and

48:10

then we upload the video to YouTube. We

48:12

get the retention data back and Gemini

48:14

in the last two times that I've done

48:16

it, has a 100% record of knowing that at

48:18

minute 7, where [insert person] talked for

48:22

too long and might have been a bit more

48:24

salesy, might have tried to sell a hoodie

48:26

for example. In that part it would say

48:29

you're going to lose people here and it

48:30

would, and it would very accurately say why. It

48:32

would say, because you talked for

48:34

74 seconds and it was jarring versus the

48:38

the moment that came before it. And

48:40

when I feed the AI, let's say,

48:41

thumbnails and say which thumbnail is

48:43

going to perform the best. We did a test

48:45

recently where we put four thumbnail

48:47

test results that we knew the answer to

48:49

into Gemini and said which one's going

48:50

to win on YouTube A/B testing, and it got

48:53

100% accuracy predicting, on data we

48:56

already had, which one would win. And so

48:59

now I keep having these

49:02

paradigm-shifting moments where I thought only

49:03

humans could do that. But

49:05

increasingly the AIs that we're

49:08

experimenting with are making better

49:10

creative decisions than I can make

49:12

myself, if the outcome of that

49:14

creative decision is which one is people

49:15

going to prefer.

49:16

>> Yeah.

49:16

>> I'd say a year ago that wasn't the case.

49:18

>> Okay. So I totally agree with you. But

49:19

but let me just mention one thing which

49:21

is fascinating which is that often the

49:24

way it's doing it is not at all the way

49:25

that a human would do it which might be

49:27

fine for our purposes but the data and

49:30

the way that it's picking up on it... It

49:32

might be something about, you know, how

49:33

much (I'm making this up) how much

49:35

green was in the YouTube thumbnail image

49:37

or how much red or whatever the

49:40

thing is or just noticing that there's

49:42

big font versus smaller font or

49:44

whatever. The next time you try it, it

49:47

says, "Oh, yeah, this thumbnail is going

49:48

to be great." And it's some ridiculous

49:50

thumbnail that doesn't make any sense to

49:51

you as a human, nor to your fellow

49:53

humans, but it might say, "Oh, yeah,

49:55

this would be great." Because it's

49:57

judging things on very weird dimensions

49:59

that we can't always see. You know, the

50:00

example you gave about maybe it's cuz

50:02

the text is bigger or the color red, but

50:04

those are the same factors we think

50:05

about as a human. We know

50:08

that if the font is bigger, it performs

50:10

better. We know that red performs better

50:11

than green.

50:12

>> Quite possibly. But here's the

50:13

interesting thing. Human art constantly

50:15

evolves and all AI is trained on is what

50:18

has been done before and what has

50:19

worked. And so if I asked it, let's say

50:23

we composed five different songs and

50:25

said, "Hey AI, which song is going to be

50:27

better?" It's going to say something

50:28

that's right in the middle of the

50:29

distribution of popular songs. But

50:31

that's not what actually makes it next

50:33

year and the year after. It's new

50:35

things. It's new twists that nobody

50:37

has seen before. That's what we love.

50:39

That's what we seek as consumers. And so

50:41

because AI can only be trained up on

50:44

what already exists, it's never going to

50:46

get the new thing at the edge.

50:48

>> But if the AI was asked to... cuz I

50:51

think the reason why a new song would

50:52

break out, let's say, you know, a new

50:55

Drake song comes out and it's a smash

50:57

hit. If we think about that distribution

50:59

curve, so like if I draw on the graph,

51:01

you're saying that this middle

51:02

section here is what sort of AI will aim

51:04

at because it's the popular and the

51:06

known. Well, if I tell AI to make a

51:10

million songs, which is kind of what I

51:11

guess is what's going on every day

51:13

around the world, if you scattered them

51:16

on this graph, like, you know,

51:18

>> Absolutely.

51:19

>> And then the AI's most unusual song ends

51:22

up taking off. But it's just because

51:23

there's so many of them.

51:24

>> Quite right. But that's the human

51:26

selection part that we're seeing over

51:28

there. If you asked, okay, out of all

51:30

these dots, which do you think, AI, is

51:32

going to be best? It's going to have to

51:33

tell you the middle of the curve. But

51:35

the surprising part is the part that you

51:37

circled there, which is the one on the

51:38

edge is the one that humans like. Why?

51:40

Because we're constant novelty seekers.

51:43

We care about the things that are new. I

51:45

think the point I'm getting at is

51:47

that the creation of it, the creative

51:51

process is still the same, which is like

51:53

>> totally

51:53

>> AI or humans just trying a bunch of things

51:56

and then the world going, "Ooh, that

51:58

one."

51:59

>> Oh. Oh, yeah. I totally agree. This is

52:00

consistent with what I was saying, which

52:01

is that AI can be massively creative in

52:03

terms of the generation of something,

52:05

but you need humans to do the selection.

52:07

I'm only arguing the point that AI is

52:09

not good at saying, okay, I've generated

52:11

100 songs. This is the one humans will

52:13

choose. We end up saying, hey, wait,

52:16

this one is just weird and unique enough

52:18

that I really like that. It's

52:20

interesting because when you

52:21

speak to like record labels about music,

52:24

what they're often doing is getting a

52:28

format of a song that they know will

52:31

work. So they're like, "Right, so it's

52:33

got to be eight bars here. It's got to

52:34

be this here. You got to have a chorus

52:35

that's like hooky. It's got to come

52:36

back around. It's got to build up pace.

52:38

And there's like a rough format to it."

52:40

And it's no surprise that someone

52:42

like Ed Sheeran has written so

52:44

many songs for so many people.

52:45

>> Yeah. When I spent some time working

52:47

with Sony, they had a brand new boy band

52:49

in the wake of One Direction. And when I

52:51

sat with the boy band and was

52:53

introducing myself, they said

52:54

to me, "Oh yeah, so um here are their

52:55

boy band's first three songs, and

52:58

Ed Sheeran has written all of them."

53:00

And I was like, "What?" I thought I

53:02

thought like they're like, "No, Ed Ed

53:03

Sheeran's written all of them." And then

53:05

what we do is we give them to the boy

53:06

band and then the boy band sing them and

53:09

they're pretty much guaranteed to be

53:10

hits because Ed Sheeran has like a

53:11

formula. The way he writes is really in

53:15

vogue right now. People tend to

53:17

think a lot that the songs that are

53:19

number one in the charts are there

53:21

just because someone had

53:23

creative genius and of course that is

53:24

the case sometimes but there is a lot of

53:26

this writing going on and then handing

53:28

the formula over because someone has

53:30

cracked the code of a hit,

53:31

>> Right. But here's the thing, and you know

53:33

that we all know this which is that the

53:34

code never lasts. So humans have this

53:38

pull where they're always seeking something

53:41

between novelty and familiarity. So we

53:44

like things where we recognize the brand

53:46

and we recognize what the singer has

53:48

done before. But there has to be novelty

53:50

or else we're not going to go for it.

53:52

We're not going to listen to that boy

53:53

band for the next 10 years doing the

53:55

same song over and over. So you're of

53:57

course right that we, you know, we want

53:59

a bit of familiarity. We want to be

54:01

anchored, but we definitely seek the

54:03

new. This is what humans always do. This

54:05

is why car companies always release the

54:07

next model even though the current model

54:09

is perfectly fine. This is why haircuts

54:11

evolve. This is why fashion evolves

54:12

through the years. Um because we always

54:15

care about novelty. And the other thing

54:17

in the music industry that I think is

54:19

also creating a hit is I was reading

54:21

many years ago about some psychology

54:23

which you'll probably know much more

54:24

about that says exactly what you just

54:26

said which is we love something when it

54:28

is familiar but new.

54:31

>> Exactly. So the way that the record

54:33

industry and the radio industry make

54:35

something familiar is they blast the

54:37

same song at you on every radio station

54:40

for a long period of time until it

54:42

breaks past being just novel, just new

54:45

and it becomes familiar. And like I saw

54:48

this graph which shows that a song

54:50

that you'll love is right there in the

54:52

middle: it's new enough that

54:55

you're still into it, but it's

54:57

familiar now because you've heard it so

54:59

many times that you love it. And

55:01

if anyone's listening, the first time you

55:03

hear a song you might not love it as

55:04

much as once you've heard it like 20

55:06

times

55:07

>> and then at some point you've heard it

55:08

too much.

55:09

>> Yeah.

55:10

>> And it comes back down the other side of

55:11

the curve where it's now too familiar.

55:13

>> Yeah. That's exactly right. And so we're

55:15

always seeking that tension in the

55:17

middle. And yeah, companies run into

55:19

this all the time. Like sometimes they

55:21

try things that are too novel that just

55:24

completely fail. You know, Coca-Cola

55:25

tried this a long time ago with

55:26

introducing new Coke and no one liked

55:28

it, whatever. Um, and other companies

55:29

like, what was that company, BlackBerry,

55:31

with the little thumb things that

55:33

you can press the physical keyboard on

55:34

the phone. They failed because they

55:36

wouldn't change fast enough. But anyway,

55:38

companies that make it are always

55:39

staying in that sweet spot.
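That "familiar but new" sweet spot can be pictured as an inverted U over exposures. The sketch below uses a made-up functional form purely to illustrate the shape; it is not a fitted psychological model.

```python
# Illustrative inverted U: liking rises with exposure, peaks, then
# falls as a song becomes over-familiar. The formula is an invented
# stand-in, chosen only because it peaks at `peak` exposures.
import math

def liking(exposures, peak=20.0):
    return exposures * math.exp(-exposures / peak)

for n in (1, 5, 20, 60, 120):
    print(f"{n:>3} listens -> liking {liking(n):6.2f}")
```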

55:47

Whoa. What's that on your face?

55:49

>> This is my Bon Charge face mask. I've

55:51

been wearing this for some time now.

55:52

They're a sponsor of the podcast. I put

55:53

this on for 15 20 minutes a day. I can

55:55

sit here in the chair and wear it.

55:57

Boosts my collagen production. Helps

55:58

with fine lines, blemishes, my

56:00

complexion gets better, and then more

56:02

people listen to the podcast cuz I

56:03

look better. Professional-grade equipment

56:05

in such a small box. It's noninvasive.

56:08

And having sat here with so many of the

56:09

world's leading health professionals,

56:11

there's various things that I repeatedly

56:13

hear work and some things I'm a bit

56:14

skeptical about. This is one of the

56:16

things that almost all of my guests on

56:18

this show have confirmed works. It is

56:20

really, really, really effective. And

56:22

they offer fast, free shipping worldwide

56:24

with easy returns and exchanges. And

56:26

you'll also get a one-year warranty on

56:27

all of their products. And they're HSA

56:29

and FSA eligible, giving you tax-free

56:32

savings up to 40%. And you can get 20%

56:35

off when you order through my link at

56:37

bondcharge.com/doac.

56:40

That's bondcharge.com/doac.

56:43

The deal applies sitewide. I'm 100% more

56:46

productive using this app despite

56:48

spending 50% less time typing. And that

56:51

might confuse you, but let me explain,

56:53

which is exactly why I invested in

56:54

Wispr Flow. They're also one of our

56:57

sponsors on this podcast. Wispr Flow

56:58

turns your speech into text, so you can

57:01

send it in any app or device at any

57:03

time. And I promise you, it doesn't seem

57:05

to ever make mistakes. This is the most

57:08

accurate voice dictation I have ever

57:10

used after a decade of trying to get one

57:12

to work. Not only does it save me a ton

57:14

of time, it also corrects your speech if

57:16

you change your mind mid-sentence before

57:18

turning it into text on the device. I

57:19

love it and I know my team loves it too

57:21

because when I posted it in our Slack

57:22

channel asking if anybody wanted a pro

57:24

version, half the office said yes and

57:26

they had it within an hour which tells

57:28

me everything. This is the tool you and

57:30

your team need to speed yourselves up

57:32

and to capture those important ideas so

57:35

that they don't disappear. Head over to

57:37

whisperflow.ai/stephven

57:40

to download it now. That's

57:41

wispr.ai/stephven.

57:47

When you think about the brain and how

57:48

it's built and then you think about the

57:50

exact technology that they've used to

57:53

create AI, isn't it very very similar?

57:55

And if so, if it is similar, what does

57:58

that say about humans role in the

58:00

future? It's similar, but it's not the

58:02

same. Which is why with AI, you get

58:04

what we call jagged intelligence,

58:06

meaning that it can do something so

58:09

extraordinarily smart and then in the

58:10

next moment give an answer that's weird

58:12

and doesn't make any sense. AI still is

58:15

doing this. It's not yet

58:16

thinking like we think. Okay. Why? It's

58:18

because

58:20

AI as we think about it now really

58:22

started of course decades and decades

58:24

ago where people said look you've got

58:26

all these billions of cells, neurons, in

58:28

the brain that are connected to each

58:30

other. What if we ignore all that

58:32

complexity and we just say look imagine

58:34

that you have units that are connected

58:35

to each other. We're going to forget

58:36

about you know a single cell in the

58:38

brain is as complicated as a city. It's

58:40

got the entire human genome. It's

58:42

trafficking millions of proteins. Let's

58:43

put all that aside. Just imagine it's a

58:45

circle and it's connected to other cells

58:47

and each connection has a certain

58:48

strength and that's what we call an

58:50

artificial neural network. Now that went

58:53

off in its own direction and the kind of

58:55

amazing surprising part is how

58:57

successful it's been to just get rid of

58:59

all the detail but it's still super

59:02

different than what human brains are

59:03

like. So just an example, this thing I

59:07

mentioned at the very beginning about

59:08

how we're a team of rivals under the

59:10

hood. You got all these different

59:11

competing neural networks that are

59:13

trying to drive your behavior and so on.

59:15

The fact that we're emotional, the fact

59:17

that we are driven by different

59:20

appetites, whether food or sexuality or

59:22

whatever it is, but you know, with

59:24

your ChatGPT, you don't want that in

59:26

ChatGPT. So, it's just an

59:27

artificial neural network many layers

59:29

deep and it's extraordinary at what it

59:31

does, but it's so different than a

59:32

human. For example, the fact that it's

59:34

read everything on the planet and

59:35

remembers it and you haven't, you would

59:38

need to live a thousand lifetimes to

59:40

read that much. And of course, you

59:41

wouldn't remember much of it. It's

59:43

very different is the point I'm making.
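The abstraction described above, keep only units, connections, and connection strengths and discard the biology, fits in a few lines. A minimal sketch with arbitrary, untrained weights:

```python
# Minimal sketch of an artificial neural network as described above:
# units connected by weighted links, nothing else. Weights are random.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # strengths: 3 input units -> 4 hidden units
W2 = rng.normal(size=(4, 2))  # strengths: 4 hidden units -> 2 output units

def forward(x):
    """Each unit sums its weighted inputs, then applies a nonlinearity."""
    hidden = np.tanh(x @ W1)
    return np.tanh(hidden @ W2)

print(forward(np.array([0.5, -1.0, 2.0])))
```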

59:45

They both have converged on something

59:48

that we would call intelligence, but

59:49

it's a pretty different structure. Even

59:51

though AI was inspired by the brain,

59:53

that's what Geoffrey Hinton was telling

59:54

me. He was telling me that like much of

59:56

the breakthroughs that have made AI

59:58

what it is today came from understanding

60:00

how the brain works.

60:02

>> Yeah. But that's interesting because

60:04

Hinton is incentivized to say that.

60:07

But a neuroscientist

60:09

>> Incentivized to say that?

60:10

>> people doing AI of course are paying a

60:13

lot of attention to how this is

60:15

structured like the brain because before

60:17

that people would do things like

60:19

probability theory or rules or you know

60:22

they were trying to do AI by trying to

60:24

say okay if this then do that but when

60:27

people started doing artificial neural

60:29

networks that led to a lot of success

60:31

I'm only pointing out that the

60:32

artificial neural network looks a lot

60:34

like the brain on the surface. You say,

60:37

"Hey, you've got units and you've got

60:38

connections, but beyond that, there's a

60:40

lot of differences."

60:41

>> And why are those differences

60:43

significant as it relates to what's

60:44

possible?

60:45

>> Because what we've developed is a

60:48

new species essentially that is

60:50

incredibly impressive, but it ain't a

60:52

human brain. It's different than a human

60:54

brain. There may be all kinds of

60:56

similarities, things that we even come

60:57

to understand are similar, but there are

60:59

so many differences. Here's an example.

61:02

You know, we humans do one trial

61:03

learning all the time. Meaning if I say

61:06

or when you were a kid and your mom

61:07

said, "Hey, Stephen, this is a

61:09

pomegranate." You say, "Okay,

61:10

pomegranate. Got it." But you can't when

61:13

you're training up an artificial

61:15

neural network like at OpenAI or Gemini

61:18

or Anthropic, you have to give thousands

61:21

or millions of examples of everything

61:23

for it to learn anything. There's no one-

61:24

trial learning on those systems. And

61:28

they have to be trained at the cost of

61:29

billions of dollars. Then they can do a

61:31

run where you ask a question and it

61:33

answers the question. But brains in the

61:36

real world don't have that luxury of

61:38

having a training phase and then an

61:40

action phase. We have to learn on the

61:42

fly. It's very different.
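A toy contrast of the two learning styles just described: a single stored association versus a small gradient-trained unit that needs the same example over and over. Both halves are illustrative stand-ins, not how any production system is built.

```python
# Toy contrast: one-trial learning vs. gradient learning.
import math

# One-trial: a single exposure creates the mapping.
memory = {"round red fruit full of seeds": "pomegranate"}  # told once

# Gradient: one logistic unit nudged slightly per presentation of the
# same labeled example; it takes many repetitions to become confident.
w, b, lr = 0.0, 0.0, 0.1
x, y = 1.0, 1.0
p = 0.5
for _ in range(1000):
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # current belief
    w += lr * (y - p) * x                     # small nudge toward the label
    b += lr * (y - p)
print(memory["round red fruit full of seeds"], round(p, 3))
```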

61:43

>> So I guess the pertinent question

61:45

is

61:47

does it change what's possible for the

61:49

brain versus the artificial neural

61:53

networks we see in AI? Like, is there

61:55

some limitation based on what you've

61:57

just said that means this brain in

61:59

front of me, this human brain in front

62:00

of me will always be better than the AI

62:02

at something because I'm trying to track

62:04

forward about what this means for the

62:05

future of humans.

62:06

>> Yeah.

62:07

>> Um

62:07

>> I think it's an interesting question

62:09

that we'll have to see. But it's clearly

62:12

the case that we know what it is to be a

62:15

human from the inside. And when I'm

62:17

making a model of you and who you are

62:19

and you're making a model of me, we have

62:21

assumptions about what it is like to be

62:23

a human. AI only watches human behavior

62:26

from the outside. And so it can tell a

62:28

lot of great stuff, but it doesn't

62:30

really know what it is to be a human. So

62:33

if I ask it some question about what

62:35

would it be like if this or that

62:37

happened, it can answer based on

62:39

observing lots of things, but it can

62:41

only ever know from the outside

62:42

>> in terms of why that matters.

62:44

>> Yeah. Because you know if I ask my AI my

62:47

fiance's been like this today or if I

62:49

ask my best friend my fiance's been like

62:50

this today, and both of them give me

62:52

the same useful answer, it doesn't really

62:54

matter.

62:54

>> I agree with you. I'm

62:57

actually writing a new podcast on this

62:58

about what you can tell from the outside

63:00

and what you can tell from the inside

63:02

and whether that difference matters.

63:04

Look, an example: last year I

63:06

got a Tesla with full self-driving and I

63:09

was watching as it was full

63:10

self-driving. I was coming up on a very

63:12

complicated traffic situation. And I

63:13

thought, well, what's my car going to do

63:14

here? How's it possibly going to

63:15

understand? But what it did is it slowed

63:17

down and came to a stop, which was

63:18

exactly the right thing. And I thought,

63:20

oh, that's interesting. Algorithmically,

63:22

it might think of it very differently

63:24

than I am thinking about the situation.

63:26

Doesn't matter. It comes to the same

63:28

conclusion, ends up in the same place.

63:29

Yeah, I agree. We have yet to see where

63:32

these differences matter and what it

63:35

is to be a human. But I can tell you one

63:37

thing. We care about other humans. So

63:40

here's my little prediction is that

63:41

there's going to be actually a

63:42

renaissance in things like live theater

63:44

and live performances. When things

63:47

first came out like Napster, everyone

63:49

thought, okay, that's the death of

63:51

concerts. Like, that's the death of

63:53

musicians, right? But in fact, you look

63:55

at a Taylor Swift concert, gajillions

63:57

of people there paying lots of money.

63:59

Like everyone loves the thing. Why?

64:02

Because they're going to see the real

64:03

Taylor Swift in person. And I have

64:05

noticed I give a lot of talks on the

64:06

road. I have noticed an increase in the

64:08

number of talks since AI came out a few

64:11

years ago. The first thing that my

64:13

friend said to me is hey did you know

64:15

David that you can, you know, use Eleven

64:18

Labs and HeyGen and, you know, you can

64:20

make an avatar of yourself and you can

64:22

use your voice and use ChatGPT to

64:24

generate what you're going to say and

64:25

have a fully virtual version of you. He

64:28

said, my friend who gives talks too, he

64:30

said maybe we can start doing this and

64:31

do virtual talks. I said nobody's going

64:33

to want that. In fact, what's happened

64:35

is more people want to fly us across the

64:38

country to have us stand there in person

64:41

because it really matters to see fellow

64:43

humans. And I think that's only going to

64:45

increase.

64:46

>> I completely agree with you. I

64:48

think it's so funny. I did a post on

64:49

LinkedIn the other day saying that maybe

64:52

the interesting paradox or

64:54

interesting outcome of AI is that every

64:58

other iteration of technology made us

65:01

less human. And maybe the intelligence

65:04

now has gotten to a point where

65:07

>> it's now forcing us to be more human

65:10

because that is all that kind of remains

65:12

in a way, that maybe the technology

65:14

has gotten so good like social media

65:16

didn't make us more human in any

65:18

capacity. But maybe this is the moment

65:19

where it goes we've got this now

65:21

>> go do what only you as a human can do

65:23

which is like go out there Taylor Swift

65:25

and sing in front of people IRL.

65:27

>> Go and do something in the real world.

65:28

Even for, like, nurses and doctors,

65:30

maybe they shouldn't be filling out

65:31

admin and paperwork anymore. Maybe they

65:33

should be holding your hand and giving

65:34

you, you know, in real life care that

65:37

only a human could do.

65:39

>> I totally agree.

65:40

>> And so maybe that's the

65:41

positive upside to all of this:

65:44

finally, you know, we've been on this

65:45

journey with technology and finally it's

65:46

delivered upon its promise.

65:48

>> I totally agree. And by the way, you

65:49

know, AI relationships, by one estimate,

65:52

there's a billion people having

65:53

relationships with AI, like a girlfriend

65:55

or boyfriend kind of thing.

65:57

>> Okay? And so for people like us who grew

66:00

up before that existed, we think, "Oh my

66:02

gosh, that's weird." But in fact, I

66:04

think it might become helpful because it

66:06

can be a sandbox as long as we have the

66:08

proper feedback. In the end, we have

66:11

millions of years of evolution driving

66:12

us towards being with the person you

66:15

love, touching another human being,

66:16

watching the stars, taking her out to

66:19

dinner with your parents, like all you

66:21

know, we care about that. And so this

66:23

worry that people sometimes talk about

66:25

that people are just going to be on

66:26

their phone with their AI relationship I

66:28

don't think is realistic for almost

66:29

everybody because it gives us the chance

66:33

to you know hopefully sandbox some

66:35

things about relationships and get over

66:36

some dumb things with relationships and

66:38

then we can actually be with our fellow

66:40

humans.
>> The counterargument would be that

66:42

maybe there's going to be a bifurcation,

66:43

a splitting of society where some people

66:46

are going to become even more addicted

66:48

to the technology because the AI is now

66:51

much smarter at retention. Like I know

66:53

exactly what I need to say to you based

66:56

on your brain, Dr. David, to make you

67:00

not put this device down.
>> Yes. But

67:03

fundamentally, I want to be in contact

67:06

with my wife. I mean, that's the

67:09

evolution

67:11

of hundreds of millions of years is that

67:13

I want to make babies. I want to go and

67:16

eat dinner with somebody. And as

67:19

much as I might find my phone appealing,

67:20

I'm not going to sit it across from me

67:22

at a nice Italian restaurant and sit

67:24

there like that. So, I...

67:27

>> A lot of people do.

67:28

Me and my friends are at

67:29

restaurants cuz we have a rule where we

67:31

don't touch our phones when we're at

67:32

date night. And I have to look around

67:33

and I'm like, "Oh my god, like how is

67:35

how are all these guys getting away with

67:37

this?" Like, but do you see what I'm

67:38

saying? Like some people, they just

67:40

have a different sort of proclivity or

67:42

they have a different wiring which means

67:44

that you know instead of doing the hard

67:46

thing of going out there and going on a

67:47

first date and being rejected,

67:49

pornography or a virtual wife might

67:52

be a substitute for that.

67:54

>> Yeah. No, I agree with you. There will

67:55

be bifurcations. One question I don't

67:57

know the answer to, but one question is

67:59

what would that person have done in

68:02

previous generations? You know, is it

68:05

really the case that person would have

68:06

gone out and had a great successful

68:08

relationship or would they always have

68:09

had troubles relating to people?

68:12

>> Yeah, I sat with a few

68:14

neuroscientists and experts that have

68:16

studied dopamine. Dr. Anna Lembke was one.

68:19

>> Yeah, she's my colleague.

68:20

>> She's your colleague. Yeah. And she

68:22

talks a lot about how we all have

68:24

different types of addictive substances

68:28

and like you know we will think like

68:30

heroin's addictive for everybody and

68:31

alcohol's addictive and I used to think

68:33

of it on a spectrum but actually she

68:35

said like for her addiction was romantic

68:38

erotic novels.

68:39

>> Yeah. And she almost ruined her

68:40

relationship because of erotic novels,

68:42

which is something that I would read and

68:43

just throw in the bin. But so maybe

68:46

this new technology is particularly

68:49

addictive to a certain type of person.

68:51

>> Yeah, I think that's exactly right.

68:53

And I think we're going to see that with

68:54

everything. I mean,

68:55

>> the wild part about human society is

68:57

that there's so little that we have in

69:00

common, meaning everybody is really

69:03

different. And this is something I've

69:04

studied in my lab for decades is

69:06

this issue about what are the subtle

69:08

differences from person to person. Not

69:10

big things like oh this person is a

69:13

psychopath or this person has

69:14

schizophrenia but the more subtle

69:16

things. I'll just give you an example

69:18

like if I ask you to imagine, to

69:21

visualize let's say an ant on a purple

69:25

and white tablecloth crawling towards

69:28

a jar of red jelly. Do you see that in

69:32

your head like a movie or do you have

69:34

like no particular picture at all or

69:36

somewhere in between? What do you

69:38

experience?

69:38

>> An ant crawling towards a jar of jelly.

69:40

>> Yes.

69:42

>> Yeah. I see a big black ant and then

69:44

this jar of jelly is like overflowing

69:46

down the sides with a wooden lid on top

69:48

of it and the ant is almost there.

69:50

>> Oh wow. Okay. So

69:52

what you have, I'm just guessing where

69:55

you are but you are on the end of the

69:56

spectrum that we call hyperphantasia

69:58

which means you have very rich

70:00

visualization. You're like seeing it

70:02

like a picture or a movie. Is that

70:04

that accurate? Okay. I happen to be at

70:06

the other end of that spectrum called

70:07

aphantasia where I don't have any visual

70:10

images at all. I don't see

70:12

things visually in any way.

70:14

>> And it turns out the whole population is

70:16

spread evenly along this spectrum. I'll

70:18

just give a quick side note which is

70:20

that for many years I've been talking

70:22

with Ed Catmull about this. He's the guy

70:24

who started Pixar films. So he's got all

70:26

the patents on how to do ray tracing and

70:28

how to make these beautiful animated

70:29

characters, right? Ed Catmull is

70:31

aphantasic like I am. And when he learned

70:34

about this, he got really interested and

70:35

he gave the questionnaire to everybody

70:37

at Pixar. And it turns out many of his

70:38

best animators and directors are

70:40

aphantasic. They don't picture anything

70:42

inside their heads. Now this seems

70:45

surprising and strange, right? But it

70:47

turns out that if you are an aphantasic

70:49

kid, you're going to become better at

70:50

drawing because you have to really pay

70:52

attention to the subject out there and

70:54

really have a dialogue with the page

70:56

with your pencil. Whereas a kid who's

70:58

hyperfantasic might say, "Oh, I know

70:59

what a horse looks like." And just draws

71:01

it. Okay. So anyway,

71:02

>> That tracks.

71:03

>> Yeah. Yeah. So it turns out there's a

71:06

real spectrum across the population,

71:07

meaning inside your head and my head,

71:09

we're having pretty different

71:10

experiences. But I've studied this along

71:13

dozens of different axes and everyone's

71:15

got different things going on. Just as

71:17

one example, do you know about

71:18

synesthesia? Have you ever heard of

71:19

this?
>> Is that forgetting or

71:20

something?

71:21

>> No. Synesthesia is having a blending of

71:23

the senses. So someone with synesthesia

71:25

might look at letters and it triggers a

71:27

color experience in their head. So they

71:28

look at J and that triggers green and

71:29

they look at M and that triggers blue

71:31

and whatever. It's different for each

71:33

person. Or you might hear music and it

71:34

triggers a visual experience. Or you

71:36

might taste something, it puts a feeling

71:38

on your fingertips or whatever. It's

71:39

just a blending of the senses. At

71:41

least 3% of the population has this.

71:44

It's not a disease or a disorder. It's

71:45

just an alternative perceptual reality.

71:49

So if you have aphantasia, does that

71:51

mean that you can't picture your kids?

71:53

>> It means that the way I picture them is

71:55

not visually. I mean there's sort of a

71:59

very vague one, but for me it's more motoric

72:02

imagery and, you know, audio

72:05

imagery. Like I'm imagining talking

72:07

to them and being with them and being

72:08

close to them, and probably some olfactory

72:10

imagery, meaning how they

72:12

smell and the whole thing like I have a

72:14

very rich notion of what it is to be

72:16

with my kids but it's a pretty terrible

72:18

visual picture. Not much there.

72:20

>> So I imagine people at home have done

72:22

that same experiment while they were

72:24

listening. Could they picture an ant

72:26

walking towards a jar of jam and if they

72:28

find themselves on the aphantas... I can't

72:31

remember the two.

72:32

>> Aphantasic. Yeah. Or hyperphantasic.

72:34

So hyperphantasia is you can picture it,

72:36

aphantasia is you can't.

72:37

>> Yes.

72:38

>> What does that potentially suggest?

72:41

>> Nothing. Now here's the interesting

72:42

part. So we've done lots of studies

72:44

about what this translates to in terms

72:46

of your capacities in the world.

72:48

Nothing. Why does it translate to

72:49

nothing? It's because you can

72:52

accomplish tasks in a hundred different

72:55

ways. And so some people are doing this

72:56

very visually. Other people are doing it

72:59

where they're like picturing it with

73:01

their motor systems. Others are doing

73:03

it, you know, as I mentioned, with sound

73:05

or smell or whatever, or others are

73:06

doing it just purely conceptually, just

73:08

thinking through how the steps would go.

73:11

But there's nothing

73:12

obvious other than this thing I

73:14

mentioned about visual artists often

73:16

being aphantasic.

73:18

Um, otherwise you can kind of accomplish

73:20

anything.

73:21

>> I run multiple companies that have

73:23

multiple sales teams. And one of the

73:24

things as a founder of a company that's

73:26

often confusing is you find it hard to

73:28

figure out where sales are. So about 10

73:30

years ago, I started using Pipedrive in

73:32

my former company and it's also the

73:34

reason why I switched over all of my

73:35

commercial teams in my current media

73:37

company called Steven.com to use Pipedrive

73:38

as well. Not only do they sponsor

73:40

this show, but they've been an

73:41

incredibly effective way of scaling our

73:43

sales engine over the years. Pipedrive

73:44

is an easy-to-use, intelligent CRM. And

73:47

at its very core, it makes your sales

73:49

process visible through one dashboard, a

73:53

visual pipeline showing every deal, what

73:55

stage it's in, what needs to happen

73:57

next, and it's all in real time with no

73:59

delay. It doesn't magically close the

74:01

deal for you, of course, but it does

74:03

replace complexity with clarity. If you

74:05

want to join over 100,000 companies

74:07

already using Pipedrive, you can use my

74:09

link for a 30-day free trial with no

74:11

credit card payment needed. Head to

74:13

piperive.com/ceeo

74:16

to get started. That's

74:17

piperive.com/ceeo.

74:20

I'll see you over there. This is

74:22

something that I've made for you. I

74:24

realized that the DOAC audience are

74:26

strivers, and we all have

74:29

goals that we want to accomplish. And

74:31

one of the things I've learned is that

74:33

when you aim at the big big goal, it can

74:36

feel incredibly psychologically

74:38

uncomfortable because it's kind of like

74:40

being stood at the foot of Mount Everest

74:42

and looking upwards. The way to

74:43

accomplish your goals is by breaking

74:45

them down into tiny small steps. And we

74:48

call this in our team the 1%. And

74:50

actually this philosophy is highly

74:52

responsible for much of our success

74:54

here. So what we've done so that you at

74:56

home can accomplish any big goal that

74:58

you have is we've made these 1% diaries

75:01

and we released these last year and they

75:03

all sold out. So I asked my team over

75:05

and over again to bring the diaries back

75:07

but also to introduce some new colors

75:08

and to make some minor tweaks to the

75:10

diary. So now we have a better range for

75:14

you. So if you have a big goal in mind

75:17

and you need a framework and a process

75:18

and some motivation, then I highly

75:21

recommend you get one of these diaries

75:22

before they all sell out once again. And

75:25

you can get yours at thediary.com.

75:27

And if you want the link, the link is in

75:29

the description below.

75:31

I heard that, after many,

75:34

many decades of people debating this,

75:36

you might have figured out the reason

75:38

why we dream.

75:39

>> Yeah. Yeah, it's actually after

75:41

millennia of people debating this. This

75:43

is the cool part. So, okay, remember I

75:45

mentioned earlier that if you go blind,

75:49

the visual cortex of the back of the

75:50

brain gets taken over by hearing and by

75:53

touch and by other things and it's no

75:54

longer visual cortex. Well, what we

75:56

realized is that because we live on a

76:00

planet that rotates into darkness for

76:02

half the time, the visual cortex, the

76:05

visual part of your brain is at a

76:07

disadvantage. So what I realized is that

76:10

the purpose of dreaming is to defend the

76:12

visual territory from takeover from the

76:16

other senses. So every 90 minutes you've

76:18

got this very

76:21

ancient thing in your midbrain that

76:24

shoots random activity into the visual

76:26

system, and only the visual system, only

76:28

this very tiny part of the visual

76:30

system. Every 90 minutes you just blast

76:31

random activity in here and the reason

76:33

is you are just defending that territory

76:36

against takeover. Now, the reason that

76:38

all this came together is because our

76:40

colleagues at Harvard did an experiment

76:41

where they took normally sighted people

76:44

and they blindfolded them tightly for 60

76:46

minutes. And it turns out that 60

76:47

minutes was sufficient for the visual

76:50

cortex to start responding to sound and

76:53

to touch. You could start seeing that

76:55

takeover happening after 60 minutes. And

76:57

that's when we realized, wow, this

77:00

part of the brain really needs a way of

77:02

defending itself. Now, because the brain

77:05

is a natural storyteller, if you blast

77:07

random activity in there, it'll, you

77:09

know, put that together in some sort of

77:10

visual story about what's happening,

77:12

mostly based on what connections are hot

77:14

from the day. But that's why we dream.

77:18

So we dream to stop the other parts

77:20

of our brain overtaking the visual part

77:24

of our brain, um, overpowering it, and I

77:27

guess ultimately making us go blind.

77:29

>> Yeah, that's exactly right. If we lived

77:30

on a different kind of planet that did

77:32

not rotate into darkness, then we

77:35

presumably wouldn't dream.

77:37

>> Would we even need to close our eyes? I

77:38

mean,

77:38

>> Not necessarily. Yeah. It may be that in

77:41

the sleeping state, in the state of deep

77:43

sleep, the brain is doing particular

77:45

things like taking out the trash and

77:47

cleaning some things up. That might be

77:49

necessary. Who knows? But yeah, I don't

77:51

think we would need to dream. We

77:52

wouldn't need to blast random activity

77:54

in there, you know, if our eyes

77:57

were always open for example and it was

77:58

always light out

77:59

>> Are there other examples in the animal

78:01

kingdom which support this?

78:04

>> Yes, thank you for asking that.

78:06

This is why this new theory about why we

78:08

dream is taking off because we can make

78:09

quantitative predictions across animal

78:12

species. So for example in our last

78:13

paper we looked at 25 different species

78:16

of primates, apes and monkeys and we

78:19

looked at how plastic their brains are.

78:21

In other words, how flexible the whole

78:23

circuitry was and how much they dream at

78:25

night, which you can tell by looking at

78:27

rapid eye movements. You know, when you

78:28

dream at night, your eyes are shooting

78:30

back and forth like that. It's called

78:31

REM, rapid eye movement sleep. So, you

78:33

can measure that in other animals, their

78:34

eyes moving back and forth. So, we

78:37

correlated how plastic the brain is and

78:40

how much dream sleep you have. And it

78:42

correlates perfectly, which is to say,

78:44

humans, which are the most plastic, have

78:47

dream sleep all the time. And by the

78:49

way, when you're an infant, you sleep

78:50

you have dream sleep for half of

78:52

your sleep time, 50% of the time. As you

78:54

get older, you get less and less dream

78:56

sleep because you just don't need it as

78:57

much anymore. But anyway, when we look

78:58

across species, it correlates perfectly:

79:00

if you're a monkey that drops into the

79:02

world sort of already fully baked and

79:04

you don't need to have much plasticity,

79:06

you don't have much dream sleep either.
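The cross-species analysis he describes reduces to a correlation between two measured variables. A minimal sketch with invented numbers (the real values are in the paper):

```python
# Minimal sketch of the analysis described: correlate a per-species
# brain-plasticity measure with REM ("dream sleep") time. All numbers
# are invented for illustration; the actual data are in the paper.
import numpy as np

plasticity = np.array([0.9, 0.7, 0.5, 0.4, 0.2])  # hypothetical scores
rem_hours = np.array([1.9, 1.5, 1.1, 0.8, 0.4])   # hypothetical REM/night

r = np.corrcoef(plasticity, rem_hours)[0, 1]      # Pearson correlation
print(f"plasticity vs REM sleep: r = {r:.2f}")
```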

79:07

Interesting.

79:11

Seems like a very strange thing. It

79:12

sounds like it's a very strange thing

79:13

for the brain to do, but it also

79:16

is perfectly plausible based on

79:17

everything you've said.

79:18

>> Yeah. And by the way, I just want to

79:19

mention dreaming is across the animal

79:21

kingdom. Everybody dreams. All animals

79:23

dream at night.
>> Even like animals at the

79:25

bottom of the ocean?
>> Uh, yes. It's

79:27

harder to measure stuff all the way at

79:28

the bottom of the ocean. But fish do

79:30

have what is equivalent to dream sleep

79:33

where you're just zapping activity in

79:34

there. And by the way, even animals that

79:36

have gone blind, like there's

79:38

a mammal called the blind mole rat,

79:40

which lives in darkness and has eyes,

79:43

but they're blind because over

79:44

evolutionary time, they've lost vision.

79:46

But they still dream because the dream

79:49

circuitry is so ancient. This is so

79:51

ancient that all animals have to defend

79:54

themselves against the darkness by

79:56

keeping their visual systems going. And

79:58

so even though the animal went blind,

80:00

the rest of the brain didn't catch up. I

80:02

mean, that's how evolution goes.

80:03

>> It's funny because it's kind of

80:05

like evolution gave us this TV

80:10

that comes on at nighttime when the real

80:12

TV, our real life, turns off, and it just

80:14

puts on this fake TV set to keep that

80:16

part of the brain doing something so

80:18

that it doesn't deteriorate and

80:21

atrophy.

80:22

>> It's exactly right. Yeah, it's exactly

80:24

right. Which means dreams are quite

80:26

pointless outside of just protecting our

80:29

neurological matter.

80:31

>> I suspect so. It might be that the

80:34

particular pathways that it could travel

80:35

down, you know, maybe there's some

80:38

meaning there. My own suspicion is

80:40

that it's like if I went to your

80:41

bookshelf and I picked a random

80:43

book up and I flipped to a random page

80:45

and picked a random sentence. I might

80:48

find some meaning in that. I might say,

80:49

"Oh, that was just the sentence that I

80:51

needed to hear." But it's not really.

80:53

It's just that it has some meaning to

80:54

me. Anyway, the point is if you blast

80:55

random activity in there, I might dream

80:57

about something where I wake up and say,

80:58

"Oh, that was pretty useful." But the

81:01

thing that I think gets overlooked is

81:03

that most dreams are totally useless and

81:05

bizarre.
>> Dr. David, what is the most

81:07

important thing we haven't talked about

81:08

that we should have talked about as it

81:09

specifically relates to people that are

81:12

trying to improve their lives, get

81:15

better at whatever their subjective

81:16

mission is, and the brain?

81:20

>> There are probably a lot of things, but

81:22

I got to say the thing that I've been

81:23

thinking about so much lately is just

81:25

about our political interfacing with

81:29

one another. And so I do feel that

81:32

really learning the skills of dialogue

81:35

with our fellow humans where we listen

81:38

to what they're saying and try to better

81:39

understand what their internal model is.

81:42

It's not equivalent to agreeing with

81:43

them. But it is saying, "Hey, somebody

81:45

is coming from this perspective. Let me

81:48

see if I can understand that." I think

81:49

that matters a lot. And I also think

81:52

that because we're so highly predisposed

81:55

for in-groups and outgroups, it's really

81:57

useful to figure out how to complexify

82:00

those relationships. Meaning, how do you

82:02

figure out all the things that cross

82:05

cut in the relationship so that you say,

82:07

"Hey, you know what? I shouldn't dismiss

82:08

this person as a member of my out group

82:10

right away because actually

82:13

they belong to the same group I do and

82:15

they love surfing as much as I do and

82:17

they love golden retriever dogs and they

82:19

you know grew up in my hometown and

82:21

whatever. Like finding those things

82:24

explicitly helps the brain to keep these

82:28

circuits on that are involved in seeing

82:30

another person as a person. We

82:33

have all this social circuitry that is

82:36

all about understanding other people and

82:39

when things get dehumanized that

82:42

actually gets dialed way down. When we

82:44

look at you know let's say a homeless

82:46

person or a drug addict or someone who

82:49

we think of as our enemy or an out group

82:51

that gets dialed down so we don't think

82:53

of them as a person anymore. We think of

82:55

them as an object to get around. Mhm.

82:58

So, this is what I think is really

82:59

important is figuring out what we can do

83:02

to keep that social circuitry still

83:04

going, which includes the things like

83:06

eye contact and conversation. And this

83:09

is one of the most important

83:10

things we can do as citizens in a

83:13

rapidly changing world.
>> As it relates to

83:17

things like dementia, which I know is a

83:20

fear that a lot of people have. A lot of

83:21

people are suffering with dementia, I

83:23

think increasingly. In fact, if I was

83:25

trying to stave off dementia, what advice

83:27

would you give me, David?

83:28

>> Yeah, keep your brain active. Keep it

83:30

active till the day you die. Take on new

83:32

challenges. And as soon as you get good

83:34

at something like, you know, sudoku,

83:37

drop it and pick up something that you're not

83:39

good at.

83:40

>> And in simple terms, why?

83:42

>> It's because you're forcing your brain

83:43

to make changes. Otherwise, your brain

83:45

says, "Okay, I got this. I got the

83:47

world. I understand what's going on.

83:49

There's no real particular need for me

83:50

to change." And the fact is that the

83:52

structure of the brain is always

83:54

degenerating. And when you get something

83:56

like a disease like Alzheimer's disease,

83:58

it degenerates much faster. And what you

84:00

want to always be doing is building new

84:02

roadways and fashioning new paths that

84:05

had not been walked before.

84:06

>> So that there's more to degenerate,

84:09

which gives me more left over once that

84:12

degeneration begins.

84:14

Yeah, I think that's a

84:16

good way to look at it. Your pathways

84:18

are falling apart and if you can build

84:20

new pathways, which requires effort, you

84:22

have to actually care and pursue and do

84:24

the thing. Even as parts of the thing

84:26

have fallen apart, you still have ways of

84:28

getting from A to B.

84:29

>> What do I need to stay away from in

84:31

terms of chemicals or supplements, I don't

84:33

know, or food, I don't know?

84:35

>> Yeah, obviously there's just been a lot

84:36

more emphasis on getting good sleep and

84:38

good diet, and this stuff really matters.

84:40

I think that's really useful for the

84:42

brain. I mean, it's fascinating to watch

84:44

what's happened in the latest generation

84:46

in terms of alcohol consumption. I

84:48

live up in Silicon Valley and there's a

84:49

lot of people who have wineries just

84:52

north of me and they're like selling

84:53

half their acreage. It's absolutely

84:55

fascinating to see what's happening

84:56

there. I will say I have a friend

84:59

who's in her 20s who said that she's in

85:02

favor of bringing drinking back. Why?

85:05

Because she said we go to parties and

85:07

everything's so awkward and no one knows

85:08

how to talk to one another. And so

85:10

they're missing something else. They're

85:12

missing the dumb mistakes category

85:14

that we all got to enjoy growing up. So,

85:17

it is a really interesting balance

85:20

of how abstemious one wants to become.

85:23

>> David, we have a closing tradition where

85:24

the last guest leaves a question for the

85:25

next guest, not knowing who they're

85:26

leaving it for.

85:27

>> The question left for you is: what do you

85:29

wish most for our planet over the next

85:33

10 years?

85:38

>> Well, there's a whole list, a top 10.

85:40

>> Yeah. Can't be world peace.

85:44

>> You know, I think I would come back to

85:45

this piece about the complexification of

85:47

relationships, which is to say, if we

85:50

could just get a little bit smarter

85:53

about understanding people in out groups as

85:58

being humans with lives, with their own

86:00

thing going on. It doesn't mean we have to

86:03

love them or agree with them, but if we

86:06

can just get to that point, I don't

86:08

think we'll ever hit world peace, but at

86:10

least we'd have slightly less

86:11

polarization. So, I'm definitely in

86:13

favor of that and I do think it's

86:14

possible and I do think AI can help us

86:16

get there by challenging us on these

86:18

points and saying, "Hey, that group that

86:21

you've already dismissed as an out

86:23

group, what if I told you this story

86:25

about this person? What if I introduced

86:27

you to this person?" That kind of stuff.

86:29

And you know, there are all kinds of

86:31

social movements that have sprung up

86:33

that allow people of different political

86:35

opinions to come together in a room and

86:37

talk with one another. Again, it's not

86:38

that anyone has to change their mind but

86:40

they can say, hey, you know what, I really

86:42

like that person. I thought that was a

86:44

cool person, a sweet person, a nice person,

86:46

and now I understand that somebody

86:48

who I have seen with my own eyes has a

86:49

different opinion on this than I do.

86:51

>> Is that wishful thinking to some degree?

86:52

>> I don't think so because these things

86:54

are happening all over the place and

86:57

>> The macro is division, isn't it? It's

86:59

polarization, echo chambers.

87:01

I think there's now 20 social networks

87:03

or some crazy number that have more than

87:04

20 million people on them which means

87:06

that social networks are splintering off

87:08

into niches and interests and you know

87:10

there's like Rumble and Bumble and then

87:12

there's like Threads and X and Facebook,

87:14

Snap, Instagram, and what we're seeing

87:16

is more and more

87:18

>> Interest groups. And also the other thing

87:19

with algorithms is we went from having

87:22

like a social graph where if I had a

87:24

thousand people follow me, those thousand

87:26

people would see my stuff, to now these

87:27

interest graphs where it doesn't matter

87:29

if I have one follower or a million

87:30

followers, the algorithm is going to

87:32

decide who's interested in that thing

87:34

and it's going to serve it to them

87:35

because that's the most retentive thing

87:36

if you're a publicly listed company

87:38

that's driven by ad revenue. So, you've

87:40

got this algorithm that's actually

87:41

forcing you, you know,

87:43

into tighter and tighter and tighter

87:44

echo chambers. And even as someone

87:46

that's been on social media for 15 years and

87:47

ran social media companies, this is one

87:48

of the great things I've noticed is when

87:50

I had a million followers back in the

87:51

day, I would reach those people because

87:53

they'd hit follow or subscribe. Now,

87:56

even on our YouTube channel, 61% of you

87:59

don't subscribe. And please

88:02

subscribe. And that's in part

88:04

because the algorithm is now doing the

88:06

work of deciding who to show it to, who

88:09

it will

88:10

>> on the basis of who will be retained.
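To make the social-graph versus interest-graph distinction above concrete, here is a minimal Python sketch. It is purely illustrative, not any platform's actual ranking code; the Post and User types, the interests map, and the affinity scoring are all invented for this example.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    topic: str

@dataclass
class User:
    name: str
    follows: set = field(default_factory=set)      # social-graph edges: accounts you chose to follow
    interests: dict = field(default_factory=dict)  # inferred topic -> affinity, 0.0 to 1.0

def follower_feed(user, posts):
    # Social graph: a post reaches you only if you follow its author.
    return [p for p in posts if p.author in user.follows]

def interest_feed(user, posts, k=2):
    # Interest graph: follows are irrelevant. Every post is scored by
    # predicted affinity and the top k are served, because that is what
    # retains the viewer.
    ranked = sorted(posts, key=lambda p: user.interests.get(p.topic, 0.0), reverse=True)
    return ranked[:k]

posts = [Post("stranger_a", "surfing"), Post("friend", "politics"), Post("stranger_b", "surfing")]
viewer = User("stephen", follows={"friend"}, interests={"surfing": 0.9, "politics": 0.2})

print([p.author for p in follower_feed(viewer, posts)])  # ['friend']
print([p.author for p in interest_feed(viewer, posts)])  # ['stranger_a', 'stranger_b']

The contrast is the whole point: in the first feed your follow list decides reach; in the second, a per-topic affinity score does, which is why follower counts stop mattering once serving is optimized for retention.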

88:12

>> Yeah. Here's what I would say. There's

88:14

absolutely nothing new about echo

88:16

chambers because it was always the case

88:18

that your neighbors and your community

88:20

and whatever, that's what you thought

88:22

was reality. I'm actually quite

88:24

optimistic about the mere

88:25

existence of the internet because at

88:27

least we are exposed to the fact that

88:29

there are lots of different points of

88:30

view. It used to be in places like the

88:32

USSR, they controlled the media tightly

88:35

so that everything you saw was an

88:37

approved story, but now you see all the

88:40

points of view. Now, many of them might

88:42

drive you crazy and whatever, but at

88:43

least you know that there are people out

88:45

there that believe in that. And I think

88:47

that's really useful. If I had to decide

88:49

between state control where there's a

88:50

single story or seeing the whole messy

88:53

spectrum of opinions, I'd rather see the

88:56

latter.

88:57

>> What about the middle? You know,

88:58

one of the phrases that's, again, a

89:00

principle that's helped me think is that

89:01

the truth is in the middle. And

89:03

generally I try to understand what the

89:04

middle looks like. So you've got state

89:06

control over here. You've got

89:08

an aggressive algorithm that's sort of

89:09

reinforcing whatever you currently

89:11

believe.

89:12

>> Is there not some kind of middle ground

89:13

where

89:15

the algorithms have to let up a

89:17

little bit and of course we're not going

89:18

to go for state control. Here's my

89:20

prediction for 2026: there is a

89:23

market opportunity for a new social

89:25

media company to come along because

89:27

everybody is aware of exactly this

89:29

problem that you're pointing out.

89:30

Everyone hates when they surf and they

89:33

get served exactly what they're supposed

89:34

to get served and they get off after an

89:36

hour or two and they feel like they've

89:38

wasted their lives. I think there's a

89:40

real opportunity for a social media

89:41

company to come along and say, you know

89:42

what, we're not building our algorithm

89:44

like the other guys. It's not about just

89:46

trying to get engagement at any cost

89:47

with, you know, incendiary posts,

89:51

but instead we're looking for ways to

89:54

connect people. So, if you and I both

89:57

love this particular thing, this

90:00

particular cuisine or location or

90:03

whatever it is, we get connected. We see

90:05

each other's stuff and the algorithm

90:08

carefully, temporally sequences things

90:10

so that we come to have a certain

90:12

connection threshold before we find out,

90:15

whoa, you have a totally different

90:16

political opinion than I do on

90:18

subject X. Wow, I didn't know that, but

90:20

I really like Stephen, so I'm going to

90:22

lean in and listen a little bit more. I

90:24

think this is very easy to do and I

90:26

think it can actually be part of the

90:27

selling point of the media company is

90:29

saying, hey, we are here not to enrage you

90:32

but to actually build connection.
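A rough Python sketch of the "connection threshold" sequencing being described here. Everything in it is an assumption invented to illustrate the idea, the threshold value, the affinity bookkeeping, and the list of "divisive" topics; it is not any product's real logic.

CONNECTION_THRESHOLD = 3.0                   # assumed: affinity a pair must build first
DIVISIVE_TOPICS = {"politics", "religion"}   # assumed: topics held back early on

def next_post(pair_affinity, queue):
    # Serve shared-interest posts freely; hold divisive ones back until
    # the pair has built enough connection.
    for post in queue:
        if post["topic"] in DIVISIVE_TOPICS and pair_affinity < CONNECTION_THRESHOLD:
            continue
        queue.remove(post)
        return post
    return None

queue = [{"topic": "politics"}, {"topic": "surfing"}, {"topic": "dogs"}]
affinity = 0.0
while queue:
    post = next_post(affinity, queue)
    if post is None:
        break  # only divisive posts remain and the threshold isn't met
    affinity += 1.5  # assume each served post strengthens the connection
    print(post["topic"], "-> affinity", affinity)  # surfing, dogs, then politics

The design choice is just the ordering: connection first, disagreement second, so the disagreement arrives inside an existing relationship rather than defining it.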

90:35

>> Sounds like how social media started.

90:37

>> Yeah, it's a return.

90:39

>> I think there's probably a neuroscience

90:42

basis as to why we ended up

90:45

>> No, it's an economics basis.

90:47

>> But the fact is there's now an economic

90:49

opportunity now that everyone sees the

90:51

landscape.

90:51

>> What I'm trying to say is that that

90:53

social network wouldn't be that

90:54

retentive by design because it wouldn't

90:56

trigger my dopamine. It wouldn't be a

90:58

slot machine, like TikTok is a slot

91:00

machine. Ping, ping, randomized returns.

91:03

Ping ping ping. Dopamine hit. Ping ping

91:05

ping. So with this other social network that

91:07

wasn't playing with my dopamine in such

91:09

a way, I don't know whether I'd be

91:11

addicted enough to return. Therefore,

91:12

they wouldn't sell their ads, the

91:13

economic return. Therefore, they

91:14

wouldn't do very well.
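The "ping, ping, randomized returns" pattern is a variable-ratio reward schedule, the same one a slot machine runs. Here is a tiny Python sketch of the contrast being drawn, with made-up probabilities: the two feeds can pay out about the same on average while feeling completely different pull to pull.

import random

def fixed_feed():
    # Predictable feed: every refresh yields the same modest payoff.
    return 1.0

def slot_machine_feed():
    # Variable-ratio schedule: usually nothing, occasionally a big hit.
    return 10.0 if random.random() < 0.1 else 0.0

for name, pull in (("fixed", fixed_feed), ("variable", slot_machine_feed)):
    rewards = [pull() for _ in range(100)]
    print(name, "total:", sum(rewards))  # totals are comparable; the pull-to-pull experience is not

It is the unpredictability, not the average payout, that behavioral work associates with compulsive checking, which is why a retention-optimized algorithm tends to converge on it.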

91:16

>> Here's the thing. I don't know if the

91:17

story is that simple that we all want to

91:19

do slot machines all the time.

91:21

>> Exactly. Because the fact is that a lot

91:24

of people go to Las Vegas and do slot

91:25

machines sometimes, but we don't do that

91:28

all the time. It's kind of rare

91:29

actually. What we really desire are

91:31

meaningful connections. We really desire

91:34

feeling like, hey, you know what? I met

91:36

this person online that I'm following

91:37

and he's following me and we really

91:40

connect on all these points and oh by

91:43

the way, I then found out interestingly

91:45

he's got a totally different opinion

91:46

about Iran or abortion or whatever than

91:48

I do, but that's cool. Now we're

91:50

listening to each other. It kind of goes

91:52

back to your point at the

91:53

very start, where we were talking about,

91:54

you know, the brain having an internal

91:56

battle like, do I want the cookie or do

91:58

I want the salad?

91:59

>> And unfortunately in the world we live

92:00

in, you know, the cookie is going

92:02

to give me a dopamine hit.

92:04

>> Yes. But we don't eat cookies all the

92:05

time. This is the point. We do eat

92:07

salads much of the time because we're

92:10

not just unconscious automatons that are

92:12

doing the cookies.

92:13

>> Dr. David Eagleman, thank you so much

92:15

for the work that you do. I'm going to

92:16

link your book below so everyone can

92:18

read this book. You've got a new book on

92:20

the way which I'm very excited about as

92:21

well. What's that book going to be about

92:22

and when is that out?

92:23

>> That's about the Ulysses contract and

92:24

that'll come out in 2027.

92:26

>> June. Okay. For anyone that wants to

92:27

know how to change your life by changing

92:29

your brain, I think this is the perfect

92:31

book to read. He's a New York Times

92:32

bestselling author. And the book

92:36

is absolutely fascinating. It was

92:38

actually learning about this subject

92:39

matter in Livewired that helped me to

92:43

pursue more of a growth mindset and just

92:44

a growth mentality across my life and to

92:46

realize that if I'm not something now,

92:48

it doesn't mean that I can't be

92:49

tomorrow. So, thank you so much for the

92:51

work that you do, David. And it's

92:53

been truly illuminating, and I'm sure

92:55

my neural pathways have expanded in

92:57

really important ways because of this.

92:59

Great. Thank you, Stephen.

93:01

>> YouTube have this new crazy algorithm

93:02

where they know exactly what video you

93:04

would like to watch next based on AI and

93:07

all of your viewing behavior. And the

93:08

algorithm says that this video is the

93:12

perfect video for you. It's different

93:13

for everybody looking right now. Check

93:15

this video out and I bet you might

93:17

love

Interactive Summary

Dr. David Eagleman, a neuroscientist, discusses the concept of brain plasticity and our ability to reshape our minds throughout life. He explains that while our brains peak in connectivity at age two, we continue to develop crystallized intelligence and can build new neural pathways through novelty, challenge, and social interaction. Eagleman addresses the role of AI in our lives, advocating for using it to handle 'vicious friction' (menial tasks) while engaging in 'virtuous friction' (deep learning and challenging oneself) to maintain cognitive health. He emphasizes the importance of social connections, the function of dreaming as a mechanism to defend the visual cortex from takeover, and the potential for a 'motorcycle for the mind' era where AI accelerates human potential.
