Why Every Brain Metaphor in History Has Been Wrong [SPECIAL EDITION]

Transcript

0:00

Let me tell you a little story. In the 1960s, in

0:03

the summer, a little kid named Karl

0:06

was playing around in the back

0:07

of his garden and he noticed all of

0:09

these wood lice crawling around. You

0:11

know, the little crustaceans that

0:13

can curl up into a ball. And what he

0:15

noticed was that depending on whether

0:17

they were in the sun or in the shade,

0:20

they would move faster or slower. They

0:24

behaved differently. And that's it.

0:29

That kid grew up to be Professor Karl

0:31

Friston, one of the most cited

0:33

neuroscientists alive. He's been on this

0:36

channel before, more times than I can

0:37

count. And that childhood observation

0:39

about wood lice, it never left

0:41

him. He spent decades developing what he

0:44

calls the free energy principle, which

0:46

tries to explain all of behavior with

0:49

one equation. Perception,

0:51

action, learning, why you scratch your

0:54

nose, all of it, Friston claims,

0:57

comes down to minimizing a single

0:59

mathematical quantity. There's an old

1:02

physics joke, assume that we can model a

1:05

spherical cow in a vacuum. The joke is

1:09

about how scientists grotesquely

1:10

simplify messy reality to tame it. The

1:14

free energy principle might be the

1:16

ultimate spherical cow. It promises

1:18

to explain self-organization,

1:20

this bewilderingly complicated

1:22

phenomenon with something so emaciated

1:25

we might as well call it

1:26

tautological. Even Friston himself

1:29

agrees with this by the way. This is

1:30

what he said to us last time we spoke

1:32

with him. The free energy principle is

1:34

not meant to be complicated or difficult

1:36

to understand. It's actually you know

1:38

almost logically simple. Um,

1:41

so the whole free energy

1:43

principle is just basically a principle

1:45

of least action pertaining to density

1:47

dynamics. The dynamics or the

1:49

evolution of not densities but

1:52

conditional densities. That's just it.

1:55

This is before thermodynamics. It's

1:57

before quantum mechanics. It's just

2:00

about conditional probability

2:01

distributions.
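For viewers who want to see the quantity being discussed, the variational free energy that the principle says self-organizing systems minimize is standardly written as follows. This is the textbook form of the definition, not something Friston states in this interview:

```latex
% Variational free energy F, for observations o, internal (hidden) states s,
% recognition density q(s), and generative model p(o, s):
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\text{approximation error}}
  \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
% Since the KL term is non-negative, F upper-bounds surprise, -ln p(o).
% Minimizing F improves the conditional density q(s) (perception) and,
% when the system can act on o, reduces surprise (action).
```

This is also why it can be described as a principle of least action over conditional densities: the dynamics are cast as a gradient flow that makes q(s) track p(s | o).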

2:04

>> So what do we do with this? Has

2:06

Friston actually found some deep truth

2:07

about how minds work? Or is he doing

2:10

what many scientists do, which is

2:13

mistaking the simplification for the

2:15

actual thing? Well, it turns out there's

2:17

a philosopher who has spent an

2:19

incredible amount of time thinking about

2:21

this exact problem. Professor Mazviita

2:24

Chirimuuta teaches at Edinburgh

2:26

University. Her book, The Brain

2:29

Abstracted, is basically about what

2:31

happens when neuroscientists simplify

2:33

brains to study them. What gets

2:35

captured? What gets lost?

2:38

>> One of the answers that might seem

2:41

obvious to people is that we pursue

2:43

science because we're curious. We just

2:45

want to know how the world works. We

2:47

want to

2:49

reveal discover the underlying

2:52

principles of the universe which apply

2:54

in all cases. Switching off the idea

2:57

that you're just interested in nature

2:58

for its own sake out of curiosity and

3:01

saying, "Okay, how can we engineer these

3:03

systems to actually do things that we

3:05

want?" Getting them to behave in

3:08

artificial ways. If those

3:10

simplifications sort of allow you to

3:12

achieve your technological goals,

3:14

there's no in principle problem with

3:16

oversimplification. If you're going to

3:18

say, "I'm not just interested in nature

3:20

for its own sake. I just want applied

3:22

science." I should say, by the way, that

3:24

The Brain Abstracted probably influenced

3:26

my thinking more in 2025 than anything

3:28

else. She's an inspirational lady. I

3:31

look up to her very much, and certainly

3:33

thinking back on many of the episodes

3:35

we've done in 2025, I can see her

3:37

influence in the questions I ask and how

3:39

I think about things. So, here's her

3:41

starting point. Scientists have to

3:43

simplify. We're limited creatures trying

3:45

to wrap our heads around systems way

3:48

more complex than we can actually

3:49

comprehend. Our working memory holds

3:51

maybe seven items. Our attention is more

3:54

scattered than a group of toddlers with

3:57

iPads. Um, we die after 80 years if

4:00

we're lucky. So, we build models, right?

4:02

We leave stuff out on purpose. We tell

4:05

ourselves stories about how the world

4:07

works. But the question is, why does any

4:09

of this even work at all? Science is a

4:12

humanistic endeavor,

4:15

right? The purpose of science in the

4:18

universe is to make the universe

4:20

intelligible to us, not to control it,

4:25

not to predict it, and not to exploit

4:28

it. Now, you can do all those wonderful

4:30

things if you like, but in the end, as

4:32

far as I'm concerned, uh science is no

4:34

different from poetry in that we're

4:37

trying to make sense of the world,

4:38

trying to give it meaning, uh in

4:40

relation to our own existence.

4:42

>> If you'll allow the indulgence, I want

4:44

to tell a little story. It's a boxing

4:46

match. In the red corner: Simplicius. He

4:49

thinks science works because the

4:50

universe is actually simple underneath.

4:53

Find an elegant equation and you've hit

4:55

the real thing. Simplicity tells you

4:57

that you're on the right track. And in

4:59

the blue corner: Ignorantio. He thinks

5:02

we simplify because we're too dumb to do

5:05

otherwise. Our models work well enough

5:07

for our purposes, but they're

5:08

approximations, just useful fictions, if

5:11

you like. The map, not the territory.

5:14

Now both of them agree that scientists

5:16

need to simplify but where they disagree

5:18

is what that means about reality.

5:21

Simplicius had history on his side or at

5:23

least a certain type of history.

5:26

Galileo, Newton, Einstein, they all

5:29

believed pretty explicitly that nature

5:31

was fundamentally orderly and that

5:33

finding simple laws meant you'd found

5:35

something true.

5:38

Einstein famously said, "God doesn't

5:40

play dice." And no, he didn't actually

5:42

think God had anything to do with it,

5:44

but he was expressing faith that the

5:46

universe is at the very bottom legible.

5:50

Now, Chirimuuta has gone all in on

5:52

Ignorantio's position. She thinks

5:55

successful science tells us we've become

5:56

good at building useful simplifications,

5:59

and that doesn't prove that nature is

6:01

simple. The philosopher Nicholas of Cusa

6:03

had a phrase for this attitude: docta

6:06

ignorantia. Basically, learned ignorance.

6:10

You study hard, you learn a lot, and

6:12

what you learn includes what you don't

6:14

know. Now, when we interviewed

6:16

Chirimuuta, she had been following

6:17

François Chollet's videos. And for those

6:19

of you who don't know, François is a

6:21

friend of the channel. He's our mascot.

6:22

He's one of my heroes. And um he's got

6:25

this idea called the kaleidoscope

6:26

hypothesis, which is basically that the

6:29

universe is made out of code. And

6:32

underneath all of the apparent gnarly

6:34

mess that we see, there is intrinsic

6:38

underlying structure. Everyone knows

6:40

what a kaleidoscope is, right? It's

6:43

like this cardboard tube with a few

6:46

bits of colored glass in it. These

6:50

few bits of

6:52

original information get mirrored and

6:56

repeated and transformed and they create

6:59

this tremendous richness of complex

7:02

patterns. You know, it's beautiful.

7:04

The kaleidoscope hypothesis is this idea

7:07

that the world in general and any domain

7:11

in particular follows the same structure

7:14

that it appears on the surface to be

7:17

extremely rich and complex and uh

7:21

infinitely novel with every passing

7:23

moment. But in reality it is made from

7:28

the repetition and composition of just a

7:31

few atoms of meaning. A big part of

7:34

intelligence is the process of mining

7:38

your experience of the world to identify

7:41

bits that are repeated

7:44

and to extract these

7:47

unique atoms of meaning. When we

7:49

extract them we call them abstractions.

7:52

Now she's not saying that Chollet is

7:54

wrong. She's saying that he's making a

7:56

philosophical bet. Might be right, might

7:59

be wrong. It's the same bet that Plato

8:02

made. Seeing that, as a philosopher, I

8:04

thought: that's Plato. Because François

8:06

precisely says we have the world of

8:09

appearance. It's complicated. It looks

8:11

intractable. It's messy. But underlying

8:14

that, real reality is neat,

8:18

mathematical, decomposable.

8:20

>> Now I feel like I should defend Chollet

8:21

a little bit here you know because

8:24

obviously we love Chollet. He's not

8:26

making any weird metaphysical claims. At

8:28

least I don't think he is. If scientific

8:30

theories actually explained reality the

8:32

way it is, you would expect fewer

8:35

U-turns. Now, the biggest simplification

8:38

in the 21st century, the final boss of

8:41

simplifications is this idea that the

8:43

mind is a computer or that the mind is

8:46

running a software program. So, we have

8:48

inputs, we have processing, we have an

8:50

output. This metaphor has become so

8:52

established in the collective zeitgeist

8:54

that no one even questions it anymore.

8:56

It barely even registers in our brains

8:58

as a metaphor. So isn't it

9:01

a little bit weird that

9:03

computation is this abstract formalism

9:06

like, you know, an automaton that makes

9:08

these state transitions something

9:10

completely non-physical and we're

9:12

describing the mind as if it is that

9:15

abstract thing that sounds a little bit

9:18

weird. There are many movies made about

9:20

this where people talk about uploading their

9:22

minds into the Matrix. Neuralink talks

9:24

about interfacing with your brain's

9:26

software. Joscha Bach thinks that

9:28

consciousness is a software program

9:29

running on your brain.

9:31

>> That this is the universal, that you have

9:32

these invariances in nature that you can

9:34

have patterns that have causal power

9:37

that have the ability to reproduce

9:39

themselves, that have the ability

9:40

to shape reality, that are invariances

9:44

that you cannot explain more

9:46

simply by looking at what atoms are

9:48

doing in space. But you have to look at

9:50

these abstract patterns to make sense of

9:53

them. Every other explanation is

9:54

going to be more complicated in the same

9:56

way as money is going to be impossibly

9:58

complicated if you try to reduce it to

10:00

atoms. So you have to look at these

10:02

causal invariances and spirits are

10:04

actually such causal invariances. They

10:06

are actually disembodied, right? They

10:07

they're not bodies. They're not stuff in

10:10

space. They're not mechanisms in the

10:12

same way, but they are causal mechanisms,

10:14

abstract mechanisms. And so we put the

10:16

spirit back into nature using the

10:18

concept of software. A lot of people

10:20

think that's metaphorical, but I don't

10:22

think it's metaphorical at all. It's the

10:24

literal truth. Software is spirit. We're

10:27

all just talking about this stuff

10:29

without even batting an eyelid. Like,

10:31

where's the skepticism, man? It just

10:34

sounds so plausible to us. So, we assume

10:37

that it just kind of has to be the case.

10:39

There is something super interesting

10:41

about computers. What a computer

10:43

ultimately is, is a causal

10:45

insulator.
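Bach's "causal insulator" point is easy to demonstrate in code: a simulated world evolves purely as a function of its own state, so its trajectory is identical on any host, and nothing inside that state can reveal the host. A minimal sketch, where the `TinyWorld` class, its update rule, and the host labels are all hypothetical and purely illustrative:

```python
class TinyWorld:
    """A toy simulated world: its next state depends only on its
    current state, never on properties of the machine running it."""

    def __init__(self, seed: int):
        self.state = seed

    def step(self) -> int:
        # A fixed deterministic update rule (a small linear congruential
        # generator); the host's CPU, OS, or "casing color" never enters
        # the computation.
        self.state = (self.state * 1103515245 + 12345) % (2**31)
        return self.state


def run_on_host(host_name: str, seed: int, steps: int) -> list[int]:
    # `host_name` stands in for "Mac" vs. "PC"; it is deliberately unused
    # by the simulation, which is the whole point of causal insulation.
    world = TinyWorld(seed)
    return [world.step() for _ in range(steps)]


trajectory_mac = run_on_host("Mac", seed=42, steps=5)
trajectory_pc = run_on_host("PC", seed=42, steps=5)
assert trajectory_mac == trajectory_pc  # same world, regardless of host
```

The inhabitants of `TinyWorld`, if it had any, could observe every state transition and still learn nothing about `host_name`: that is the sense in which the computer insulates the simulated world from ours.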

10:46

The computer is a layer on which you can

10:50

produce an arbitrary reality. For

10:52

instance, the world of Minecraft. You

10:54

can walk around in the world of

10:55

Minecraft and it's running very well on

10:58

a Mac and it's running very well on a

10:59

PC. And if you are inside of the world,

11:01

you don't know what you're running on,

11:03

right? It's not going to have any

11:05

information about the nature of the CPU

11:07

that it's on, the color of the casing of

11:10

the computer, the voltage that the

11:12

computer is running on, the place that

11:14

the computer is standing in in the

11:15

parent universe, right? Our universe. So

11:18

the computer is insulating this world

11:21

of Minecraft from our world. It makes it

11:24

possible that an arbitrary world is

11:27

happening inside of this box. And our

11:30

brain is also such a causal insulator.

11:32

It's possible for us to have thoughts

11:34

that are independent of what

11:35

happens around us. Right? We can

11:37

envision a future that is not much

11:39

tainted by the present. We can remember

11:42

a past that is independent from the

11:44

present in which we are. And that's

11:46

necessary for us. Our brain has evolved

11:49

as such a causal insulator as well to

11:52

allow us to give rise to universes that

11:55

are different from this one. For

11:57

instance, future worlds so we can plan

11:58

for being in them. Bach says that money is

12:01

an example of a causal pattern. It's not

12:04

the ink on a bank note. It's not

12:06

the electrons in your bank server. It

12:08

persists across various

12:12

physical instantiations. So

12:14

paper, coins, gold, digital ledgers. And

12:17

yet, he says, money causally affects the

12:20

world. It gets you fed. It starts wars.

12:23

It builds cities. He says that software

12:26

is the same. A program is an abstract

12:29

pattern that can run on many types of

12:32

chips, maybe even neurons. And that

12:35

pattern has causal power because it

12:38

controls whatever substrate it's running

12:40

on. The same algorithm produces

12:42

the same effects regardless of what

12:45

physical stuff implements it. So the

12:48

invariance, that sameness across

12:51

substrates, is the causal

12:53

mechanism, the pattern itself, at least

12:56

according to Joscha. He even accepts that

12:58

physics is causally closed. He says that

13:01

the abstract description and the

13:02

physical description are two ways of

13:04

looking at the same causal structure.

13:06

Neither is reducible to the other. Both

13:09

are real. But I'm pretty sure Chiramuta

13:11

would ask who identifies that invariance

13:14

when we say the same algorithm runs on

13:17

different chips. Completely different

13:19

things are actually physically

13:21

happening, right? Different voltages,

13:23

different electrons doing different

13:25

things. The sameness is something that

13:27

we impose. It exists in our description,

13:31

not in nature. And as for the money

13:34

example, money only works because of

13:36

human interpretive practices. Right? If

13:38

you take away the humans and their

13:40

agreements, it's just paper, right?

13:42

Money is just paper and the causal power

13:45

is actually in the social substrate that

13:48

participates in it. Now, I think Joscha

13:50

has taken a useful way of talking about

13:51

complex systems and promoted it to

13:54

metaphysics. And that's Simplicius all

13:56

over again, right? Mistaking the

13:58

elegance of our descriptions for the

14:00

structure of reality itself. I mean,

14:03

maybe information really is more

14:04

fundamental than matter, but that's

14:06

another philosophical wager. And we've

14:08

made these bets many, many times before.

14:11

Just look at the history of all of this.

14:13

So, Descartes thought that the nervous

14:15

system worked like the hydraulic

14:16

automata in the French royal gardens. Fluids

14:20

pumping through tubes, pushing

14:21

levers. That was the high-tech metaphor

14:24

of his day. Later, when scientists

14:26

figured out that nerves carry electrical

14:28

signals, the brain became a telegraph

14:30

network. Then it was a telephone

14:32

switchboard, signals traveling down

14:34

wires, operators routing calls. And now

14:37

in our era, the brain is a computer. To

14:40

be precise about what we mean by

14:42

physical: everything has to be

14:43

physical because even GitHub, you know,

14:46

has to store its data in some sort of

14:49

hard drive or magnetic field or whatever

14:51

technology, but it's not storing it in

14:53

nothingness, you know. So, knowledge,

14:55

information, always has this form of

14:58

physical embodiment.

14:59

I think we tend to think about it as

15:02

non-physical because it is a thing that

15:04

is not a thing which is the same as

15:07

temperature. You wake up, you look at

15:09

your phone and you see the temperature

15:11

and you decide how you're going to dress

15:12

and nobody has any doubt that

15:14

temperature is something that can be

15:15

measured. But it took about like 2,000

15:17

years for us, you know, as a species to

15:19

figure out, you know, what

15:20

temperature was and the fact that it

15:22

could be measured. And there were two

15:24

fundamental difficulties that I would

15:25

say made it difficult for us to

15:27

understand, you know, temperature. The

15:30

first one is that people thought

15:33

that hot and cold were two separate

15:35

things. Okay. So that temperature was

15:38

like a mixture of the two. It's like

15:39

when you make green out of blue and

15:41

yellow. Okay. And it took a while for

15:44

people to understand that cold was the

15:46

absence of heat and not that cold and

15:48

heat were two different quantities that

15:50

were tempered together. They were mixed.

15:52

So "temperature" actually means

15:54

mixture, not, you know, what we

15:56

now mean by temperature. The

15:59

other thing that was very difficult to

16:00

understand is that people thought that

16:02

temperature was a thing was some sort of

16:04

fluid that grabbed onto things. So let's

16:06

say if you had a steel rod that is

16:09

hot, does that steel rod kind of have

16:11

this sort of invisible fluid that is

16:13

heat and they had good reasons to

16:15

believe that it was an invisible fluid

16:17

because it could flow. Let's say you

16:18

could connect that rod to something that

16:20

was cold and that cold thing was going

16:21

to warm up because that fluid was going

16:23

to be flowing in that direction and so

16:25

forth. So they thought that it had a

16:26

physicality as a thing. A brilliant

16:29

Englishman, Joule, basically figures out

16:32

that that is not the case that you know

16:35

temperature is not a thing. And the way

16:37

that they do it is through this

16:39

observation in which I don't know if you

16:40

know how cannons used to be built, you

16:42

know. So if you just grab a piece of

16:45

sheet metal and you make it into a

16:47

cylinder and you try to make a cannon

16:48

out of that, the exact moment that you

16:50

shoot the cannon, it's going

16:52

to open up like a flower in a cartoon,

16:53

you know, like a

16:55

Looney Tunes type of situation. So what

16:57

they would do is they would make these

16:59

solid, you know, cylinders of metal and

17:02

they would bore a hole in it you know to

17:05

create the cannons and boring those

17:07

holes released an enormous amount of

17:08

heat. So Joule thought, well, how come all of

17:11

that heat is there? It's like an infinite

17:14

amount of heat. If I continue to bore a

17:16

hole in a piece of metal for an infinite

17:18

amount of time, then it cannot be

17:20

a thing. And that, you know, leads

17:23

him to realize that temperature is

17:26

actually something that has to live in

17:28

things but it's not a thing itself. It's

17:30

related to the kinetic energy of the

17:32

particles in the thing but it's not a

17:34

thing itself. It doesn't have its own

17:36

particle. There isn't kind of

17:37

like a temperature particle. Temperature

17:38

is kind of like a property that matter

17:40

has and that holds on to things.

17:42

Knowledge is similar, you know, in that

17:44

it holds on to you and to me, you know,

17:46

and to the collective, to exist,

17:48

but it doesn't have kind of like a

17:49

physicality in itself, but it always

17:52

exists in some sort of physical medium

17:54

or substrate. So, in that sense, it's

17:57

always going to be physical. No matter

17:58

how virtual it gets, it has maybe a

18:01

different type of physicality. But even

18:02

electromagnetic waves that are

18:04

transmitting, you know, data from your

18:07

Wi-Fi router to your laptop are

18:08

technically a physical embodiment. Now,

18:10

I spoke with Professor Luciano Floridi a

18:12

few years ago, and it was actually one

18:14

of my favorite ever episodes of MLST. I

18:16

think very highly of him, which is why

18:19

we're going to show some clips of him

18:20

in this show because it's very apropos.

18:22

But this is what he had to say about it.

18:24

Ontology, on the other hand, is how we

18:27

structure the world in the

18:29

sense that we think that that's the way

18:31

it is. With the kind of eyes we have and

18:34

the kind of light around the world, that

18:35

those are the colors we perceive. But

18:37

certainly a world full of colors is

18:40

the world which I take it to be the

18:42

world. That's my ontology.

18:43

Re-ontologizing means changing some of

18:47

that particular nature. Allow me a

18:50

distinction. So I hope it's not too

18:52

confusing. Reality in itself, call it

18:54

system; description of reality as we

18:57

perceive it, enjoy it, conceptualize it,

19:00

live through, call it model of the system.

19:02

Ontology to me is the ontology of the

19:04

model; it is not the metaphysics of the

19:06

system. I hope I haven't made a

19:08

complete mess here. Okay. So metaphysics

19:11

noumenal system, whatever the source of the

19:14

data that we get. Fantastic. The data

19:17

don't speak about the source; the music

19:19

of the radio is not about the radio but

19:21

there is a radio, of course. The music is

19:24

what we perceive; the music has its own

19:26

ontology, structure, etc. The

19:28

model is at that point what we enjoy. The

19:32

digital revolution has changed

19:34

the nature of the world around us,

19:37

not metaphysically but ontologically. So,

19:39

re-ontologizing, because some of the

19:41

things that we have inherited from

19:43

modernity a sense of the world that is

19:47

now being restructured and a certain

19:49

understanding of the world. So

19:52

re-epistemologizing as well of that world.

19:55

We go back to this temptation of talking

19:58

about reality as if it were something

20:00

that we need to grasp, catch, portray,

20:05

hook, spear. When in fact

20:10

the way I prefer to understand it

20:13

is as

20:15

malleable understandable in a variety of

20:18

ways, something that provides

20:21

constraints. It doesn't mean that you

20:23

can interpret in any possible way but

20:26

leaves room for different kind of

20:29

interpretations. So if the flow of data

20:31

that come from whatever is out there and

20:34

again I rather be sort of agnostic about

20:37

it can be modeled in a variety of ways.

20:40

Um, one way, especially in the 21st century,

20:44

given the technology we have, etc., is to

20:46

interpret that as an enormous

20:48

computational kind of environment.

20:50

It's perfectly fine as long as we don't

20:52

think that there is a right metaphysics,

20:56

that it is the correct ontology for the 21st

20:58

century. Now this is not relativism

21:01

because on the other hand different

21:04

models of the same system are comparable

21:06

depending on why you're developing that

21:09

particular model. And let me give you a

21:11

completely trivial example. Suppose you

21:13

ask me whether that building is the same

21:15

building.

21:17

That question has no real answer because

21:20

it depends on why you're asking that

21:22

question. If your question is asked

21:24

because you want to have directions I'm

21:26

going to say oh yeah that's the same

21:27

building. So the same building. Yeah.

21:29

Absolutely not. Go there, turn left. No

21:31

traffic lights. But if your question is

21:33

like, same function, then no, it's a

21:35

completely different building. It was a

21:36

school, now it's a hospital.

21:38

Next question. So is it or is it not the

21:41

same? That question is the mistake.

21:46

It's an absolute question that provides no

21:50

interface, what computer scientists call a

21:51

level of abstraction, chosen for one

21:54

particular purpose so that I can compare

21:57

whether an answer is better than another.

21:59

Let me crack a joke for the philosophers

22:01

who might be listening. This

22:04

is it the same or is it not the same? Who

22:05

is asking, and why? Because if it is the tax

22:09

man, the tax man,

22:11

you're doomed, man. I mean, there is no way

22:13

you can play any "oh, I changed every plank";

22:16

you're going to pay the tax. It's

22:17

the same ship. I don't care. But if it

22:19

is a collector, that ship is worth zero.

22:23

You change all the planks, you must be

22:24

joking. It's worthless. So, is it or is

22:27

it not the same? It depends on why you're

22:30

asking that particular question. Tell me

22:32

why and I can give you the answer. No

22:34

"why"? In other words, no frame within

22:36

which we have chosen the interface that

22:39

provides the model of the system. No

22:42

potential answer. So the question is

22:44

like, is the universe a

22:47

gigantic computer? Yes or no? Meaningless.

22:51

Is it worth modeling the universe as a

22:54

gigantic computer for the purpose of making sense

22:57

of our digital life? Oh yes, definitely.

23:00

Because we are informational

23:01

organisms. Aha, so metaphysics? No, I

23:04

meant: in the 21st century, the

23:07

best way of understanding human beings

23:08

today is as information organisms. Last

23:11

century we thought of ourselves biologically; that

23:14

made much more sense. A lot of water and

23:16

a sprinkle a little bit of extra and so

23:18

on. Mechanism, time, etc. Not absolute

23:22

answers not relativistic answers but

23:26

relational answers. The relation between

23:28

the question, the purpose, and the actual

23:30

answer. But it takes three, not

23:33

two. So the computational model isn't

23:35

literally true, but it's useful.

23:37

The mistake is forgetting that it's a

23:40

model. So the early cybernetics guys,

23:42

McCulloch and Pitts, they knew that they

23:45

were working with analogies. McCulloch

23:47

and Pitts wrote their famous paper showing

23:49

that neurons could theoretically work

23:51

like logic gates. Now they weren't

23:53

claiming neurons actually were logic

23:55

gates, but they were using it as a kind

23:57

of functional description. But somewhere

24:00

along the way, the metaphor hardened. A

24:02

lot of neuroscientists today don't say

24:04

that the brain is like a computer. They

24:06

say it is one and the metaphor became

24:08

the thing itself. Now Chirimuuta,

24:11

borrowing from Whitehead by the way, she

24:13

said that this is the fallacy of

24:16

misplaced concreteness. This is another

24:18

one of those leaky abstractions I was

24:20

talking about. By the way there's a

24:21

great book called The Brain

24:22

Abstracted by Mazviita Chirimuuta. I

24:24

interviewed her recently and she said

24:26

that one of the most pervasive myths in

24:27

neuroscience is that we use these leaky

24:29

abstractions and idealizations to talk

24:31

about cognition and usually it's using

24:34

the most recent technology at the time.

24:36

So you know a few hundred years ago we

24:37

were describing the brain in terms of

24:39

pulley

24:39

>> Pulleys and levers. Yes,

24:41

>> that's right. And and you know and then

24:42

it was um you know as a prediction

24:44

machine as a computer and all this kind

24:46

of stuff.

24:47

>> At the end of the day, this is an example:

24:49

these are grounded things that we

24:50

understand. They're really good models

24:51

because we can both talk about

24:52

computers. We both know what computers

24:54

are, but the brain doesn't work like

24:55

that in any sense. Jeff Beck put it even

24:58

more bluntly when we spoke. It will

25:00

always be the case that our explanation

25:02

for how the brain works will be by

25:05

analogy to the most sophisticated

25:06

technology that we have.

25:09

How's that for a non-answer?

25:11

Right. So, you know, a couple

25:15

thousand years ago, right? How did the

25:16

brain work? It was like levers and

25:17

pulleys, man. I mean, duh. Don't be

25:19

ridiculous. Why? That was the technology, you know.

25:21

At some point in the Middle Ages, it

25:23

became humors, right? Because fluid

25:25

dynamics was, you know, the

25:27

kind of technology

25:29

that was the most advanced, or the

25:31

technology that took advantage of

25:33

water power was the most advanced

25:35

technology that we had. Now, the most

25:37

advanced technology is computers. So,

25:39

duh, that's exactly how the brain works.

25:41

>> Now, here's something that kind of bugs

25:42

me, right? You go into any AI conference

25:44

or you drink from the well of San

25:46

Francisco by spending too much time on

25:48

Twitter and you develop this mindset

25:50

that AGI is inevitable. You start

25:52

feeling the AGI and you'd be forgiven

25:54

for thinking this because I've been

25:55

using Claude Code, and my god, I feel that

26:00

there's been more interesting stuff

26:02

happening in the world of software

26:03

development in the last 6 months than

26:05

there has been in the previous

26:06

20 years. This technology is

26:08

genuinely amazing, but it is automation

26:11

technology. It's not really

26:13

intelligence, which means it's only

26:15

really as good as your ability

26:17

to specify and supervise and delegate to

26:20

the system. But it is absolutely

26:22

amazing. But why do we have this view?

26:24

It's not an argument that AI is

26:27

impossible so much as why does it seem

26:31

so possible, so inevitable, to people? And

26:34

what I'm arguing is

26:36

that if you look at the history of the

26:39

development of the life sciences of

26:41

psychology there are certain shifts

26:44

towards a much more mechanistic

26:46

understanding of both what life is and

26:48

what the mind is which are very

26:50

congenial to thinking that whatever is

26:52

going on in animals like us in terms of

26:56

the processes which lead to cognition.

26:59

They're just mechanisms anyway. So why

27:01

couldn't you put them into an actual

27:03

machine and have that [music] actual

27:04

machine do what we do? So with

27:07

all of that mechanistic history in the

27:09

background, AI could seem very

27:12

inevitable. But if that [music]

27:14

mechanistic hypothesis is actually

27:17

wrong, then these claims for the

27:20

inevitability of a biological-like AI

27:24

would not actually be well-founded. But we

27:27

could be subject to a kind of cultural

27:30

historical illusion that this is just

27:32

going to happen.

27:34

>> Cultural historical illusion. I've been

27:35

thinking about that phrase. Um, maybe our

27:38

confidence says more about what we've

27:40

inherited intellectually than about how

27:43

minds actually work. Now another thing

27:45

that, um, Mazviita Chirimuuta has inspired me to think

27:47

about a lot is the difference between

27:49

prediction and understanding. Indeed

27:51

when I interviewed the Nobel Prize

27:52

winner John Jumper at Google Deep Mind a

27:54

couple of months ago um this was the

27:56

question I asked and he had quite an

27:58

interesting way of distinguishing those

27:59

two things. It's almost like it's at any

28:02

point learning how to refine and

28:04

optimize the structure.

28:06

>> Okay. So I think we should distinguish

28:08

three things. Predict, control,

28:10

understand first.

28:11

>> So predict means that you say I'm going

28:15

to do a thing. What am I going to what

28:17

will be this value of my machine? What

28:19

will appear on my computer screen in the

28:20

future? That is predict. Control is I

28:24

want to measure this thing in the future

28:26

and I want it to come out 17. Right?

28:28

That's control. Understand is a lot like

28:31

predict except there's a human in the

28:33

loop. understand means that I have such

28:35

a small collection of facts that you

28:38

will predict and you will do it with

28:41

facts that I can communicate to another

28:42

human

28:44

um, in this kind of compact form that fits on

28:47

an index card. That's almost understand,

28:50

and so I think these machines let us

28:52

predict they let us control

28:57

we have to derive our own understanding

28:59

at this moment, right? We can experiment

29:01

now on the artifact. We can look at the

29:03

200 million predicted structures, not

29:05

just the 200,000 experimental structures

29:09

in order to help us understand, but it

29:10

doesn't do the act of understanding for

29:12

us. It does the act of predict and maybe

29:14

control.

29:14

>> The problem is these two goals actually

29:17

pull against each other. I think we're

29:19

at this moment in science now because we

29:23

have these tools like LLMs for language

29:26

and, um, convnets in visual neuroscience

29:30

are being used um as predictive models

29:33

of neuronal responses which don't have

29:36

that mathematical legibility that

29:38

originally, when I was trained in the

29:40

field, people aspired to have. And so

29:43

you have this, um, possible conflict: you

29:47

can either

29:48

pursue that goal of understanding or you

29:51

can pursue the goal of prediction. But

29:53

it seems like you can't have both at the

29:55

same time.

29:55

>> Now, on the one hand, people go into

29:57

neuroscience because they want to

29:58

understand the mind. They want that

30:00

feeling where something clicks and you

30:02

suddenly get how it works. That's what

30:04

drew Chirimuuta to the field in the first

30:06

place. That's what keeps people up late

30:08

at night reading papers. But on the

30:10

other hand, there's just prediction,

30:12

building tools that work. If your model

30:14

forecasts data accurately, maybe you

30:16

don't care whether it's true in some

30:18

deeper sense. So, LLMs are getting

30:21

unreasonably good. They are winning math

30:23

Olympiads. They are, I mean, as of last

30:25

week, actually, GPT 5.2 apparently, um,

30:28

discovered a new theorem. Well, it's

30:30

solved one of these problems that

30:32

Terence Tao had on his website. This

30:35

is insane, but does it actually

30:37

understand anything? And does it matter

30:38

if it does or doesn't as long as it

30:40

works? Chomsky had an amazing commentary

30:43

on this a few years ago when we spoke

30:44

and I think it's still as relevant today

30:46

as it was then.

30:48

>> Suppose that I submitted an article to a

30:50

physics journal saying, "I've got a

30:52

fantastic new theory that accommodates

30:55

all the laws of nature, the ones that

30:57

are known, the ones that have yet to

30:59

be discovered. And it's such an

31:01

elegant theory that I can say it in two

31:03

words. Anything goes." Okay, that

31:07

includes all the laws of nature. The

31:09

ones we know, the ones we do not know

31:12

yet, everything. What's the problem? The

31:15

[clears throat] problem is they're not

31:16

going to accept the paper. Because when

31:18

you have a theory, there are two kinds

31:20

of questions you have to ask. Why are

31:23

things this way? Why are things not that

31:25

way? If you don't get the second

31:27

question, you've done nothing. GPT-3 has

31:31

done nothing.

31:33

>> Classic Chomsky. So maybe theories are

31:36

overrated, maybe prediction is enough.

31:38

But Chirimuuta worries about that

31:41

trade-off, right? When you give up on

31:42

understanding, you don't know when your

31:43

tools will break. You're stuck with

31:45

black boxes. They work until they don't

31:48

and you won't see it coming when they

31:49

don't. I spoke with philosopher Anna

31:52

Ciaunica about this recently and she had a

31:54

beautiful way of describing it.

31:56

>> Suppose [music] you want to climb a

31:57

mountain and you arrive on the top of

32:00

the mountain. What's the argument to say

32:02

that actually it's only when you're on

32:04

the top [music] of the mountain that

32:05

that's what climbing the mountain

32:07

is? I mean, you cannot really arrive on

32:10

the top of the mountain if you don't do

32:11

the first step. Every single step

32:13

matters. First step [music] is as

32:15

important as the last one. Actually we

32:18

are more conscious when we take the

32:20

first [music] steps in climbing the

32:21

mountains than when we are on the top of

32:24

the mountains and we have all this like

32:25

full-blown capacities [music] and

32:27

sometimes we shoot ourselves in the leg.

32:28

And of course, I brought this up when I

32:30

debated Mike Israetel. And the biggest

32:32

misconception in all of AI, what all of

32:35

the folks in San Francisco believe in is

32:37

this philosophical idea called

32:39

functionalism. That we're walking up the

32:41

mountain and when we get to the top of

32:43

the mountain, we have all of these

32:44

abstract capabilities like being able to

32:46

reason, play chess, but that disregards

32:50

that the path that you took walking up

32:52

the mountain is very important. and not

32:54

only the path, the physical

32:56

instantiation, the stuff that the

32:58

mountain is made out of. So Mike's view

33:00

is that if something produces

33:01

intelligent outputs, why does the

33:03

substrate matter? Silicon or neurons, it

33:05

doesn't make any difference. It's all

33:07

information processing. Needless to say,

33:09

he pushed back hard. You can climb

33:11

mountains. You can touch stuff. But you

33:13

never truly have an embodied experience of anything

33:15

if you push on that philosophical button

33:17

hard enough because you can always

33:18

abstract out to like these are just

33:20

neural network pings from groups of

33:22

neurons. And so you don't truly deeply

33:25

know anything in some kind of weird

33:27

philosophical way because it's just

33:29

neural network calculus all the way

33:31

down. You know, you climb the mountain,

33:33

that's cool. A helicopter can climb the

33:36

mountain much better than you. It does not

33:38

have the ability to reason and

33:39

abstractly and plan and predict things

33:41

at all.

33:42

>> So, it's possible that what you can do

33:45

or how you can function isn't the whole

33:48

story. Or maybe if that's wrong, we

33:50

should just start using helicopters. So,

33:52

[snorts] Individual minds are limited.

33:54

But what about collective minds? What

33:55

about humanity as a whole? We've built

33:58

this incredible thing over centuries,

33:59

right? Libraries, universities,

34:02

Wikipedia, an expanding [music] store of

34:04

knowledge that no single person could

34:05

ever hold. Doesn't that escape our

34:08

individual limitations? So there's this

34:10

dream of universal knowledge accessible

34:13

anywhere perspective free. There is a

34:16

tacit and implicit idea there that

34:18

knowledge is something that someone

34:20

can have, while my view is that knowledge

34:22

is a much more collective phenomenon.

34:24

Okay. So and it's not something also

34:26

that you can put in something like a

34:28

book. In my opinion the book doesn't

34:31

have knowledge. The book is an archival

34:34

record of some ideas that I was able,

34:37

you know, to put together in a nice

34:39

structure. But you cannot have a

34:40

conversation with the book. Knowledge

34:43

can only go to work when it's embodied.

34:45

You cannot throw like, you know, a bunch

34:46

of engineering manuals and cement into a

34:49

gorge and expect to get a bridge because

34:50

the books don't have knowledge. Teams

34:52

have knowledge. Organizations have

34:53

knowledge. Yes, knowledge is social.

34:55

Communities accomplish what individuals

34:57

can't. But collective knowledge is still

35:00

knowledge from somewhere. This matters,

35:03

right? It's shaped by particular

35:04

questions, particular tools, and

35:07

particular blind spots.

35:08

>> I think one of the interesting things

35:11

about this phenomenon, not only of LLMs,

35:14

but the internet as this idea that it's

35:17

the repository of all human knowledge is

35:19

that it goes along with this idea almost

35:22

that knowledge doesn't have to be

35:24

perspectival. It doesn't have to be like

35:26

of a place of a community. It kind of

35:29

can float free of the situation in which

35:32

this knowledge was acquired. That's kind

35:34

of the aspiration of these ideas, sort

35:37

of a universal repository of knowledge.

35:41

But what this perspectivalist position

35:43

actually sort of points us to is

35:45

actually knowledge is inherently

35:49

of a place um of a community. We acquire

35:54

knowledge not by being like completely

35:59

open-minded to everything that's

36:00

possible to know, but actually by sort

36:03

of narrowing our view, discounting

36:05

possibilities actually is what allows

36:08

you to pursue a line of inquiry and

36:11

actually pin down um some information

36:15

about say the natural world which is

36:16

humanly achievable. So the contrast I'm

36:19

trying to make here is between a view

36:21

which says that knowledge is

36:24

perspectival. It's inherently from a

36:26

human point of view which means that

36:29

it's inherently finite. We cannot aspire

36:32

to this sort of universal free floating

36:34

knowledge because as finite human beings

36:37

we can only achieve knowledge of the

36:39

world through recognizing our

36:41

limitations. And this notion of like you

36:44

can have non-perspectival knowledge, like

36:46

everything in the internet based on like

36:50

all of the different possible

36:51

perspectives all blended together that

36:53

this somehow gives us a god's eye view.

36:55

LLMs aspire to be this like every person

36:59

voice, but it's precisely because they

37:02

don't have a particular socialization

37:06

into a finite community that they're not

37:09

reliable that we can't pin them down to

37:13

actually um what would be a sort of

37:16

honest trustworthy perspective.

37:19

>> So Chirimuuta has this idea that she

37:21

[music] calls haptic realism. Most of

37:23

the philosophy of science treats

37:24

knowledge like vision. You stand back

37:27

and you observe reality from a distance.

37:29

She thinks it's more like touch.

37:32

>> We just look around. We absorb how

37:34

things are. Our knowledge is sort of

37:36

entirely objective. It's almost like a

37:38

god's eye view on reality. But if you

37:40

think that scientific knowledge in

37:42

particular is more kind of touchlike,

37:44

you can't ignore the fact that we um

37:48

sort of run into things. We have to pick

37:50

things up, engage with them, ultimately

37:53

change them in order for us to acquire

37:55

knowledge of them. So, you cannot

37:57

discount the fact that we're kind of

37:59

meddling with things in the process of

38:02

um bringing about our our knowledge.

38:05

>> Neuroscientists are more than passive

38:06

observers of brains. They poke them,

38:08

they prod them, they stimulate them,

38:10

they model them, and in doing that, they

38:13

change what they find. The patterns that

38:15

emerge are real, but they're also

38:18

partially created by the process of

38:20

investigating itself. It takes all the

38:23

messiness of biological cognition and it

38:25

reduces it to one imperative. Minimize

38:28

free energy. Everything else supposedly

38:30

follows from that. Now, Simplicius loves

38:32

this. I mean, finally, the simple truth,

38:34

the one principle to explain it all. But

38:37

Ignorantio says, "Wait a minute. The

38:41

math is elegant. The framework is

38:43

unified, but does that mean it's

38:45

captured what brains actually are? Or

38:49

did we just build another beautiful

38:50

simplification and forget

38:52

that it was a simplification?" So,

38:55

Chirimuuta said to me that we should ask

38:57

different questions, right? Not is this

38:59

true, but what does this help us do?

39:03

What does this light up? What does it

39:05

leave in the darkness? And the other

39:07

thing, of course, is that we are finite

39:08

biological creatures, right? There

39:10

are limits to our cognition and Chomsky

39:13

spoke about this fascinating concept of

39:15

a cognitive horizon when we when we

39:17

chatted with him.

39:18

>> If we are organic creatures, we're going

39:20

to be like other organic creatures and

39:22

that there are bounds to our cognitive

39:24

capacities. So, for example, a rat can

39:27

be trained to run pretty complicated

39:29

mazes, but it can't be trained to learn

39:31

a prime number maze. Turn right at every

39:34

prime number. It just doesn't have the

39:36

concept. And no matter how much training

39:38

you do, you're not going to get

39:40

anywhere. Well, I suspect there's

39:43

reasons to suppose we're like rats. We

39:45

have capacities. We have a nature. We

39:48

have a structure. They yield all sorts

39:50

of extensive range of things that we can

39:53

do, but they probably impose limits. And

39:56

I think we could even make some guess

39:57

about what these limits are.

39:59

>> So our best theories, they bump up

40:01

against the walls of the limits of our

40:03

cognition, of our cognitive horizon. And

40:05

maybe that's fine. But maybe even

40:07

knowledge of where the walls are is

40:09

useful in and of itself. Science makes

40:11

things simple and it's not a flaw,

40:14

right? Without simplification, we'd have

40:16

nothing. You can't study everything at

40:18

once. But simplification has risks,

40:21

right? You forget your model is a model.

40:22

You mistake elegance for truth. And you

40:24

think you found solid ground when really

40:27

you're just building another floor. So

40:29

look at Opus 4.5, right? Foundation

40:31

models today. They are artifacts of

40:33

staggering complexity. We've trained

40:34

them on everything humans have ever

40:36

written. We treat their outputs like

40:38

they came from somewhere authoritative,

40:40

somewhere outside of us, somewhere that

40:43

knows, but the knowing was ours all

40:45

along, right? Just compressed,

40:47

refracted, reflected back to us from the

40:50

silicon. Whether that reflection

40:52

captures the actual thing, that is a

40:54

question that we're barely starting to

40:56

ask. You can use powerful frameworks

40:58

like the free energy principle, but just

41:00

remember, they're frameworks, right?

41:01

They're tools for building. They're not

41:02

the final word. So the brain is not a

41:05

hydraulic pump. It's not a computer.

41:07

It's not a telephone network. It's

41:09

probably not a free energy minimizer

41:11

either. I mean, at least not in some

41:12

like literal way. What the brain

41:14

actually is, we will only ever catch

41:17

glimpses of, right? That is through our

41:18

limited instruments and theories. And

41:21

that's okay because that's what it means

41:22

to be finite. So Chirimuuta had this

41:25

amazing example from Greek mythology, uh,

41:27

called Proteus, right? And if you could

41:29

pin him down, he'd have to answer your

41:31

question correctly. But if you let go

41:34

and you let him get away, then he would

41:35

shapeshift and shapeshift. Nature is

41:38

like that, right? You can pin it down,

41:40

you can ask questions, but it's always

41:42

perspectival. As soon as you let go,

41:44

there's always a myriad of other

41:45

perspectives that can be interpreted

41:48

from reality. Carl Friston's woodlice,

41:50

they were doing something very similar,

41:52

right? So slow down in the sun, move

41:54

faster in the shade. But Friston isn't a

41:57

woodlouse and neither are you.

Interactive Summary

The video explores the nature of scientific understanding, contrasting the idea of finding fundamental, simple truths about reality (Simplicius) with the idea that our scientific models are useful simplifications due to our cognitive limitations (Ignorantio). It delves into various scientific metaphors, such as the brain as a computer, and critiques the tendency to mistake these models for reality itself. The discussion highlights the importance of acknowledging the limitations of human cognition and the perspectival nature of knowledge, emphasizing that scientific progress involves making useful simplifications rather than necessarily uncovering absolute truths. The concept of "haptic realism" is introduced, suggesting that knowledge acquisition is an active engagement with the world, not just passive observation. Ultimately, the video argues that while models and predictions are valuable tools, they should not be mistaken for reality, and true understanding requires recognizing the inherent limitations of our finite perspective.
