Why Every Brain Metaphor in History Has Been Wrong [SPECIAL EDITION]
Let me tell you a little story. One summer in the 1960s, a little kid named Karl was playing around in the back of his garden when he noticed all of these wood lice crawling around. You know, the little creatures that can curl up into a ball. And what he noticed was that depending on whether they were in the sun or in the shade, they would move faster or slower. They behaved differently. And that's it. Karl grew up to be Professor Karl Friston, one of the most cited neuroscientists alive. He's been on this channel before, more times than I can count. And that childhood observation about wood lice never left him. He spent decades developing what he calls the free energy principle, which tries to explain all of behavior with one equation. Perception, action, learning, why you scratch your nose: all of it, Friston claims, comes down to minimizing a single mathematical quantity. There's an old physics joke: assume we can model a spherical cow in a vacuum. The joke is about how scientists grotesquely simplify messy reality in order to tame it. The free energy principle might be the ultimate spherical cow. It promises to explain self-organization, this bewilderingly complicated phenomenon, with something so emaciated we might as well call it tautological. Even Friston himself agrees with this, by the way. This is what he said to us the last time we spoke with him.
>> The free energy principle is not meant to be complicated or difficult to understand. It's actually, you know, almost logically simple. So the whole free energy principle is just basically a principle of least action pertaining to density dynamics: the dynamics, or the evolution, not of densities but of conditional densities. That's just it. This is before thermodynamics. It's before quantum mechanics. It's just about conditional probability distributions.
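As a toy illustration only (my own sketch, not Friston's formalism), here is the flavor of "minimizing a single quantity" in the simplest Gaussian case: a belief mu slides downhill on a free energy that is just a sum of precision-weighted squared prediction errors. Every number here is made up for the example.

```python
# Minimal sketch of free energy minimization (illustrative, not Friston's
# full formalism): one latent belief mu is updated by gradient descent on
# F(mu), the sum of two squared, precision-weighted prediction errors.

def free_energy(mu, y, v_p, var_y=1.0, var_p=1.0):
    """F(mu) = 0.5 * ((y - mu)^2 / var_y + (mu - v_p)^2 / var_p)"""
    return 0.5 * ((y - mu) ** 2 / var_y + (mu - v_p) ** 2 / var_p)

def minimize(y, v_p, lr=0.1, steps=200):
    mu = v_p  # start the belief at the prior expectation
    for _ in range(steps):
        # dF/dmu with unit variances: -(y - mu) + (mu - v_p)
        grad = -(y - mu) + (mu - v_p)
        mu -= lr * grad
    return mu

mu = minimize(y=2.0, v_p=0.0)
# With equal precisions the belief settles midway between prior and data.
print(round(mu, 3))  # prints 1.0
```

The point of the toy is only that perception here is cast as optimization: the belief moves until the two prediction errors balance.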
>> So what do we do with this? Has Friston actually found some deep truth about how minds work? Or is he doing what many scientists do, which is mistaking the simplification for the actual thing? Well, it turns out there's a philosopher who has spent an incredible amount of time thinking about this exact problem. Professor Mazviita Chirimuuta teaches at the University of Edinburgh. Her book, The Brain Abstracted, is basically about what happens when neuroscientists simplify brains in order to study them. What gets captured? What gets lost?
>> One of the answers that might seem obvious to people is that we pursue science because we're curious. We just want to know how the world works. We want to reveal, discover, the underlying principles of the universe, which apply in all cases. The alternative is switching off the idea that you're interested in nature for its own sake, out of curiosity, and saying, "Okay, how can we engineer these systems to actually do things that we want?" Getting them to behave in artificial ways. If those simplifications allow you to achieve your technological goals, there's no in-principle problem with oversimplification, if you're going to say, "I'm not interested in nature for its own sake; I just want applied science."
>> I should say, by the way, that The Brain Abstracted probably influenced my thinking more in 2025 than anything else. She's an inspirational lady. I look up to her very much, and certainly, thinking back on many of the episodes we've done in 2025, I can see her influence in the questions I ask and in how I think about things. So here's her starting point. Scientists have to simplify. We're limited creatures trying to wrap our heads around systems way more complex than we can actually comprehend. Our working memory holds maybe seven items. Our attention is more scattered than a group of toddlers with iPads. We die after 80 years if we're lucky. So we build models, right? We leave stuff out on purpose. We tell ourselves stories about how the world works. But the question is: why does any of this work at all?
>> Science is a humanistic endeavor, right? The purpose of science is to make the universe intelligible to us, not to control it, not to predict it, and not to exploit it. Now, you can do all those wonderful things if you like, but in the end, as far as I'm concerned, science is no different from poetry, in that we're trying to make sense of the world, trying to give it meaning in relation to our own existence.
>> If you'll allow the indulgence, I want to tell a little story. It's a boxing match. In the red corner: Simplicius. He thinks science works because the universe is actually simple underneath. Find an elegant equation and you've hit the real thing; simplicity tells you that you're on the right track. And in the blue corner: Ignorantio. He thinks we simplify because we're too dumb to do otherwise. Our models work well enough for our purposes, but they're approximations, just useful fictions, if you like. The map, not the territory. Now, both of them agree that scientists need to simplify, but where they disagree is on what that means about reality. Simplicius has history on his side, or at least a certain type of history. Galileo, Newton, Einstein: they all believed pretty explicitly that nature was fundamentally orderly, and that finding simple laws meant you'd found something true.
Einstein famously said, "God doesn't play dice." And no, he didn't actually think God had anything to do with it, but he was expressing faith that the universe is, at the very bottom, legible. Now, Chirimuuta has gone all-in on Ignorantio's position. She thinks successful science tells us we've become good at building useful simplifications, and that doesn't prove that nature is simple. The philosopher Nicholas of Cusa had a phrase for this attitude: docta ignorantia. Basically, learned ignorance. You study hard, you learn a lot, and what you learn includes what you don't know. Now, when we interviewed Chirimuuta, she had been following François Chollet's videos. For those of you who don't know, François is a friend of the channel. He's our mascot. He's one of my heroes. And he's got this idea called the kaleidoscope hypothesis, which is basically that the universe is made out of code, and underneath all of the apparent gnarly mess that we see, there is intrinsic underlying structure.
>> Everyone knows what a kaleidoscope is, right? It's this cardboard tube with a few bits of colored glass in it. These few bits of original information get mirrored and repeated and transformed, and they create this tremendous richness of complex patterns. You know, it's beautiful. The kaleidoscope hypothesis is the idea that the world in general, and any domain in particular, follows the same structure: it appears on the surface to be extremely rich and complex and infinitely novel with every passing moment, but in reality it is made from the repetition and composition of just a few atoms of meaning. A big part of intelligence is the process of mining your experience of the world to identify bits that are repeated, and to extract them, extract these unique atoms of meaning. When we extract them, we call them abstractions.
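To make "mining repeated atoms" concrete, here is a tiny sketch in the spirit of byte-pair encoding (my illustration, not Chollet's code): find the most frequent adjacent pair of symbols and merge it into a single reusable token. One round of this is one "atom of meaning" extracted from repetition.

```python
# One round of byte-pair-style merging: the most frequent adjacent pair
# becomes a reusable "atom". Toy data; purely illustrative.
from collections import Counter

def most_common_pair(tokens):
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge(tokens, pair):
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])  # fuse into a new atom
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("abababcababc")
pair = most_common_pair(tokens)  # ('a', 'b') repeats most often
print(pair, merge(tokens, pair)) # ('a', 'b') ['ab', 'ab', 'ab', 'c', 'ab', 'ab', 'c']
```

Iterating this merge is how subword tokenizers compress text; the kaleidoscope hypothesis says intelligence does something analogous to experience at large.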
Now, she's not saying that Chollet is wrong. She's saying that he's making a philosophical bet. Might be right, might be wrong. It's the same bet that Plato made.
>> Seeing that, as a philosopher, I thought: that's Plato. Because François says precisely that we have the world of appearance, which is complicated, looks intractable, is messy; but underlying it, real reality is neat, mathematical, decomposable.
>> Now, I feel like I should defend Chollet a little bit here, you know, because obviously we love Chollet. He's not making any weird metaphysical claims, at least I don't think he is. If scientific theories actually explained reality the way it is, you would expect fewer U-turns. Now, the biggest simplification of the 21st century, the final boss of simplifications, is this idea that the mind is a computer, or that the mind is running a software program. So we have inputs, we have processing, we have an output. This metaphor has become so established in the collective zeitgeist that no one even questions it anymore. It barely even registers in our brains as a metaphor. So isn't it a little bit weird that computation is an abstract formalism, like an automaton making state transitions, something completely non-physical, and we're describing the mind as if it is that abstract thing? That sounds a little bit weird. There are many movies about people uploading their minds into the matrix. Neuralink talks about interfacing with your brain's software. Joscha Bach thinks that consciousness is a software program running on your brain.
>> The claim is that this is universal: that you have these invariances in nature, that you can have patterns that have causal power, that have the ability to reproduce themselves, that have the ability to shape reality; invariances that you cannot explain more simply by looking at what atoms are doing in space. You have to look at these abstract patterns to make sense of them. Every other explanation is going to be more complicated, in the same way that money is going to be impossibly complicated if you try to reduce it to atoms. So you have to look at these causal invariances, and spirits are actually such causal invariances. They are actually disembodied, right? They're not bodies. They're not stuff in space. They're not mechanisms in the same way, but they are causal mechanisms, abstract mechanisms. And so we put the spirit back into nature using the concept of software. A lot of people think that's metaphorical, but I don't think it's metaphorical at all. It's the literal truth. Software is spirit.
>> We're all just talking about this stuff without even batting an eyelid. Like, where's the skepticism, man? It just sounds so plausible to us, so we assume that it just kind of has to be the case.
>> There is something super interesting about computers. What a computer ultimately is, is a causal insulator. The computer is a layer on which you can produce an arbitrary reality. For instance, the world of Minecraft. You can walk around in the world of Minecraft, and it's running very well on a Mac and it's running very well on a PC. And if you are inside of the world, you don't know what you're running on, right? It's not going to have any information about the nature of the CPU that it's on, the color of the casing of the computer, the voltage that the computer is running on, the place that the computer is standing in in the parent universe, right? Our universe. So the computer is insulating this world of Minecraft from our world. It makes it possible for an arbitrary world to be happening inside of this box. And our brain is also such a causal insulator.
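Bach's "causal insulator" point, that the running pattern is indifferent to its substrate, can be sketched in a few lines (a toy of mine, not his): the same tiny program runs on two differently built "machines", one storing state in a dict and one in a list, and yields the same result on both.

```python
# Toy substrate independence: one program, two physically different
# "machines". The abstract pattern's behavior is identical on both.

PROGRAM = [("inc", 0)] * 5 + [("copy", 0, 1)]  # bump cell 0 five times, copy to cell 1

class DictMachine:
    def __init__(self):
        self.mem = {0: 0, 1: 0}          # state lives in a hash table
    def run(self, prog):
        for op in prog:
            if op[0] == "inc":
                self.mem[op[1]] += 1
            elif op[0] == "copy":
                self.mem[op[2]] = self.mem[op[1]]
        return self.mem[1]

class ListMachine:
    def __init__(self):
        self.mem = [0, 0]                # state lives in a contiguous array
    def run(self, prog):
        for op in prog:
            if op[0] == "inc":
                self.mem[op[1]] += 1
            elif op[0] == "copy":
                self.mem[op[2]] = self.mem[op[1]]
        return self.mem[1]

print(DictMachine().run(PROGRAM), ListMachine().run(PROGRAM))  # prints 5 5
```

Nothing in PROGRAM can detect which machine ran it; that is the insulation Bach is pointing at, and exactly the sameness Chirimuuta would say is imposed by our description.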
It's possible for us to have thoughts that are independent of what happens around us, right? We can envision a future that is not much tainted by the present. We can remember a past that is independent from the present in which we are. And that's necessary for us. Our brain has evolved as such a causal insulator as well, to allow us to give rise to universes that are different from this one. For instance, future worlds, so we can plan for being in them.
>> Bach says that money is an example of a causal pattern. It's not the ink on a bank note. It's not the electrons in your bank's server. It persists across various physical instantiations: paper, coins, gold, digital ledgers. And yet, he says, money causally affects the world. It gets you fed. It starts wars. It builds cities. He says that software is the same. A program is an abstract pattern that can run on many types of chips, maybe even neurons. And that pattern has causal power because it controls whatever substrate it's running on. The same algorithm produces the same effects regardless of what physical stuff implements it. So the invariance, that sameness across substrates, is the causal mechanism, the pattern itself. At least according to Joscha. He even accepts that physics is causally closed. He says that the abstract description and the physical description are two ways of looking at the same causal structure.
Neither is reducible to the other. Both are real. But I'm pretty sure Chirimuuta would ask: who identifies that invariance? When we say the same algorithm runs on different chips, completely different things are actually physically happening, right? Different voltages, different electrons doing different things. The sameness is something that we impose. It exists in our description, not in nature. And as for the money example, money only works because of human interpretive practices, right? If you take away the humans and their agreements, it's just paper, and the causal power is actually in the social substrate that participates in it. Now, I think Joscha has taken a useful way of talking about complex systems and promoted it to metaphysics. And that's Simplicius all over again, right? Mistaking the elegance of our descriptions for the structure of reality itself. I mean, maybe information really is more fundamental than matter, but that's another philosophical wager. And we've made these bets many, many times before.
Just look at the history of all of this. Descartes thought that the nervous system worked like the hydraulic automata in the French royal gardens: fluids pumping through tubes, pushing levers. That was the high-tech metaphor of his day. Later, when scientists figured out that nerves carry electrical signals, the brain became a telegraph network. Then it was a telephone switchboard: signals traveling down wires, operators routing calls. And now, in our era, the brain is a computer.
>> To be precise about what we mean by "physical": everything has to be physical, because even GitHub, you know, has to store its data on some sort of hard drive or magnetic field or whatever technology. It's not storing it in nothingness. So knowledge, information, always has this form of physical embodiment.
I think we tend to think about it as non-physical because it is a thing that is not a thing, which is the same as temperature. You wake up, you look at your phone, you see the temperature, and you decide how you're going to dress, and nobody has any doubt that temperature is something that can be measured. But it took about 2,000 years for us as a species to figure out what temperature was and the fact that it could be measured. And there were two fundamental difficulties that, I would say, made it hard for us to understand temperature. The first one is that people initially thought that hot and cold were two separate things, so that temperature was like a mixture of the two. It's like when you make green out of blue and yellow. And it took a while for people to understand that cold was the absence of heat, and not that cold and heat were two different quantities that were tempered together, that were mixed.
So "temperature" actually means mixture, not what we now mean by temperature. The other thing that was very difficult to understand is that people thought that temperature was a thing, some sort of fluid that grabbed onto things. Let's say you had a steel rod that is hot: does that steel rod have this sort of invisible fluid that is heat? And they had good reasons to believe that it was an invisible fluid, because it could flow. You could connect that rod to something cold, and the cold thing would warm up, because that fluid was flowing in that direction, and so forth. So they thought that it had a physicality as a thing. A brilliant Englishman, Joule, basically figures out that that is not the case, that temperature is not a thing. And the way they do it is through this observation. I don't know if you know how cannons used to be built. If you just grab a piece of sheet metal, make it into a cylinder, and try to make a cannon out of that, the moment you shoot the cannon, it's going to open up like a flower in a cartoon, like a Looney Tunes type of situation. So what they would do is make these solid cylinders of metal and bore a hole in them to create the cannons, and boring those holes released an enormous amount of heat. So Joule thought: how come all of that heat is there? It's like an infinite amount of heat. If I continue to bore a hole in a piece of metal for an infinite amount of time, I keep producing heat, so it cannot be a thing. And that leads him to realize that temperature is actually something that has to live in things, but it's not a thing itself. It's related to the kinetic energy of the particles in the thing, but it's not a thing itself. It doesn't have its own particle. There isn't a temperature particle. Temperature is a property that matter has, and that holds on to things.
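The kinetic-theory resolution the speaker is describing can be written in one line: for an ideal monatomic gas, temperature simply tracks the average kinetic energy of the particles, with $k_B$ the Boltzmann constant.

```latex
\left\langle \tfrac{1}{2} m v^{2} \right\rangle \;=\; \tfrac{3}{2}\, k_{B}\, T
```

There is no heat fluid on the left-hand side, just motion; temperature is a property of the stuff, not stuff itself.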
Knowledge is similar, you know, in that it holds on to you and to me, and to the collective, in order to exist. But it doesn't have a physicality in itself; it always exists in some sort of physical medium or substrate. So in that sense, it's always going to be physical, no matter how virtual it gets. It has maybe a different type of physicality, but even the electromagnetic waves transmitting data from your Wi-Fi router to your laptop are technically a physical embodiment.
>> Now, I spoke with Professor Luciano Floridi a few years ago, and it was actually one of my favorite ever episodes of MLST. I think very highly of him, which is why we're going to show some clips of him in this show, because it's very apropos. But this is what he had to say about it.
>> Ontology, on the other hand, is how we structure the world, in the sense that we think that that's the way it is. With the kind of eyes we have, and the kind of light around the world, those are the colors we perceive. And certainly a world full of colors is the world which I take to be the world. That's my ontology. Reontologizing means changing some of that particular nature. Allow me a distinction; I hope it's not too confusing. Reality in itself: call it the system. The description of reality as we perceive it, enjoy it, conceptualize it, live through it: call that the model of the system. Ontology, to me, is the ontology of the model; it is not the metaphysics of the system. I hope I haven't made a complete mess here. Okay. So, metaphysics concerns the system, whatever the source of the data that we get. Fantastic. The data don't speak about the source: the music from the radio is not about the radio, but there is a radio, of course. The music is what we perceive; the music has its own ontology, structure, etc. The model is, at that point, what we enjoy. The digital revolution has changed the nature of the world around us, not metaphysically but ontologically; it is reontologizing, because some of the things that we have inherited from modernity, a sense of the world, are now being restructured, along with a certain understanding of the world. So it is re-epistemologizing that world as well.
We go back to this temptation of talking about reality as if it were something that we need to grasp, catch, portray, hook with spears, when in fact the way I prefer to understand it is as malleable, understandable in a variety of ways, something that provides constraints. That doesn't mean you can interpret it in any possible way, but it leaves room for different kinds of interpretations. So the flow of data that comes from whatever is out there (and I'd rather stay agnostic about it) can be modeled in a variety of ways. One way, especially in the 21st century, given the technology we have, is to interpret it as an enormous computational environment. That's perfectly fine, as long as we don't think that this is the right metaphysics, or the correct ontology, for the 21st century. Now, this is not relativism, because, on the other hand, different models of the same system are comparable, depending on why you're developing that particular model. Let me give you a completely trivial example. Suppose you ask me whether that building is the same building. That question has no real answer, because it depends on why you're asking. If you're asking because you want directions, I'm going to say, "Oh yeah, that's the same building. Go there, turn left." But if your question is about function, then no, it's a completely different building: it was a school, now it's a hospital.
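Floridi's "level of abstraction" is borrowed from computer science, and his building example fits in a few lines (my toy encoding, not his): the answer is a function of the purpose, so without a purpose there is no answer at all.

```python
# Toy encoding of Floridi's relational answer: "is it the same building?"
# only resolves once a level of abstraction (a purpose) has been chosen.

def same_building(purpose):
    answers = {
        "directions": True,   # for navigation: yes, go there, turn left
        "function": False,    # it was a school, now it's a hospital
    }
    if purpose not in answers:
        raise ValueError("no purpose, no level of abstraction, no answer")
    return answers[purpose]

print(same_building("directions"), same_building("function"))  # prints True False
```

The absolute question corresponds to calling the function with no purpose, which is exactly the case the code refuses to answer.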
Next question: so is it, or is it not, the same? That question is the mistake. It is an absolute question that provides no interface, what computer scientists call a level of abstraction, chosen for one particular purpose, so that I can compare whether one answer is better than another. Let me crack a joke for the philosophers who might be listening. The ship: is it the same or is it not? Who is asking, and why? Because if it is the tax man, you're doomed, man. There is no way you can play the "oh, I changed every plank" card; you're going to pay the tax. It's the same ship, I don't care. But if it is a collector, that ship is worth zero. You changed all the planks? You must be joking; it's worthless. So, is it or is it not the same? It depends on why you are asking that particular question. Tell me why, and I can give you the answer. No "why", in other words no frame within which we have chosen the interface that provides the model of the system, means no potential answer. So a question like "is the universe a gigantic computer, yes or no?" is meaningless.
Is it worth modeling the universe as a gigantic computer for the purpose of making sense of our digital life? Oh yes, definitely, because we are informational organisms. Aha, so that's the metaphysics? No. I meant that in the 21st century, the best way of understanding human beings is as informational organisms. Last century, we thought a biological account made much more sense: a lot of water and a sprinkle of a little bit extra, and so on. Mechanism, time, etc. Not absolute answers, not relativistic answers, but relational answers: the relation between the question, the purpose, and the actual answer. It takes three, not two.
>> So the computational model isn't literally true, but it's useful. The mistake is forgetting that it's a model. The early cybernetics guys, Warren McCulloch and Walter Pitts, knew that they were working with analogies. McCulloch and Pitts wrote their famous paper showing that neurons could theoretically work like logic gates. They weren't claiming neurons actually were logic gates; they were using it as a kind of functional description. But somewhere along the way, the metaphor hardened. A lot of neuroscientists today don't say that the brain is like a computer. They say it is one, and the metaphor became the thing itself. Now, Chirimuuta, borrowing from Whitehead by the way, said that this is the fallacy of misplaced concreteness. This is another one of those leaky abstractions I was talking about.
>> By the way, there's a great book called The Brain Abstracted by Mazviita Chirimuuta. I interviewed her recently, and she said that one of the most pervasive patterns in neuroscience is that we use these leaky abstractions and idealizations to talk about cognition, and usually it's in terms of the most recent technology of the day. So, you know, a few hundred years ago we were describing the brain in terms of pulleys...
>> Pulleys and levers, yes.
>> That's right. And then it was, you know, a prediction machine, a computer, and all this kind of stuff.
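The McCulloch-Pitts result mentioned above, that threshold units can implement logic gates, is small enough to show directly. This is a generic sketch of the idea, not their 1943 notation: binary inputs, fixed weights, a hard threshold.

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, hard threshold.
# It shows that such units CAN implement logic gates, which is the
# functional analogy, not a claim about how real neurons work.

def mp_neuron(inputs, weights, threshold):
    # Fire (output 1) iff the weighted input sum reaches the threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def AND(a, b):
    return mp_neuron((a, b), (1, 1), threshold=2)

def OR(a, b):
    return mp_neuron((a, b), (1, 1), threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Chaining such units gives arbitrary Boolean circuits, which is why the paper mattered; the leap from "can implement" to "is" is exactly where the metaphor hardened.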
>> And these are grounded things that we understand. They're really good models because we can both talk about computers; we both know what computers are. But the brain doesn't work like that in any sense.
>> Jeff Beck put it even more bluntly when we spoke.
>> It will always be the case that our explanation for how the brain works will be by analogy to the most sophisticated technology that we have. How's that for a non-answer, right? So, you know, a couple of thousand years ago, how did the brain work? It was like levers and pulleys, man. I mean, duh. Don't be ridiculous. Why? Because that was the technology. At some point in the Middle Ages, it became humors, right? Because fluid dynamics, technology that took advantage of water power, was the most advanced technology that we had. Now the most advanced technology is computers. So, duh, that's exactly how the brain works.
>> Now, here's something that kind of bugs me, right? You go into any AI conference, or you drink from the well of San Francisco by spending too much time on Twitter, and you develop this mindset that AGI is inevitable. You start "feeling the AGI." And you'd be forgiven for thinking this, because I've been using Claude Code, and my god, I feel that there's been more interesting stuff happening in the world of software development in the last six months than there has been in the previous 20 years. This technology is genuinely amazing, but it is automation technology. It's not really intelligence, which means it's only really as good as your ability to specify, supervise, and delegate to the system. But it is absolutely amazing. So why do we have this view?
>> It's not an argument that AI is impossible, so much as: why does it seem so possible, so inevitable, to people? What I'm arguing is that if you look at the history of the development of the life sciences and of psychology, there are certain shifts towards a much more mechanistic understanding of both what life is and what the mind is, which are very congenial to thinking that whatever is going on in animals like us, in terms of the processes which lead to cognition, they're just mechanisms anyway. So why couldn't you put them into an actual machine, and have that actual machine do what we do? With all of that mechanistic history in the background, AI could seem very inevitable. But if that mechanistic hypothesis is actually wrong, then these claims for the inevitability of a biological-like AI would not actually be well-founded, and we could be subject to a kind of cultural-historical illusion that this is just going to happen.
>> A cultural-historical illusion. I've been thinking about that phrase. Maybe our confidence says more about what we've inherited intellectually than about how minds actually work. Now, another thing that Mazviita has inspired me to think about a lot is the difference between prediction and understanding. Indeed, when I interviewed the Nobel Prize winner John Jumper at Google DeepMind a couple of months ago, this was the question I asked, and he had quite an interesting way of distinguishing those two things.
>> It's almost like it's, at any point, learning how to refine and optimize the structure.
>> Okay, so I think we should distinguish three things: predict, control, understand. Predict means that you say, I'm going to do a thing; what will be the value on my machine? What will appear on my computer screen in the future? That is predict. Control is: I want to measure this thing in the future, and I want it to come out 17, right? That's control. Understand is a lot like predict, except there's a human in the loop. Understand means that I have such a small collection of facts that you will predict, and you will do it with facts that I can communicate to another human, in a compact form that fits on an index card. That's almost understand. And so I think these machines let us predict, and they let us control; we have to derive our own understanding at this moment. We can experiment now on the artifact. We can look at the 200 million predicted structures, not just the 200,000 experimental structures, in order to help us understand. But the machine doesn't do the act of understanding for us. It does the act of predict, and maybe control.
>> The problem is that these two goals actually pull against each other.
>> I think we're at this moment in science now because we have these tools, like LLMs for language; and convnets in visual neuroscience are being used as predictive models of neuronal responses, which don't have that mathematical legibility that people aspired to when I was trained in the field. And so you have this possible conflict: you can either pursue the goal of understanding, or you can pursue the goal of prediction. But it seems like you can't have both at the same time.
>> Now, on the one hand, people go into neuroscience because they want to understand the mind. They want that feeling where something clicks and you suddenly get how it works. That's what drew Chirimuuta to the field in the first place. That's what keeps people up late at night reading papers. But on the other hand, there's just prediction: building tools that work. If your model forecasts data accurately, maybe you don't care whether it's true in some deeper sense. So, LLMs are getting unreasonably good. They are winning math Olympiads. As of last week, actually, GPT-5.2 apparently solved one of the problems that Terence Tao had on his website. This is insane. But does it actually understand anything? And does it matter whether it does or doesn't, as long as it works? Chomsky had an amazing commentary on this a few years ago when we spoke, and I think it's still as relevant today as it was then.
>> Suppose that I submitted an article to a physics journal saying, "I've got a fantastic new theory. It accommodates all the laws of nature: the ones that are known, and the ones that have yet to be discovered. And it's such an elegant theory that I can say it in two words: anything goes." Okay, that includes all the laws of nature, the ones we know, the ones we don't know yet, everything. What's the problem? The problem is they're not going to accept the paper. Because when you have a theory, there are two kinds of questions you have to ask. Why are things this way? Why are things not that way? If you don't answer the second question, you've done nothing. GPT-3 has done nothing.
>> Classic Chomsky. So maybe theories are overrated; maybe prediction is enough. But Chirimuuta worries about that trade-off, right? When you give up on understanding, you don't know when your tools will break. You're stuck with black boxes. They work until they don't, and you won't see it coming when they don't. I spoke with the philosopher Anna Ciaunica about this recently, and she had a beautiful way of describing it.
>> Suppose you want to climb a mountain, and you arrive at the top of the mountain. What's the argument to say that it's only when you're at the top of the mountain that you know what climbing the mountain is? I mean, you cannot really arrive at the top of the mountain if you don't take the first step. Every single step matters. The first step is as important as the last one. Actually, we are more conscious when we take the first steps in climbing the mountain than when we are at the top with all these full-blown capacities, and sometimes we shoot ourselves in the leg.
And of course, I brought this up when I
debated Mike Israel. And the biggest
misconception in all of AI, what all of
the folks in San Francisco believe in is
this philosophical idea called
functionalism. That we're walking up the
mountain and when we get to the top of
the mountain, we have all of these
abstract capabilities like being able to
reason, play chess. But that disregards that the path you took walking up the mountain is very important, and not only the path: the physical instantiation, the stuff that the mountain is made out of. So Mike's view
is that if something produces
intelligent outputs, why does the
substrate matter? Silicon neurons, it
doesn't make any difference. It's all
information processing. Needless to say, he pushed back hard.
>> You can climb mountains. You can touch stuff. But you never truly, in an embodied way, experience anything if you push on that philosophical button hard enough, because you can always abstract out: these are just neural network pings from groups of neurons. And so you don't truly, deeply know anything in some kind of weird philosophical way, because it's just neural network calculus all the way down. You know, you climb the mountain, that's cool. A helicopter can climb the mountain much better than you. It does not have the ability to reason abstractly and plan and predict things at all.
>> So, it's possible that what you can do
or how you can function isn't the whole
story. Or maybe if that's wrong, we
should just start using helicopters. So,
[snorts] individual minds are limited.
But what about collective minds? What
about humanity as a whole? We've built
this incredible thing over centuries,
right? Libraries, universities,
Wikipedia, an expanding [music] store of
knowledge that no single person could
ever hold. Doesn't that escape our
individual limitations? So there's this
dream of universal knowledge accessible
anywhere perspective free. There is a
tacit and implicit idea there that
knowledge is something that something
can have while my view is that knowledge
is a much more collective phenomenon.
Okay. And it's not something that you can put in something like a book. In my opinion, the book doesn't
have knowledge. The book is an archival
record of some ideas that I was able,
you know, to put together in a nice
structure. But you cannot have a
conversation with the book. Knowledge
only can go to work when it's embodied.
You cannot throw like, you know, a bunch
of engineering manuals and cement into a
gorge and expect to get a bridge because
the books don't have knowledge. Teams
have knowledge. Organizations have
knowledge. Yes, knowledge is social.
Communities accomplish what individuals
can't. But collective knowledge is still
knowledge from somewhere. This matters,
right? It's shaped by particular
questions, particular tools, and
particular blind spots.
>> I think one of the interesting things
about this phenomenon, not only of LLMs,
but the internet as this idea that it's
the repository of all human knowledge is
that it goes along with this idea almost
that knowledge doesn't have to be
perspectival. It doesn't have to be like
of a place of a community. It kind of
can float free of the situation in which
this knowledge was acquired. That's kind
of the aspiration of these ideas: sort of a universal repository of knowledge.
But what this perspectivalist position
actually sort of points us to is
actually knowledge is inherently
of a place um of a community. We acquire
knowledge not by being like completely
open-minded to everything that's
possible to know, but actually by sort
of narrowing our view, discounting
possibilities actually is what allows
you to pursue a line of inquiry and
actually pin down um some information
about say the natural world which is
humanly achievable. So the contrast I'm
trying to make here is between a view
which says that knowledge is
perspectival. It's inherently from a
human point of view which means that
it's inherently finite. We cannot aspire
to this sort of universal free floating
knowledge because as finite human beings
we can only achieve knowledge of the
world through recognizing our
limitations. And this notion of like you
can have non-perspectival knowledge like
everything in the internet based on like
all of the different possible
perspectives all blended together that
this somehow gives us a god's eye view.
LLMs aspire to be this like every person
voice, but it's precisely because they
don't have a particular socialization
into a finite community that they're not
reliable that we can't pin them down to
actually um what would be a sort of
honest trustworthy perspective.
>> So Chirimuuta has this idea that she
[music] calls haptic realism. Most of
the philosophy of science treats
knowledge like vision. You stand back
and you observe reality from a distance.
She thinks it's more like touch.
>> We just look around. We absorb how
things are. Our knowledge is sort of
entirely objective. It's almost like a
god's eye view on reality. But if you
think that scientific knowledge in
particular is more kind of touchlike,
you can't ignore the fact that we um
sort of run into things. We have to pick
things up, engage with them, ultimately
change them in order for us to acquire
knowledge of them. So, you cannot
discount the fact that we're kind of
meddling with things in the process of
um bringing about our our knowledge.
>> Neuroscientists are more than passive
observers of brains. They poke them,
they prod them, they stimulate them,
they model them, and in doing that, they
change what they find. The patterns that
emerge are real, but they're also
partially created by the process of investigation itself. The free energy principle takes all the messiness of biological cognition and reduces it to one imperative: minimize
free energy. Everything else supposedly
follows from that. Now, Simplicius loves
this. I mean, finally, the simple truth,
the one principle to explain it all. But
Ignorantio says, "Wait a minute. The
math is elegant. The framework is
unified, but does that mean it's
captured what brains actually are? Or
did we just build another beautiful simplification and then forget that it was a simplification?" So,
Chirimuuta said to me that we should ask
different questions, right? Not is this
true, but what does this help us do?
What does this light up? What does it
leave in the darkness? And the other
thing, of course, is that we are finite
biological creatures, right? We there
are limits to our cognition and Chomsky
spoke about this fascinating concept of
a cognitive horizon when we chatted with him.
>> If we are organic creatures, we're going
to be like other organic creatures and
that there are bounds to our cognitive
capacities. So, for example, a rat can
be trained to run pretty complicated
mazes, but it can't be trained to learn
a prime number maze. Turn right at every
prime number. It just doesn't have the
concept. And no matter how much training
you do, you're not going to get
anywhere. Well, I suspect there's
reasons to suppose we're like rats. We
have capacities. We have a nature. We
have a structure. They yield all sorts
of extensive range of things that we can
do, but they probably impose limits. And
I think we could even make some guess
about what these limits are.
>> So our best theories bump up against the walls of our cognitive horizon, the limits of our cognition. And maybe that's fine. But even knowledge of where the walls are is useful in and of itself. Science makes things simple, and that's not a flaw,
right? Without simplification, we'd have
nothing. You can't study everything at
once. But simplification has risks,
right? You forget your model is a model.
You mistake elegance for truth. And you
think you found solid ground when really
you're just building another floor. So
look at Opus 4.5, right? Foundation
models today. They are artifacts of
staggering complexity. We've trained
them on everything humans have ever
written. We treat their outputs like
they came from somewhere authoritative,
somewhere outside of us, somewhere that
knows, but the knowing was ours all
along, right? Just compressed,
refracted, reflected back to us from the
silicon. Whether that reflection
captures the actual thing, that is a
question that we're barely starting to
ask. You can use powerful frameworks
like the free energy principle, but just
remember, they're frameworks, right?
They're tools for building. They're not
the final word. So the brain is not a
hydraulic pump. It's not a computer.
It's not a telephone network. It's
probably not a free energy minimizer
either. I mean, at least not in some
like literal way. What the brain actually is, we will only ever catch glimpses of, right? That is, through our
limited instruments and theories. And
that's okay because that's what it means
to be finite. So Chirimuuta had this amazing example from Greek mythology, the sea god Proteus, right? He would shapeshift to avoid capture, but if you could pin him down, he'd have to answer your question truthfully. If you let go and you let him get away, then he would shapeshift and shapeshift again. Nature is
like that, right? You can pin it down,
you can ask questions, but it's always
perspectival. As soon as you let go,
there's always a myriad of other
perspectives that can be taken on reality. Karl Friston's woodlice, they were doing something very similar, right? Slowing down in the sun, moving faster in the shade. But Friston isn't a
woodlouse and neither are you.