The Future Of Brain-Computer Interfaces

Transcript

I think it is very possible that the first people to live to a thousand are alive right now. It still takes some suspension of disbelief, because biotech has just been so incremental. One of the things that's so exciting about what's happening now is that it no longer really feels so incremental to me. I think we're going to come to see that BCI is not a specific product; there are going to be a bunch of BCI companies going after different applications where different types of probes will make sense. To me, it feels like we're firmly in the takeoff era now. Like something new has happened on Earth.

>> Welcome back to another episode of How to Build the Future. Today we've got a real treat: Max Hodak, the co-founder of Neuralink and also founder of Science, one of the most exciting BCI (brain-computer interface) companies that we've ever seen. Max, welcome to How to Build the Future.

>> Thanks for having me.
>> So Science recently announced that more than 40 people have received one of your first BCI treatments, which gives people their sight back. What is that? What's happening?

>> So we finished a big clinical trial last year, which was published in the New England Journal of Medicine in the fall. It's a tiny little 2mm x 2mm silicon chip that's implanted in the back of the eye, under the retina. It's this tiny little array of essentially solar panels. The patients wear glasses that have a camera that looks out at the world, and then a laser projector that projects an image into the eye. And wherever the laser hits the implant, the solar panel absorbs the light, and that excites the cells directly above it. It's a retinal stimulator, and this allows us to bypass the dead rods and cones, the cells that normally make the eye light-sensitive, to get a visual signal back into the retina for people who've gone blind because they've lost the rods and cones. And so, yeah, there was a big clinical trial in Europe across 17 sites, and it was a huge effect. And so we are submitting for approval now. It's not approved or on the market yet; we hope to have that later this year.

>> For those watching who have never heard of a brain-computer interface: what is it? And what have people been able to do? What are they able to do now?

>> So the brain is this powerful computer, but it's encased in the skull. It is not magically connected to things. It has this handful of connections to the world, and these give you the senses that you know and the motor control that you know. And you can ask: do we want to replace these with something else? So, for example, the simulated-reality or Matrix use case. The other is restoring lost functionality, and this is how they're deployed today. So if someone has gone blind, you can restore the ability to see. If they've gone deaf, you can restore the ability to hear. If they're paralyzed, you can restore the ability to move. And then you can think about structural neural engineering. This is the thing that we haven't really gotten to as a field as much. But looking at how the brain processes information: can you add new brain areas? Are there ways to understand what is going on, either to use this to build smarter machines, or to think about how to treat things like depression or addiction?
>> I'm taken by the degree to which, right now, it's about taking someone who has a condition or a disease and restoring them to capability, right? I think that's playing out in AI right now as well. You had computers that had no ability to do any sort of pure cognition, no neurons, and then suddenly a bunch of neurons, and AGI is sort of what a human can do, sort of a restoration of capability. And then of course there's this other thing after that, which is ASI, superintelligence. Do you ever think about what that might be down the road? What is that for BCI?

>> There are many types of BCIs, so it really is going to be a category, like pharma; it's not one product. I don't think there's going to be "the BCI" that people get, and there are different modalities that will work for different things. So, for example, I don't work on ultrasound, but one of the things I think will be possible with ultrasound is like a digital Ambien, or a digital Adderall. Can you stimulate part of the brain to cause focus or sleep? It would not surprise me if things like that were possible, and I could see that almost as being more of a consumer application, and it hopefully won't require brain surgery. Right now the high-quality ultrasound stuff does require drilling through the skull, but I think that will be overcome. For the implantable BCIs, I mean, this is a very serious brain surgery. I think that's important to appreciate. So when you think about how you actually get this into humans and who's going to use it: these are going to be very disabled patient populations. You always look at risk-reward, so you start at the most disabled patients, where you get the most benefit for even relatively basic functionality. Like, I don't think that you or I would want to get one of the cortical motor decoders that you might have seen out there today, because the reality is that a keyboard and mouse is great. It is much higher performance. Spoken word is like 40 bits per second; many people can type in the 20-ish bits per second range; and so the 10-bit-per-second cortical motor decode is not going to make your life better. I wouldn't get serious brain surgery for that.
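To make those rates concrete, here is a quick back-of-the-envelope comparison in Python. The channel rates are the ones quoted above; the only added assumption is roughly one bit of information per character of English text, a common ballpark from Shannon's estimates.

```python
# Rough time to convey a short, tweet-length message over each channel,
# using the rates quoted above. Assumes ~1 bit per character of English,
# a ballpark figure (Shannon estimated roughly 0.6-1.3 bits/char).
message_chars = 280
bits = message_chars * 1.0

for channel, rate_bps in [("spoken word", 40), ("typing", 20), ("cortical motor decoder", 10)]:
    print(f"{channel:>22}: {bits / rate_bps:5.1f} s")

# spoken word: 7.0 s, typing: 14.0 s, decoder: 28.0 s.
# The decoder is the slowest channel, which is why the surgery isn't
# worth it for someone who can already talk or type.
```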

Now, as it gets more powerful, and as we are able to access richer representations from more of the brain, especially bidirectionally, then you'll start to see the risk-benefit change. My view on this is not that I think healthy 30-year-olds are going to be getting these soon, but eventually many people become patients. Aging is like the correlate of everything getting worse, and so there's some critical age where it crosses over, where it makes sense to have something that will restore some functionality that you had. And then eventually that will cross the origin, and you'll see people who had something terrible happen to them who now have a capability that you're jealous of, and that will be when you start to see it changing.
>> Talk to me about people who maybe never had sight. Why was the optic nerve not actually set up for that? Is that not something that you can do later? How does plasticity fit in? Do you have to get BCIs when you're incredibly young, while the brain is still plastic? How does all this come together?

>> Neuroplasticity is really interesting and really misunderstood. There are genuine critical periods in early development; if you miss them, there are some things that will be very hard to wire up later. There actually are some cases of patients who were born blind, but it wasn't a loss of the optic nerve, and it wasn't something in the brain: they had congenital cataracts. So their vision was blurry from birth and they were never able to really form images, and then they had this fixed as adults, and that did not work. Their brains could not make sense of the information. It was totally overwhelming. They would wear eye patches. Several of them committed suicide. So there are clear critical periods in early development where, if you miss them, some things are not going to work. With that said, the brain stays way more plastic throughout life and adulthood than I think is widely appreciated.

>> That's a relief.
>> Yeah. If I put an electrode almost anywhere in your brain, and then wake you up during surgery, and I show you a light that flashes proportionally to how much that neuron is firing, then almost anywhere in cortex, within a couple of minutes, you can learn to control that neuron. So the brain is very plastic under feedback, and this is partly how the cortical motor decoders work. Some of it is that you're decoding what the brain was originally representing, in terms of, say, a hand or an arm representation. But also, if you're getting these signals out of the brain and you're giving the patient feedback on what those signals are doing, then the brain adapts to you. In the first experiments for this, they actually didn't fit anything at all. They just took two neurons, or a handful of neurons, and fixed the weights. So it said: when this neuron fires more, we're going to go up the screen; when this neuron fires more, we're going to go down the screen, and sideways. They fixed the weights and let the brain figure it out. Let the brain learn. And again, the brain is very plastic under feedback and can do this.
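A minimal sketch of that fixed-weight scheme, with made-up firing rates standing in for recorded neurons; nothing here is from the actual experiments, but it shows the key design choice: the decoder never updates, so all the learning has to happen in the brain.

```python
import numpy as np

# Fixed decoder: weights chosen once and never refit.
# Neuron 0 drives the cursor vertically; neuron 1 drives it sideways.
W = np.array([[0.0, 1.0],   # x-velocity <- neuron 1's firing rate
              [1.0, 0.0]])  # y-velocity <- neuron 0's firing rate

def decode(rates_hz):
    """Map firing rates (Hz) to cursor velocity; W is frozen forever."""
    return W @ rates_hz

# Hypothetical trial: neuron 0 fires at 30 Hz, neuron 1 at 5 Hz.
print(decode(np.array([30.0, 5.0])))  # -> [ 5. 30.]: mostly vertical motion

# Under visual feedback, the subject's brain adjusts the rates themselves
# (the inputs to decode), not W, until the cursor goes where they intend.
```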

>> A powerful moment. We have two learning systems that can learn off of one another, instead of a fixed one with if-statements on one side.

>> Totally. Yeah. If you give the cortex information, it is really good at extracting the meaning. Now, in adulthood, I think one of the reasons that you don't see it as being so plastic is because it has already fit well to reality. Think of it as an energy surface, where the space of brain states has hills and valleys. During normal development, for most people, there's this enormous basin in the energy surface. You descend into this basin during development, and then you're down there and it's stable, because you've fit to reality, and if I show you weird movies, it's not going to really push you out of that. One of the theories of what psychedelics do is that they kind of anneal it: they shrink the surface a little bit, so you can access these other states, but then when it wears off, you just immediately descend back down into the energy well that the brain had fit to. And so even though the brain is still plastic, it is in this stable part of the attractor system, so you don't see the plasticity as much.
>> This was selected for.
>> And this was absolutely selected for, yeah. And so there's this tension: there absolutely is ongoing plasticity. If there wasn't plasticity, you couldn't learn things. Your ability to learn new stuff and to have memory depends on it; all memory is brain plasticity, in many ways. And so we are constantly experiencing very dramatic plasticity. But there are also clear limits to it, especially in how the modules of the different brain areas end up interconnected past these critical periods.
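The energy-surface picture can be made concrete with a toy model; this is purely illustrative, not anything from the research discussed here. A state point does noisy gradient descent on a double-well surface: with no noise it stays in the basin it developed into, and with added "annealing" noise it can visit the other basin before settling again.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Gradient of the double-well energy E(x) = (x**2 - 1)**2,
    # with stable basins ("fitted brain states") at x = -1 and x = +1.
    return 4.0 * x * (x**2 - 1.0)

def run(x, noise, steps=5000, lr=0.01):
    """Noisy gradient descent; returns the fraction of time spent near +1."""
    near_plus = 0
    for _ in range(steps):
        x += -lr * grad(x) + noise * rng.normal()
        near_plus += x > 0
    return near_plus / steps

print(run(1.0, noise=0.0))   # 1.0: stays in its basin forever
print(run(1.0, noise=0.25))  # well below 1.0: noise lets it hop basins,
                             # but remove the noise and it re-settles
```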

>> I have, like, a million questions, honestly. One of the things that I'm super curious about is: what is the qualia of the person who has PRIMA? And I'd be curious, with the biohybrid approach, what does it feel like? Is it like having a second screen? Is there an input or output? I'm very curious.
>> Yeah. So for PRIMA, actually, on the topic of plasticity: in the time that the patients are blind, the brain wants to see. Again, the thing you experience is this world model constructed by the brain; it is this generative model that is conjuring your reality. And so when it's not getting input from the optic nerve, it is still trying to see things, so it kind of turns up the noise. Blind patients often report hallucinations and these internally generated percepts. When you first turn on the implant in these patients, when you hit it with the laser, they'll say, "Oh, I see a flash." But then you can do a thing where you turn on the laser, they see a flash, and you play a tone. You do this a couple of times, and then you don't turn on the laser, but you play the tone, and they say, "I see the flash." And so for the first couple of hours of rehab, they kind of just have to learn to dissociate the real percepts from the phantom percepts, because the brain has turned up the gain, turned down its noise floor, so much that learning how to discriminate real information coming in from the optic nerve takes a little bit of rehab. The qualia of PRIMA is normal sight. It's black and white, and it's a small field of view, but it's vision. The deeper question is: what is the qualia of a brain-to-brain, ultra-high-bandwidth, biohybrid neural interface? And that is just impossible to imagine. Those devices will get built, and we're going to find out, but there are some natural case studies. There's a pair of conjoined twins in Canada where it's really like one head with four hemispheres. And what's really interesting is that the two hemispheres of each of the twins' brains are connected normally, but they're not connected with each other, except for this one cable connecting the thalami, from thalamus to thalamus. There's this big biological cable that you can see on an MRI. And over this they can share meaningful elements of their conscious experience. And one of the open questions that hasn't really been studied in the depth that I would love to see: they can see, to some degree, through each other's eyes, but does this show up as a new visual field? How do those get experienced directly? Most people already have two image modes: you've got your eyes-open vision, but you also have imagination. Some people are aphantasic and don't have internal imagery, but most people have kind of two image modes. Do the twins have three image modes, or four image modes? Or, with internal monologue: they each seem to individually have internal monologue, but they also can clearly communicate over this channel, because they've done tasks where they can coordinate, without saying anything, to do stuff.
>> And they're conscious of it.
>> And they're conscious of it, and they also don't confuse it for each other. It's not like a schizophrenic, where it's, "Oh, I'm hearing voices," and it's internally generated but misattributed monologue. That doesn't happen to them; they can tell it apart. But they're experiencing it directly in some way. And so there's a question: when you look at that cable, are they sending information in the classical way, or is there an effect of phenomenal binding happening over this cable, where it's more like the two hemispheres of your brain that are bound together into one moment? And so there are these natural case studies that tell us that some really interesting things might be possible here, but it's kind of tough to imagine what it would feel like.

>> Paint the picture for us. You're here, everything goes really, really well. Where are we in 5 to 10 years with this technology?

>> I mean, I do think that you can get close to native acuity, kind of like your normal 20/20 vision. We're definitely not there yet, but I see a path to get there, and to be able to get color and fill in a lot of the field of view. To be clear, that is not where we are right now, but in the next 10 years I think that's possible. But beyond that, I'd say that our worldview, or my worldview, the motivating idea behind the company, is that you can contrast a drug-discovery approach to medicine with a neural-engineering approach to medicine. This is much broader than the retinal prosthesis. We started with that because it's a huge unmet need, and I think it's the most valuable BCI product on the horizon that I thought was doable now. Humanity just isn't very good at drug discovery. Every now and then you find a thing and it's amazing, like you find a GLP-1; there's a handful of drugs that we were lucky to find. But it's much more common that you spend a decade going down a path, and then at the end you run a study and the answer is no, and then where do you go from there? There's been a huge amount of work that's gone into finding drugs to stop blindness from getting worse, or to reverse it and restore vision, to basically no effect. There's a million-dollar-per-patient gene therapy that has a really marginal benefit, if any, for a very small percentage of patients in the first place. And with our retinal prosthesis, what we saw in the trial was that we can take a patient who's been unable to see faces for a decade and allow them to read every letter on an eye chart. And so not only is the brain the only organ that, in some deep sense, really matters; we are also just empirically much better at engineering it. And so I think this allows a really fundamental reframing of medicine. Over the next decade, beyond that: people see, hear, have balance, have a kilobit per second of motor control, like you and I do. We have cochlear implants; we know how to do motor decoding; the thing we didn't know how to do was restore vision, and we are making real progress on that. I think all of this adds up to something that speaks to the foundations, this paradigm shift in what's possible in healthcare.

>> Something like this, I remember reading about maybe 10, maybe even 20 years ago: they were able to stimulate the optic nerve with electricity directly, but it was very, very low resolution, and it was so invasive that it could probably only be done in a clinical setting or a surgical setting.

>> It's relatively easy to get flashes of light, to cause a patient to see these flashes; we call these phosphenes. There was a company a decade ago called Second Sight that had an electrical stimulator implanted in the eye. It was a four-and-a-half-hour surgery, with a titanium box on the side of the eye. It stimulated a different layer of cells than we do, and they were able to get these flashes where, if a patient looked at something, they could say: there's some flashes here, there's some flashes here, they're connected, that's an A; and, for the next letter, there's some here, there's some here, it's an H. But the brain doesn't assemble these flashes of light together into a gestalt whole that is an image in the mind's eye. Similarly, when you stimulate cortex, the back of the head where the visual cortical areas are, you can get these flashes of light, and in some cases you can even get a lot of them. But again, it's kind of this more psychedelic effect: it doesn't get assembled together into form vision. And as far as I know, our clinical trial was the first time ever that form vision, a coherent image in the mind's eye of a person, had been created.

>> Is there something specific about macular degeneration that causes this to be possible for this set of patients?

>> So there's a bunch of reasons why people lose rods and cones. There's macular degeneration; there's retinitis pigmentosa; there are some rare inherited diseases like Stargardt disease; diabetic retinopathy can do it. Age-related macular degeneration is the most common: globally it affects 200 million people, and the severe form, geographic atrophy, is a million to a couple million. In that sense, it's a big need. One of the nice things about our device is that we're somewhat agnostic to the reason that you lost the photoreceptors. And so we think it'll also work for retinitis pigmentosa, for Stargardt, for these other indications. We're actually just about to start a new clinical trial on inherited retinal disease, which affects much younger people. And this again goes back to the drug-discovery versus neural-engineering view of the world. If you want to make a drug, then you care a lot about exactly what molecularly went wrong in the rod, and that is different by disease; and even if you figure this out, it's really hard to understand what to do about it. Here, we don't really care why the rods died. We just care that we can get the visual signal back into the computer.

>> I guess I'm just very fascinated: you, obviously, as a computer scientist, spend a lot of time thinking about inputs and signals, and what I'm hearing is that some of that thinking actually does translate from software into wetware.

>> Well, I mean, the brain is a computer, and saying that is going to get me yelled at by some corner of the field, but I think you can take that almost literally. It's a very different architecture than a von Neumann electrical computer, but it processes information. It gets information down one of 12 cranial nerves or 31 spinal nerves. So all of the information that flows in or out of the brain goes through a small number of cables. The optic nerve we'd call cranial nerve 2. The vestibulocochlear nerve, which carries hearing and balance, is cranial nerve 8. And there are 31 spinal nerves that carry commands out to the muscles and sensory information into the brain. You can think of that as the API of the brain. The brain is not magically connected to the environment: reality is whatever spikes are on the cranial and spinal nerves. And in that sense, you've got this well-defined interface to it. Then the processing, once it gets this information, is enormously complicated. It constructs everything we experience. I think it's important to appreciate that you experience yourself being in the world; you see the walls and the room and the lights and everything; but of course you're not experiencing that directly. You're experiencing a world model fabricated by your brain. But I think one of the interesting things that's come out of progress in artificial intelligence is that we're seeing this big unification of neuroscience and AI. We're actually learning a lot more from AI research than I think we thought we would. I can tell you, 10 years ago, we thought it would go the other way, that the AI people would learn a lot from neuroscience, and it's really been the other way around.

>> I'm always curious. I mean, you were mentioning Second Sight, sort of, you know, flashes of light, and yet here you are. How did you figure out the API? If I were trying to reverse-engineer it, I guess I would try to measure the signals. Is it similar with biology?

>> It's just difficult to measure the signals. Brain-computer interface research and development is limited by your ability to record and stimulate these signals; the neuroscience, comparatively, is actually pretty simple. As soon as we can record these signals, we've very quickly figured out what the neural representations, as we call them, are. Second Sight is instructive. In the retina, there are three layers of cells that matter. There are 150 million rods and cones. These connect to 100 million bipolar cells, bipolar because they've got two ends. And those connect the rods and cones to 1.5 million optic nerve cells; call them retinal ganglion cells, where ganglion is a fancy word for something that reaches a far distance and connects to somewhere. We stimulate the 100 million bipolar cells. Second Sight stimulated the 1.5 million ganglion cells, and so they were trying to get the signal into the brain past that roughly 100x compression, and the retina was doing a lot of computation there. The eye is a camera: light shines in from the front and hits the rods and cones. The representation in the rods and cones is a bitmapped image. You just take the image and tile it across the rods and cones; that's what it is.

>> Now, in the 1.5 million optic nerve cells, it's not like that. If you just project an image onto them, you get a bunch of trash, because at that point it's already been compressed into things like edges, relative motion, a bunch of other blobby shapes, color. And so if you stimulate a cell there, you're not going to get just a pixel; you're going to get some edge or direction-gradient thing. And you can't do that selectively: first of all, you just can't do it selectively enough, and we don't know the codec; we don't have the know-how to pattern it appropriately. And so you end up getting these flashes of light. It was an empirical discovery of our study that if you excite the bipolar cells with an image, you get an image in the mind's eye, because that is clearly the critical processing step in the retina that you wanted to preserve.
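A toy sketch of why the layer matters; the encodings below are made up (random mixtures standing in for edge and motion channels, nothing like the real retinal code), but they illustrate the point: a bitmap projected onto a layer that expects bitmaps preserves the image, while the same bitmap painted onto a layer that speaks a compressed feature code does not.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random(64)            # toy 8x8 scene, flattened

# Bipolar-like layer: roughly one cell per pixel, so projecting the
# image produces exactly the activity the downstream circuit expects.
bipolar_activity = image

# Ganglion-like layer: the retina has already compressed the image into
# feature channels. Model that as a fixed code F the stimulator doesn't know.
F = rng.normal(size=(16, 64))
expected = F @ image              # activity the brain would decode as "image"

# Naively painting raw pixel values onto ganglion cells sends activity
# that has essentially no relation to the expected code:
naive = image[:16]
print(np.corrcoef(naive, expected)[0, 1])  # near 0: the brain sees "trash"
```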

>> Did you know that that would happen, or did you have to try different parts?

>> When we started the company, I think we were a little bit different from most medical device or biotech companies, because they're often founded around a specific asset, like a patent or some specific piece of IP that they're going to spin out of a university, or maybe something that the founders have worked on. We weren't like that. We had a couple of ideas at the beginning. We had this neural-engineering-centric view of healthcare; we had a specific BCI probe idea in biohybrid; and we had a sense that the most valuable thing that we could build in the near term was a retinal prosthesis. We thought the time was there, that the technology was all there, that it would be possible circa 2021. It was also further from stuff that I had worked on before, and so it felt like a good thing for us to go explore. I think we took this very first-principles approach, and you have to be careful with first principles in biology, because first principles are not enough in biology. They'll get you very far in many other areas of engineering, but in biology you also have to understand what evolution actually did, and there's a lot of other nuance there. But in this case, we looked at the retina, and there were reasons, intuitions, to think that going past it would be much harder. In the retina you've got this 2x2 matrix of choices: if you've lost the rods and cones, do you stimulate the bipolar cells or the optic nerve cells? And do you do it electrically, or with a technique called optogenetics? And we just went and explored all four quadrants of that. We very quickly figured out that stimulating the optic nerve cells was very difficult, for these reasons: you end up with this one-million-degree-of-freedom calibration that you have to do per patient, which can't be done in practice. And so that led us to the bipolar cells, which sit before this compression. And so then the question was: do you want to stimulate them electrically, or using optogenetics? And we developed both. We have a state-of-the-art optogenetic gene therapy in house; we published a paper last fall on the world's most sensitive optogenetic opsin proteins. These are proteins that you can express in a neuron to make a neuron that is not normally light-sensitive responsive to light.
>> Oh wow.
>> But the drawback was that the conventional optogenetic proteins take a bright laser to activate. What we were able to do was find optogenetic proteins that are so sensitive that they respond to indoor office lighting, so you could use them in very different ways, and we could target them to the bipolar cells. But that is still five to seven years of clinical translation away, if it ends up working, and there's a bunch of pitfalls it could run into along the way. And then we also just surveyed the world to see what was the state of the art in electrical stimulation, and there was this technology that had been invented at Stanford about a decade ago, which a small company in Europe had been developing in the meantime. We got convinced that that was the right way to go, and so we acquired them a few years ago. And this was all from this bird's-eye view: if you want to restore vision in the retina, how would you do that, what are the promising approaches, narrow that down. And that brought us here.

>> That's insane. That's so cool. I wanted to jump to your start in tech broadly. Did you start in bio and software and engineering? What was your journey into what you're doing now? Which is, I mean, giving people their sight back, the wildest thing. People watching might be asking themselves: well, I hear a lot about B2B SaaS, but how do I actually become something more like you?
>> I was certainly doing software,

and my deepest hard skill is software. I have a degree in biomedical engineering, but I grew up programming, so I was doing that well before I was doing any biotech stuff. My parents tell a story about how I sat on the floor of a Barnes & Noble and cried until they bought me a Learn Visual Basic book. I was always interested in the brain, and I was definitely inspired by science fiction. The Matrix had a big impact on me, partly because the idea of this world of bits was just so alluring, for a bunch of fundamental reasons. When I look around at the world, it's hard to build things. Space is constrained; the earth is small; the resources are intensely contested; space is large; the speed of light is low. You don't have any of those constraints in the machine, and so if you could simulate a world, kind of anything was possible there. But then also, if you turned that inside out, if you realized that you could build this and that you couldn't tell the difference, then the corollary was that the thing that matters is the brain; and if you can engineer the brain and support the brain, then kind of all the rest of it is replaceable. And that just seemed like a fairly deep insight that was not being borne out in the world in the way that it seemed like it should be.
>> Some of it is: if you can surround that consciousness with the correct inputs.

>> Yeah. I mean, this also gets into questions of what consciousness is, how the brain creates our experience. There's this meme out there that BCI is an artificial-intelligence-adjacent story, and that the goal is that we have to merge humans and machines. And I do think that there's something to that, but I think the more immediate thing here is that BCI is really a longevity, healthcare-adjacent story. If the end of the artificial intelligence quest is superintelligent machines, then I think the end of the BCI quest is actually conscious machines. It might turn out that there's actually no measurement we can take that will tell us whether something is conscious, or what it's like; the only consciousness you can actually know is your own. And if that's the case, then to study consciousness, we will need to use brain-computer interfaces to see it for ourselves. And once you've developed that, then I think you can start to understand the fundamental physics of what's happening there, whether that's new fundamental physics or it's emergent in some way. But if you can learn how to build, or understand, whatever it is the brain is taking advantage of that our universe supports, then eventually you get superintelligent conscious machines that we can be part of, through these ultra-high-bandwidth connections. I think that's a very different narrative from how people usually think about BCI today.

>> I mean, we're at the beginning of that, right?
>> Oh yeah, we're at the very beginning of that.
>> The current trial that you have, I mean, it's relatively low bandwidth, but it's going to get much higher bandwidth. And then, like anything, you sort of bootstrap with the thing that works, and I think what you have is a clear breakthrough as it is. If you look at the PC revolution, for instance: could you believe that all of this that we have today started with a little blue box like an Altair?
>> It still takes some suspension of disbelief, because biotech has just been so incremental. There have been big advances, but at the same time, these time constants, historically: you could easily spend 10 years on something that feels very incremental. And I think one of the things that's so exciting about what's happening now is that it no longer really feels so incremental to me. To me, it feels like we're firmly in the takeoff era now. Like something new has happened on Earth. But I think it's also important to remember that this didn't start in, like, 2019 or 1999. This started in the late 1800s, with the industrial revolution. Just a few years before the industrial revolution really kicked off, life was more or less unchanged, in a fundamental sense, for several thousand years. And they didn't really even have a concept of progress, in many ways. And I don't think there's any way they could have imagined the way their lives would change over the course of the first 10 or 15 years of the steam engine. And that is how I feel looking at the next 15 years right now.

>> Yeah. I mean, so we have electrical stimulation right now, and at the same time you also have a biological coupling; it's not purely just electrical. Would you call it a V2, or sort of a next frontier?
>> So this is a totally different area. I mean, you might be able to use it for vision: one of the diseases that PRIMA, our electrical stimulator, doesn't treat is glaucoma, which is loss of the optic nerve itself. And so it's possible that you could use our biohybrid BCI technology for that, but that's not what we're doing right now. There are three elements to our pipeline at Science. The first is our work in the retina, in blindness, especially with the PRIMA implant. The second is our work in neural interfaces. And the third is our work in perfusion, with our Vessel program.

The biohybrid neural interfaces: the idea here is, if your brain is a bunch of neurons, how would nature solve this problem? We often look to nature for inspiration; evolution is a way better engineer than we are, at least when dealing with biology. I think the intuition here kind of started from: your brain is composed of two hemispheres, and they process different halves of the world separately, but you don't experience two hemispheres, or two hemifields as we would say; you experience one integrated moment. And there's a cable that connects the two hemispheres of the brain, called the corpus callosum; it's about 200 million fibers. And I was thinking: if nature wanted to build an ultra-high-bandwidth brain-to-brain connection, how would it do it? Or if it wanted to make a new cranial nerve, so instead of having an optic nerve or a vestibular nerve, it wanted to have, like, the internet nerve, how would nature solve that problem? It would grow a new nerve. It would have a new fiber bundle, with a USB port at the end. So the intuition here is: if your brain is a bunch of neurons, what happens if I culture some neurons on your neurons? When you do that in a lab, the neurons will typically grow together, wire up, and form new biological connections. And so we have an approach to the device where we seed the implant with living neurons, these heavily engineered stem-cell-derived neurons that we've created.
>> Are they related to your own neurons, or...?

>> No. So, really interestingly, this is actually one of the deep areas of research. It's one cell line, and probably the single deepest area of IP on this is that we've hidden them from the immune system. We're one of a really small number of companies that have what I think are pretty convincing hypoimmunogenic stem cells. You don't need to manufacture it per patient, which would be really expensive and take much longer. We've got this hypoimmunogenic, stem-cell-derived, engineered neuron that we load into the device in a dish, and then it kind of gets stuck there, and then you engraft this onto the brain. So we don't place any wires into the brain, and we also don't need to genetically modify your brain. Some of the other ideas out there, for example using optogenetics or things like ultrasound, require using a gene therapy to genetically modify the neurons in your brain, which, first of all, is a one-way door, and if it goes wrong, that can go really wrong. Whereas here, because we're adding cells, the only thing that has been edited are the graft cells that we add, and if those die off, then you're really not worse off than you were before, for the most part. But it comes with the potential of growing throughout the brain, forming biological connections all over the place. And I mean, that's what we've seen in the animal models; that's not in humans yet. But have you seen James Cameron's Avatar movies?

>> Definitely.
>> You know the ponytails that the aliens have? That's how I think about it. Basically, it's a big new cranial nerve with a connector at the end. I think the Avatar queue is actually a pretty direct reference for how I think about our biohybrid neural interfaces.
>> So earlier you were saying, sort of, how do we find a USB port? I mean, obviously in Avatar that's one of the manifestations in the blue creatures; the optic nerve, in a way, is like a port. And then, jumping to Neuralink, when you were co-founding it: that sort of enters the brain, and there is not necessarily an obvious port. How do you think about that? Where do you attach, and how does it work? And what did you learn from Neuralink that was useful here?

>> Well, I mean, a lot of what I learned from Neuralink was, in many ways, kind of the ultimate startup PhD. It was more about how you execute a technically complex company that requires this type of multidisciplinary team and infrastructure.

>> I'm very curious, from those days: what was the V1, and then the hypothesis, and then the outcome? And then here, the outcome is very, very awesome with Science so far; not done, obviously.

>> Yeah. When you think about the brain, I remember it being totally magical to me: how do you even understand what the brain's doing? What language is it speaking? How do we understand what's going on there? It seems impossibly complicated. The way that I would think about the brain, from this information-processing perspective, is that the brain is full of these things that we call representations. So you can have a representation of, say, hand activity. There's a geometric object in the brain: if you record from some neurons, then when your finger is held open, a neuron will be firing; when it's closed, another neuron will be firing. There are neurons that correspond to every possible state. And often this is in primary motor cortex, which is where many of the other BCI companies record from. Primary motor cortex is a couple of synapses, often two synapses, from the muscle: it projects all the way from the top of the head down to the spine, and then there's another synapse from the spine out to the muscle. And so the representation that you get in primary motor cortex is kind of easy to understand, because it directly corresponds to things that we can easily reason about, like hand state, and specifically, often, joint torques.

>> One of the things that I like to do sometimes with the LLMs is: I'll pick a neuron to start from, for example a retinal ganglion cell, and I'll say, okay, go forward one synapse; what are all the cells that we're connected to? Then I'll pick another one: okay, go forward one synapse; what are all the cells that we're connected to? And I just kind of try to walk through the brain. With each generation of model, your ability to do this gets better. But one of the things that you see is that when you're close to an input or an output, like a muscle, or a cochlear hair cell, or a retinal ganglion cell, or a rod or cone, in these cases we think of the representations as being concrete, because they correspond to things that are intuitive for us, like colors and image intensities, or frequencies of sound, or muscle control. But as you go deeper into the brain, it very quickly blows up into these very abstract things. There's a part of the brain called inferotemporal cortex where the representation is a map of objects, and another area right next to it has a map of faces. We think about this map of object space as a representation of general objects. You can think of one point as a long list of numbers: there's some point in that space that's a vase, some point that's the Eiffel Tower, some point that's a car, some point that's a person, some point that's a zebra. And as you move around on this manifold, you get the percept of any possible object. And there are millions of neurons there that are representing this space of possible objects that the brain could be identifying.

>> Sounds like latent space.
>> It is a latent space, exactly. And so there's this huge unification going on between AI and neuroscience. One of the most interesting things is that when you train AI models, like image models, and even language models, the representations that you get inside them look a lot like the representations you see in the brain.
>> Fascinating. And so this is a real hint that the AI people are on the right track.
>> Yeah. I mean, the whole idea that these things are stochastic parrots or glorified autocompletes: those people just don't know what they're talking about. Many people in neuroscience have gone over to AI because they're basically still doing neuroscience, but it's just way easier to do it on the models.
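A toy illustration of that "point in object space" idea; the vectors below are invented four-dimensional stand-ins for population activity (real codes live across millions of neurons), but the geometry is the point: each object is a point in a shared latent space, and nearby points are similar percepts.

```python
import numpy as np

# Hypothetical latent vectors for a few objects.
objects = {
    "vase":  np.array([0.9, 0.1, 0.0, 0.2]),
    "car":   np.array([0.1, 0.9, 0.3, 0.0]),
    "zebra": np.array([0.0, 0.2, 0.9, 0.4]),
    "horse": np.array([0.1, 0.3, 0.8, 0.5]),  # placed near "zebra"
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(objects["zebra"], objects["horse"]))  # high: nearby percepts
print(cosine(objects["zebra"], objects["vase"]))   # low: distant percepts

# Moving smoothly along this manifold morphs one percept into another,
# which is what "millions of neurons representing object space" buys you.
```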

>> It sounds like it's very good news for you that there is actually some kind of latent-space mapping, and that the job of Science, in terms of being sort of the API to the brain, is entirely possible.
>> Totally. Exactly. When you record neural activity from the brain, it is just another latent, and if you can translate it into another model, then we think you can do really cool stuff with that.

>> So you have input now. And you were saying earlier, a lot of the earlier BCI experiments involved figuring out, like...
>> Motor, yeah. So motor decoding is kind of this very classic task, and you can do it any number of ways. But getting cursor control or keyboard control in a human was first done in the late '90s. And so I think a lot of the BCI companies are doing that now, just because we know it definitely works, there's some patient need, and it really is just an electronics problem. If you can shrink the electronics so they're small enough, and low-power enough that they don't dissipate a lot of heat, so that you can close the skin, then that is a big advance. And that, I think, is really the first thing that Neuralink has done. There were prior devices that could do that type of motor decoding, but they required a connector coming out through the scalp, and as long as the skin is open, there's a risk that an infection will climb down it, and then you're going to have a really bad day. So being able to close the skin is really important. But that was really difficult, because it required really efficient electronics that were small enough to fully implant and also power-efficient enough that they wouldn't get hot. And I think the thing that made this possible is what we call the smartphone dividend. BCI couldn't have done this on its own, but Apple and Samsung and others have poured epic amounts of money into making these types of electronics exist in the world, so that people like us can use them.

>> And then it feels like you have a really significant advantage around being a biohybrid. I mean, there are all these issues, famously, around trying to electrically stimulate brain cells for a long period of time.
>> Yeah. I mean, I think that there are different products here. On the one hand, that's why I'm doing it; I think it's a good idea. On the other hand, I think some people look at this and say: now you have a cell to deal with; you took a device and you added a bunch of biology to it. I think we have a good handle on that; that's why we're doing it. But there's definitely a trade-off there. And I think that BCI, we're going to come to see, is not a specific product, in the way that pharma is not a product. I think there are going to be a bunch of BCI companies going after different applications where different types of probes will make sense. And I think biohybrid in particular is only really necessary for some of the very highest-end things. On the flip side, it will be harder to deploy for many other important medical needs and important applications along the way, and it will probably be a little backloaded, relative to some other things, in that scalable impact.
>> So earlier you were referring to a third part of Science, which is Vessel. Talk more about that, because it feels like you're applying a lot of the first-principles thinking that got you here to this thing that is also pretty groundbreaking.

39:56

smallest project. So there's this field

39:59

of perfusion. You can think of it as

40:01

they're kind of like heart and lung

40:03

machines. And I I was first clued into

40:06

the need here about a decade ago when I

40:08

read an article in in a medical journal

40:10

called the Lancet, which was this case

40:11

study of a 17-year-old living in

40:13

Boston who was waiting for a lung

40:14

transplant. And while he was waiting for

40:17

this lung transplant, he was being kept

40:18

alive on an ECMO circuit. ECMO is

40:20

extracorporeal membrane oxygenation,

40:23

this fancy word for like heart lung

40:24

machine. And in his case, his heart was

40:26

okay, but his lungs had failed. And so

40:28

this was keeping him alive. And after a

40:31

while on the transplant list, he was

40:33

diagnosed with a complication that made

40:35

him no longer a priority recipient for

40:37

donor lungs. And so they took him off

40:39

the transplant list. And so this article

40:40

is kind of about the ethical dilemmas of

40:42

like, what do we do with him?

40:43

>> But he's alive because he's Yeah. He's

40:45

like playing video games. He's doing

40:46

homework, hanging out with friends. If

40:48

we turn off the circuit, he will

40:49

immediately die.

40:50

>> Well, don't do that then. On the other

40:51

hand, he's consuming a half a million

40:52

dollar a month ICU suite. And so there

40:55

are these quotes in this article from

40:56

the doctors being like his family and

40:58

friends derived benefits from his

41:00

continued survival and how this raised

41:02

fairness questions because if we like

41:04

support him for a longer period of time

41:05

then why would we do this for everybody?

41:06

And so I saw this I'm like those were

41:08

great questions. I need answers to those

41:09

questions because there seemed to be

41:11

this big gap between what was

41:12

technically possible and what was

41:13

economic to deploy for some reason. I

41:15

mean that's exactly what being a founder

41:17

is about.

41:18

>> Yeah. Yeah. So I saw this and I there's

41:20

this database of medical literature

41:21

called PubMed and I realized that if I

41:23

searched PubMed for the phrase ECMO

41:25

ethical dilemma, there were multiple

41:27

pages of results. So this was not like a

41:29

one-off. And when I looked at this

41:31

literature, a lot of

41:34

it was talking about how ECMO shouldn't

41:35

be used as a as a quote bridge to

41:37

nowhere and how many doctors were

41:40

basically trying to discourage families

41:41

from like even pursuing it in these

41:43

critical care cases because it would

41:45

create this bridge to nowhere and then like

41:46

what do we do? And it creates these

41:47

dilemmas. And then I went and asked some

41:50

doctors, this was a long time ago. This

41:51

was almost a decade ago now. Like, oh

41:53

well, like why don't we consider it as a

41:54

destination? The phrase is like a

41:56

destination therapy versus a bridge

41:58

therapy. What if the technology just

42:00

isn't good enough yet and it needs to be

42:01

improved?

42:01

>> It needs to be improved definitely. But

42:03

that wasn't even the response that I

42:04

got. The response that I got was just

42:05

like shouting and throwing things. And

42:07

so I was like something feels wrong

42:08

here. But I wasn't really in a position

42:10

then to pursue it. But this was always a

42:12

thing that was kind of I saw that there

42:14

was a really important unification here.

42:17

Also, the same fundamental type

42:18

of technology has really transformed

42:20

organ transplantation. So there they

42:22

call it NMP, normothermic machine

42:23

perfusion rather than ECMO but it's the

42:25

same idea. Um so 20 years ago if you

42:27

needed a like a kidney transplant or a

42:29

liver if the car crash happened at 3 in

42:32

the morning the surgery would happen at

42:33

4:00 or 5 in the morning. But now it

42:35

gets scheduled for like the afternoon or

42:36

the next day. And over 75% of liver

42:38

transplants in the US use this type of

42:41

perfusion technology now. But like the

42:43

the systems that exist for this are like

42:45

$500,000. They can only be moved by

42:47

private jet. Like one of the big

42:49

companies in the space, it turns out

42:50

that their like private jet logistics

42:51

business is bigger than their medical

42:53

device business. And it just like there

42:55

was just like clearly an engineering effort

42:57

that could refine this. And so we looked

42:59

at this and we thought like, well, what

43:00

if you could refine this to the point

43:02

where you could check a kidney as luggage

43:03

on a United flight to the East Coast? Or

43:06

what if you could make a thing that that

43:07

17-year-old could have brought home as a

43:09

backpack um instead of just what they

43:11

did in his case is they stopped changing

43:12

the oxygenator filter and a week later

43:14

it clotted and he died. And that's what

43:16

happened. There are other problems here

43:17

like being able to close the skin

43:19

around the brain implant. You also need to

43:21

make it so that the tubes that

43:23

connect the blood supply to the

43:25

circuit are something the skin can heal to, so

43:27

that it's not an infection risk.

43:29

Otherwise you have to clean it very

43:30

carefully. But just overall there's this

43:32

huge gap between like clearly like where

43:34

the scientific breakthroughs were

43:36

pointing and like what was

43:38

being done. I think what

43:39

people don't appreciate is that in many

43:41

cases, like, if you want to

43:44

be a brain in a vat, this basically

43:45

already exists. You can keep

43:48

an end-of-life patient alive in an ICU

43:52

almost indefinitely but this is very

43:54

poor quality of life and so patients

43:55

like ask for that to be withdrawn like

43:57

nobody wants to be basically like a

43:58

brain in a hospital bed connected

44:00

to tubes. You need to be able to provide

44:02

a high quality of life. And so you need

44:03

something that people can like live

44:05

with. And I think to see this like if

44:07

you can get vision, hearing, balance,

44:09

motor control, um the ability to like be

44:12

out in the world and doing things, I

44:14

just saw this like very fundamental way

44:15

to reframe the problems of medicine

44:17

here. And so that like I said like at

44:19

Science, even though there's these

44:20

several different projects, I really see

44:21

them as like one project over the

44:23

next 10 years. So you know started as an

44:26

engineer um first principles thinking

44:28

which often now is quite associated with

44:31

Elon Musk. Uh how did Neuralink start?

44:34

How did you get to know him? And how did

44:36

all of this sort of come together?

44:38

Because I first met you when you were uh

44:40

doing Y Combinator many years ago, my

44:42

first stint at YC. So, I um

44:46

got an email one night in early 2016 um

44:50

from Sam uh subject line crazy question

44:54

be like Elon starting a brain computer

44:57

interface company, like who

44:58

should run it and I assumed they're

45:01

talking to a lot of people and my first

45:03

reaction was actually I I had some

45:04

friends at MIT that I thought I'm like

45:07

well these guys are really smart you

45:08

should talk to them but then like an

45:10

hour later I was like wait a second and

45:12

so I I emailed him back I'm like can I

45:14

like

45:15

and uh Sam introduced me to Elon and

45:19

Elon was going around he'd already had

45:20

the idea like on his own that he wanted

45:21

to start a company and he had the name

45:23

Neuralink. I also think that he heard my

45:24

name from enough people that he was

45:26

talking to at the time and kind of over

45:28

the second half of 2016 there was just

45:30

this group of people that was kind of

45:32

to some degree ever-shifting, that

45:33

would meet once a week or so in the

45:35

evening and that snowballed into

45:38

Neuralink, and of the initial group a

45:41

bunch of them were people that I knew

45:42

from Duke. So Tim Hansen, the guy who

45:44

had originally had the the sewing

45:45

machine idea, he was in the lab that I

45:47

came from at Duke, he was a a grad

45:49

student, um I was an undergrad working

45:51

for him and then the professor that he

45:54

and one of our other friends had gone to

45:55

at UCSF and then a collaborator of

45:58

theirs. So it was kind of a very small

45:59

community.

46:00

>> What was that like initially to talk

46:01

about the idea of like you know

46:03

connecting a computer to a human being's

46:07

brain? Elon he I mean he saw what was

46:09

coming in AI like very much more clearly

46:11

than many other people much earlier and

46:13

I think the implication of, like, if

46:15

you get to this, it can't be a separate

46:17

thing from humanity and that needs to

46:19

merge somehow I think that implication

46:21

was just very clear to him and so that

46:23

was the genuine motivating factor of

46:25

like how do we make it so that this

46:27

allows us to upgrade humanity rather

46:28

than get left behind. I mean if you look

46:30

at the natural history of earth um it's

46:32

not like this is a totally speculative

46:34

thing. Humanity has totally dominated

46:35

the planet and we keep our closest

46:37

living relatives in glass boxes so they

46:39

don't go extinct. And so there's a real

46:42

history here of of greater intelligence

46:44

being very dangerous. Like in the

46:46

beginning there wasn't like a specific

46:47

technical idea necessarily, but there

46:49

was that motivating force and then the

46:50

idea is we'd pull together like the

46:52

smartest group of people that he could

46:54

find and and enough resources to to do

46:57

whatever made sense and eventually got

46:59

consensus around what you see now as

47:02

the thin-film polymer threads.

47:04

You're one of the best examples of

47:06

someone who came from a pure software

47:08

world and then went into hard tech

47:10

and now is actually doing real

47:12

breakthrough type of research and work

47:14

that is also commercializable. The

47:16

people watching, they might be on a

47:18

similar track. Knowing what you know

47:20

now, like what would you tell to the

47:22

sort of 2016 version of yourself? So, I

47:25

think there's two things. The first is

47:27

um like the thing that I did and then

47:28

there's the thing I didn't do. The thing

47:30

that I did I think was I had a

47:32

clear sense of what I wanted and then I

47:34

was very high agency towards that. When

47:36

I was in college I knew that I wanted to

47:37

work in brain computer interfaces. There

47:39

was a great lab that was doing that work

47:41

at Duke where I went and I was pretty

47:44

persistent in figuring out how to like

47:45

place myself into that lab. It was in

47:48

the medical center. They didn't usually

47:49

take undergrads. It took me

47:50

a little while to get in there. I

47:51

eventually figured out that I could

47:52

sneak in by taking an independent study

47:54

in the chemistry department that would

47:56

like be a back door into this like

47:58

primate neuroscience group. But then

48:00

really most of my education in college

48:02

happened in that lab. So yeah, I grew up

48:04

programming in my my deepest hard skill

48:06

is software, but I I've been doing

48:08

primate brain computer interface like

48:10

closely neural decoding stuff since

48:12

2008. And so that was just like you had

48:15

to be pretty high agency and like um

48:18

persistent in trying to

48:21

follow through on that. But that only

48:22

works if you have a sense of where you

48:24

want to go. And so the first is like

48:25

figure out what you want. The thing I

48:27

didn't do was this: after college I

48:30

started a company um called Transcriptic

48:32

that was a robotic cloud

48:35

laboratory. So the idea came from this: I also

48:37

in college had the experience of working

48:39

in a synthetic biology group where I

48:42

needed to go press a button on a device

48:43

called a plate reader every 3 hours for

48:46

3 days to take a measurement that I

48:47

wanted. And I was like in software like

48:50

we wouldn't do this. Like

48:52

this just clearly doesn't make sense.

48:53

Like we would automate it. This was also

48:55

the time when AWS was just emerging and

48:57

cloud computing was becoming a thing.

48:59

And it seemed very obvious to me that

49:00

instead of every researcher having their

49:02

own lab and spending millions of dollars

49:03

for all their equipment and then like

49:04

needing to press these buttons, like

49:06

what we should build is a central

49:07

robotic cloud laboratory that exposes

49:09

APIs that scientists can use to run

49:11

experiments over the internet. I did

49:12

that, raised a bunch of money when I

49:14

stepped down as CEO at the

49:17

beginning of 2017 to join Neuralink.
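As an aside on the "APIs to run experiments over the internet" idea described above: the sketch below is a purely hypothetical illustration of what submitting the plate-reader task from the earlier anecdote to a cloud lab could look like. The endpoint, field names, and client function are invented for the example and are not Transcriptic's real API.

```python
# Hypothetical sketch of a cloud-lab run submission; all names and fields are
# invented for illustration and do not describe any real service's API.
import json
import urllib.request

def submit_run(base_url: str, api_key: str, protocol: dict) -> dict:
    """POST a protocol to a hypothetical cloud-lab endpoint and return the queued run record."""
    req = urllib.request.Request(
        f"{base_url}/runs",
        data=json.dumps(protocol).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The manual task from the anecdote, expressed as a schedule instead of a
# researcher pressing a button every 3 hours for 3 days:
plate_reader_protocol = {
    "instrument": "plate_reader",
    "plate_id": "PLATE-001",
    "measurement": "absorbance_600nm",
    "every_hours": 3,
    "duration_hours": 72,
}

# Example usage (commented out because the endpoint is fictional):
# run = submit_run("https://cloudlab.example.com/api", "MY_KEY", plate_reader_protocol)
# print(run["status"])
```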

49:20

Um it had millions of dollars in revenue

49:22

like I felt like we got it to kind of an

49:23

early promising point. Um, and then

49:26

since then, over the last decade, um,

49:28

that promise was not

49:29

fulfilled. That was still hard

49:31

mode. That was like a slog. That era

49:33

from 2012 to like 2016, I strongly

49:36

identified with Ben Horowitz's essay, The

49:37

Struggle. And I think the thing that I

49:39

should have done earlier is go work for

49:41

somebody like Elon cuz that just like so

49:43

dramatically leveled up like my ability

49:45

to do this and and know how the game is

49:47

played. And um, and I think that often

49:50

you'll see these really promising kids

49:51

who are just like, I'm going to do it

49:53

myself. like I don't want to work for

49:54

anybody else. I'm going to start my own

49:55

company. I'm going to plow through it.

49:56

And like sometimes that works. Like who

49:57

am I to say? But I can tell you that

50:00

very often startup like running a

50:02

startup is an oral tradition. There have

50:04

been a couple nucleating times in

50:06

history where like a really remarkable

50:08

group of people have kind of figured it

50:09

out from scratch. Like I think PayPal

50:11

was like this. But almost always beyond

50:13

that, it's like it's an oral tradition

50:15

that you pass down from one of like this

50:17

handful of Silicon Valley cultures. It

50:20

can make a huge difference on the

50:21

trajectory of your career to get that

50:22

right when you're 20 versus when you're

50:23

26 or 28.

50:25

>> Well, Science is the next. And it sounds

50:27

like you're assembling, you know, you've

50:29

already assembled a really accomplished

50:31

crew of people. And then what we've

50:34

learned from startups over the years is

50:36

that um once something works like more

50:39

and more resources, more and more smart

50:41

people sort of come together and then

50:44

you know zooming out that's what we

50:46

really hope happens uh a whole lot more

50:49

in exactly the spaces that you're in

50:51

right now. So you know science sounds

50:53

like one of those places to go to right

50:54

now.

50:55

>> It's pretty cool. Yeah. I mean, I

50:56

definitely feel pretty lucky that I get

50:58

to do this because it's

51:00

such an interdisciplinary problem, and

51:02

to innovate on it, you need all these

51:04

different areas and really great people

51:05

in each of them, but at the same time,

51:08

the things that you can

51:10

do today were unimaginable a few years

51:11

ago and and yeah, I mean, I think that I

51:13

think we have the best team in the in

51:15

the field. So I mean next 10 20 years of

51:19

you know Science, BCI, like I guess where

51:22

do you see this going and you know what

51:25

are you most excited about? I have this

51:27

like event horizon at 2035 now like when

51:29

I was earlier in my life I always kind

51:31

of prided myself on the ability to see

51:34

the future, and the next few

51:37

years I think I have a sense of but like

51:38

by 2035 it's just like impossible, there's

51:40

like I can't see past it. I think it is

51:42

very possible that the first people to

51:44

live to a thousand are alive right now.

51:45

And I think it might be many more people

51:47

than you think. It's not going to be

51:48

like one or two people on Earth today.

51:50

Earth as a whole is at a moment that's

51:53

not unique, moments like this in history have

51:56

happened all the time before, but right

51:57

now it's a time of exceptional change.

51:59

This is going to be really really

52:01

influenced by the technological changes

52:03

that are happening. And the twin

52:05

plot lines of brain computer interfaces

52:07

and artificial intelligence. People are

52:09

are beginning to get that artificial

52:11

intelligence is real. It is still not

52:12

priced in. People still don't appreciate

52:14

it. I agree.

52:15

>> But they really don't get what's coming

52:17

in in what's possible with brain

52:18

computer interfaces. And those are

52:19

really parallel but very distinct

52:21

stories. Intelligence is going to become

52:23

widely available for those that have the

52:25

agency to deploy it. And I am generally

52:28

pretty optimistic about that. Like, my

52:30

p(doom) is not

52:32

zero but it's not 50%. It's well

52:35

below that. Yeah. Yeah, I don't know if

52:36

we'll have cured um all disease. In

52:39

fact, I definitely wouldn't use that

52:40

term. I wouldn't say we'll have cured

52:41

all diseases by 2035. But I think that

52:43

there will be kind of new lateral

52:46

options that totally reframe how we

52:48

think about the human condition on that

52:49

time scale

52:50

>> and totally reconfiguring basically that

52:54

sort of interface between computers and

52:56

humans. It's

52:56

>> Yeah. And humans and each other. If a

52:58

brain computer interface is equivalent

53:00

to a brain-to-brain interface in many

53:02

cases, this takes you to like totally

53:04

new territory. Max, thank you so much

53:06

for joining us. Thanks for building the

53:08

future and we can't wait to see what you

53:10

build next.

53:10

>> Thanks, Gary.
