The AI Frontier and How to Spot Billion-Dollar Companies Before Everyone Else — Elad Gil

Transcript
0:00

There are moments in time where it's

0:01

very smart to be contrarian.

0:03

>> And there are moments in time where

0:05

being consensus is the smartest possible

0:06

thing you can do. And I think right now

0:08

we're in a moment in time where being

0:10

consensus is very right. You know, you

0:13

can really overthink it. And what's a

0:14

contrarian thing? We should go do a

0:16

bunch of hardware stuff cuz blah blah

0:17

blah. You like maybe buy more AI. You

0:20

know what I mean? I think people make

0:21

these things way too complicated.

0:23

>> Yeah. True. In every aspect of life

0:25

probably. Elad, nice to see you. Thanks

0:28

for making the time. Appreciate it.

0:29

>> Yeah, as always

0:31

>> and I thought we could begin with

0:33

something we were chatting about or you

0:35

were explaining before we started

0:37

recording which is a new phenomenon of

0:41

sorts. Could you explain what we were

0:43

just talking about?

0:44

>> Oh yeah, we were just talking about some

0:45

of the acquisitions that are happening

0:47

in the AI world. We saw that XAI just

0:50

got an option to effectively purchase

0:52

Cursor. It looks like, obviously, Scale

0:53

was sort of partially taken by Meta.

0:57

There have been a variety of these sort

0:58

of deals that have been happening over

0:59

the last year or two. And separate from

1:01

that, we're just talking about what does

1:03

that mean for the AI research community

1:04

and the AI community in general. And I

1:07

think the most interesting or one of the

1:08

interesting things that's happened over

1:09

the last year or so is Meta really

1:12

started aggressively bidding on AI

1:13

talent, which was a very rational

1:15

strategy, right? They're going to spend

1:16

dollars on compute. So, it made sense to

1:18

have a real budget to go after people.

1:20

And normally what happens in tech is a

1:24

single company will go public and a

1:26

bunch of people from that company will

1:28

be enriched and then a subset of them

1:29

will continue to be heads down and

1:31

working really hard and focused on their

1:32

original mission and a subset of people

1:34

start to get distracted. They may go and

1:36

work on passion projects for society.

1:38

They may get involved with politics.

1:39

They may go start a company. They may

1:41

just kind of check out and hang out or

1:42

go to the beach kind of thing. And what

1:45

happened recently is, because of the Meta

1:46

offers and then all the other major tech

1:48

companies having to match offers for

1:50

their best researchers somewhere between

1:52

50 and a few hundred people effectively

1:55

had an IPO but as a class of people. It

1:57

wasn't like they were at one company.

1:59

They were spread across Silicon Valley

2:01

but all of their pay packages suddenly

2:02

went up dramatically and they

2:03

experienced the equivalent of an IPO.

2:05

And that's really unusual. It's kind of

2:06

the personal IPO. And the only time in

2:09

history I can think of where I've seen

2:11

it happen before is in crypto where a

2:13

bunch of the really early crypto holders

2:14

or founders suddenly as a class all went

2:16

effectively public in, I guess, 2017-ish.

2:19

>> Mhm.

2:19

>> And then again more recently. But this

2:21

is really interesting. It's kind of

2:23

under discussed. It may not have huge

2:25

long-term implications, but it does mean

2:26

a subset of people will change what

2:29

they're focused on. Try and do big

2:30

science projects to help humanity work

2:32

on AI for science. Maybe maybe some

2:34

people will go off and do personal

2:36

quests or you know things like that

2:38

>> or just quiet quit and do lots of drugs

2:40

and chase vices. I mean there's that

2:42

too. Definitely not.

2:43

>> In that case, you look around say

2:44

Austin, you've got the Dellionaires,

2:46

which refers to Dell post IPO early

2:49

employees and so on. But as a class of

2:52

people, when that happens, I suppose we

2:54

don't know how large or how

2:56

long-term the implications are, but

2:58

there seem to be implications. And I

3:01

don't know anyone, well, I know only a few

3:04

people who I would go to as

3:08

technical enough and also kind of broad

3:11

enough in their awareness and networks

3:13

to watch AI to the extent that someone

3:15

can watch it comprehensively. I would

3:16

put you in that bucket. And you wrote

3:18

this week just to talk about some of the

3:20

other kind of elements at play here, the

3:24

compute constraints that AI labs are

3:26

facing and the implications and maybe

3:28

for the next one to five years. This is in

3:32

a piece people should check out, "Random

3:33

Thoughts While Gazing at the Misty AI

3:36

Frontier." Good headline, by the way.

3:37

>> Very dramatic.

3:38

>> Yeah, very dramatic. I love it. It's

3:40

very evocative. Would you mind

3:43

explaining actually before we move to

3:44

the compute constraints because I do

3:46

want you to to hop to that next but for

3:48

people who don't have any real context

3:50

on the talent wars and what you were

3:53

just mentioning earlier with meta like

3:55

on the high end, what do some of these

3:58

pay/equity

3:59

packages compensation packages look like

4:01

that are getting offered?

4:03

>> I don't have exact knowledge of the full

4:05

range and everything else, just the rumors and

4:06

the things that have kind of made it

4:08

into the press. The claims are that

4:10

these things are between tens of

4:11

millions and hundreds of millions of

4:12

dollars

4:13

>> per person. And again, it's a very small

4:16

number of people who would get anything

4:17

that's quite that upsized. But I think

4:20

the basic idea is we're in one of the

4:22

most important technology races of all

4:25

times. And the faster that we get to

4:28

sort of better and better AI, the more

4:30

economic value will effectively show up.

4:33

And therefore people were really willing

4:34

to pay in an outsized way for the handful

4:36

of people who are the world's best at

4:37

this thing. And 5 or 10 years ago these

4:40

people were like well compensated but it

4:42

was a completely different ballgame.

4:43

It just wasn't the core of everything

4:45

that's happening in technology but also

4:47

honestly society and politics and, you

4:49

know for education and health like it's

4:50

going to have all these really broad and

4:52

I think largely positive implications

4:54

for the world.

4:55

>> Mhm.

4:55

>> But it is the moment of transformation

4:57

and so suddenly these pay packages are

4:58

going way up.
>> What are the compute

5:00

constraints that you discussed in your

5:02

recent piece?

5:03

>> So basically all the different people

5:06

call them labs now. That's OpenAI,

5:08

that's Anthropic, that's Google, that's

5:09

XAI, etc. All the labs are basically

5:13

training these giant models and

5:14

effectively what you do is you buy a

5:15

bunch of chips from Nvidia and you're

5:18

actually building out a system. So you

5:19

have chips from Nvidia, you have memory

5:20

from Hynix and Samsung and other places

5:23

and you're building a data center.

5:25

There's all these things that go into

5:26

building these big systems and data

5:27

centers and everything else. And you

5:29

basically have clusters of hundreds of

5:31

thousands or millions or the scale keeps

5:34

going up of systems that you're buying

5:36

from Nvidia and from others. Google has

5:38

their TPU. There's other systems as

5:40

well. And you're using that to basically

5:43

train an AI model. And what that means

5:46

is you're running huge amounts of data

5:48

against these big clouds. And

5:50

eventually the crazy thing is your

5:52

output or your model is literally like a

5:53

flat file. It's like almost like

5:55

outputting a text doc or something. And

5:57

that text doc is what you then load to

6:00

run AI, which is insane if you think

6:02

about it. You use a giant cloud for

6:04

months and months and months and your

6:05

output is like a small file. And that

6:07

small file is a mix of representing all

6:10

of humanity's knowledge that's available

6:11

on the internet plus logic and reasoning

6:15

and other things built into it. And you

6:17

can kind of think about that in the

6:18

context of your brain, right? You have

6:20

three or four billion base pairs of DNA

6:22

and that's more than enough to specify

6:24

everything about your physical being but

6:26

also your brain and your mind and how it

6:27

works and how you can see things and

6:30

talk and taste things and all your

6:32

senses and everything's just

6:33

encapsulated in these very small number

6:34

of genes actually. And so similarly you

6:36

can encapsulate all of human knowledge

6:38

into like the flat file effectively.
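(To make the "model is just a file" point concrete, here is a minimal sketch, assuming PyTorch and a hypothetical checkpoint path; this is an illustration, not something described in the conversation.)

```python
import torch

# Hypothetical checkpoint path: months of cluster time reduce to one weights file.
state_dict = torch.load("model_weights.pt", map_location="cpu")

# The file is just named tensors of numbers, loadable anywhere.
total_params = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, {total_params:,} parameters")
```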

6:40

>> How do you think about the constraints

6:42

then? What are the constraints?
>> Every

6:44

year there's a constraint on building out

6:46

these big clouds to train AI and then

6:48

also what's known as inference where

6:50

you're actually using these chips to

6:51

run the AI system itself.

6:53

You need lots and lots of chips from

6:55

Nvidia to do this or TPUs or others but

6:56

then you also need other things. You

6:58

need packaging to actually be able to

6:59

package the chips and so there's a whole

7:01

supply chain around building out these

7:02

systems and different parts of that

7:04

supply chain have constraints on them at

7:06

different times. And so right now the

7:07

major constraint is memory or a specific

7:10

type of memory that's largely made by

7:11

Korean companies although there's some

7:13

broader providers of it and people think

7:16

that that memory constraint will exist

7:17

for about 2 years maybe plus or minus

7:21

because ultimately the capacity of those

7:24

companies has been lower than the

7:26

capacity for everything else in the

7:27

system. People think other constraints

7:29

in the future may literally be building

7:30

out the data centers or power and energy

7:32

to run these things, right? But for

7:33

today, it's this memory. And so

7:35

everybody in the industry is constrained

7:37

in terms of how much compute they can

7:38

buy to throw at these things. And so

7:40

what that does is it creates a ceiling

7:42

on top of how big you can scale these

7:44

models up in the short run because every

7:47

lab is buying as much as it can. A bunch

7:48

of startups are buying as much of this

7:50

compute as they can and everybody's

7:52

constrained. What that means though is

7:54

you have an artificial ceiling on how

7:55

big a model can get in the short run and

7:59

how much inference can run or how many

8:01

things you can actually do with AI right

8:03

now. And that also means that you're

8:04

effectively enforcing a situation where

8:06

no one lab can pull so far ahead of

8:08

everybody else because they can't buy 10

8:09

times as much compute as everybody else.

8:11

>> And there are these scaling laws that the

8:13

more compute you have, the bigger the AI

8:14

model you can build. In many cases, the

8:16

more performant it can be eventually.
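(For reference, not something spelled out in the conversation: one common form of these scaling laws, from Hoffmann et al.'s Chinchilla paper, writes model loss as a function of parameter count $N$ and training tokens $D$:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where $E$ is the irreducible loss and $A$, $B$, $\alpha$, $\beta$ are fitted constants; more compute lets you scale $N$ and $D$ together, which predictably lowers loss.)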

8:17

>> Mhm. And so that may mean that over the

8:19

next two years-ish all these labs should be

8:22

roughly close to each other because

8:23

nobody has the capacity to pull ahead.

8:25

And when the constraint comes off there

8:27

is some world where you could make an

8:28

argument that suddenly somebody can pull

8:29

far ahead of everybody else. So right

8:31

now OpenAI, Anthropic, Google, you know,

8:34

they're reasonably close in terms of

8:35

capabilities although some will pull

8:36

ahead on one thing versus another. That

8:39

should roughly continue, everybody thinks,

8:41

for the next at least 2 years because of

8:42

this.
>> Google is also constrained by the

8:47

memory from Samsung, Micron, etc.?

8:50

>> They're similarly constrained as

8:52

the other players. Right now, everybody

8:53

is similarly constrained and you know a

8:56

subset of these labs either are already

8:58

making their own chips or systems like

9:00

Google has TPUs and other things. Amazon

9:03

has actually built its own chips called

9:04

Trainium. And so there's basically like

9:06

different systems for different

9:08

companies, but fundamentally all of them

9:10

are limited in terms of how much

9:12

they can either manufacture or

9:14

purchase themselves. And a year or two

9:17

ago, the main constraint was packaging.

9:18

Now it's memory. Two years from

9:21

now, who knows? Maybe it's something

9:22

else. We constantly are hitting

9:23

bottlenecks as we're trying to do this

9:24

build out.
>> This is probably going to be

9:26

a naive question because I'm a muggle

9:28

and not able to write technical white

9:30

papers or anything approaching that, but

9:33

it seems to me, and I'm sure I'm not the first person

9:36

to say this, we're better at forecasting

9:38

problems than solutions potentially. And

9:40

so, for instance, way back in the day,

9:42

the price per gallon of gasoline or

9:45

petrol goes above a certain point. Okay,

9:48

people are forecasting doom and

9:50

destruction. But past a certain price

9:53

per barrel, suddenly new means of

9:55

extraction became feasible and there

9:57

were investments made in things like

10:00

fracking and so on. Is there sort of a

10:03

plausible scenario in which there is

10:05

some type of workaround

10:07

>> along those lines if that makes any

10:08

sense? I don't know. Maybe there isn't.

10:10

>> As far as I know, so far at least, there

10:12

is not.

10:13

>> Mhm. Part of that is because the way

10:14

that some of these things are built and

10:16

it's basically the capacity that you

10:19

need for example for memory is basically

10:20

a type of fab

10:22

>> and so you need time to build out the

10:24

fab and to get the equipment and put the

10:26

lines in place.

10:27

>> Right.

10:27

>> So it's a traditional sort of capex and

10:30

infrastructure cycle.

10:32

>> Mhm.

10:32

>> And these companies basically

10:34

underinvested in that because they

10:37

didn't quite believe the demand

10:38

forecasts that other people had around

10:39

this stuff.

10:40

>> Mhm. And so now they're trying to catch

10:42

up. And so it's one of these things

10:44

where everybody keeps saying, "Well, AI

10:45

is growing so fast. How can it possibly

10:47

keep growing at this rate?" But it keeps

10:49

growing at this rate. It just keeps

10:50

going. And that's because its

10:52

capabilities are so impactful and so

10:54

important. And so you look at the

10:55

revenue of these companies. And it's

10:58

interesting. I can send you the chart

10:59

later, but Jared on my team pulled

11:01

together a graph of how long did it take

11:03

for companies to get to a billion

11:05

dollars in revenue and then from a

11:07

billion to 10 billion and then from 10

11:08

to like a hundred. And there's only a

11:10

small number of companies that have ever

11:11

done that. And you can literally look by

11:13

generation of company how long it took.

11:15

And so for example, I can't remember

11:16

it's ADP or somebody, it took them 30

11:18

years to get to a billion in revenue or

11:19

whatever it is. Anthropic and OpenAI did that

11:22

in like a year. For Google it took four

11:23

years or whatever. I don't remember

11:24

exactly what the numbers are, but it was

11:26

kind of like as you go through these

11:27

subsequent generations, it gets faster

11:29

and faster to get to scale. Right now,

11:32

OpenAI and Anthropic are each rumored to

11:34

be roughly around $30 billion run rate,

11:36

which is insane. That's crazy.

11:39

>> That's 0.1% of US GDP. So AI probably went

11:43

from 0 to half a percent of GDP at least as a

11:46

revenue contributor. And you extrapolate

11:48

out and if they hit 100 billion in

11:50

revenue in the next year or two years,

11:51

whatever it is, then we're getting close

11:54

to a place where each of these companies

11:55

is a percent or two of GDP. That's

11:58

insane if you think about that.
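(A quick back-of-envelope check of those figures as a minimal Python sketch; the inputs are the rough numbers from the conversation, not verified data.)

```python
us_gdp = 30_000e9    # ~$30T US GDP, rough figure
run_rate = 30e9      # ~$30B rumored annual run rate per lab

print(f"{run_rate / us_gdp:.1%}")      # 0.1% of GDP for one lab
print(f"{2 * run_rate / us_gdp:.1%}")  # 0.2% for the two labs combined
```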

11:59

>> It's bananas. Yeah, it's bananas.

12:02

>> And this is really actually important

12:03

and useful. That doesn't include like the

12:05

cloud revenue for Azure for doing AI

12:08

stuff or you know Google GCP or like

12:12

it's just those two companies. It's

12:14

insane.

12:15

>> Mhm. I would love to dig into your

12:17

thinking because you're one of

12:19

the best kind of first principles and

12:22

also systems thinkers I've met and

12:26

I love having conversations with you

12:27

because I always learn something new and

12:29

it's not necessarily a data point but

12:31

often it might be a lens or a framework

12:34

for thinking about different things and

12:36

that framework evolves for you as well

12:38

but for instance if I was looking at

12:40

this interview you did this is a while

12:42

back with first round capital and you

12:44

were talking about sort of market first

12:46

and then strength of team second, but

12:48

you talked about passing on investing in

12:50

Lyft's Series C. This was at the time, and

12:53

ultimately part of it seemed to hinge on

12:55

winner take all versus oligopoly versus

12:59

other. And I'm curious how you are

13:03

thinking about that within the AI space

13:05

because I mean you started skating for

13:07

that puck before almost anyone I know,

13:10

if not everyone I know. And how are you

13:13

thinking about that? And this ties into

13:15

something that you mentioned in your

13:16

piece that I haven't heard anyone else

13:19

talking about, but I'll give the

13:22

sentence as a cue. I don't think you'll

13:24

need it, but founders running successful

13:25

AI companies should all take a cold hard

13:28

look at exiting in the next 12 to 18

13:30

months, which might be a value

13:31

maximizing moment for outcomes. And you

13:33

sort of went back to the dotcom bust and

13:36

the sort of survival rates and then

13:38

breakout rates. Could you just explain

13:40

that sentence and then also explain how

13:43

you're thinking about whether you think

13:45

this will be winner-take-all, oligopoly,

13:48

like what type of dynamic you think

13:49

emerges

13:50

>> in terms of the precedent and that

13:52

doesn't mean it's going to happen here

13:53

but if you look at every technology

13:54

cycle, 90, 95, 99% of the companies in that

13:58

cycle go bust

14:00

>> and that dates way back even to what was

14:02

high-tech 100 years ago, which was the

14:04

automotive industry

14:05

>> in Detroit dozens of car companies and

14:08

hundreds of suppliers, and it collapsed

14:10

into a small number of auto companies

14:11

virtually. And so this is not a new

14:13

story. During the internet cycle or

14:15

bubble of the '90s, 450 companies went

14:18

public in 99. 450 or so companies went

14:21

public in the first few months of

14:23

2000. And so that was 900 companies. And

14:27

say another 500 to 1,000 went public in the

14:30

couple years before that. So you had

14:32

somewhere between 1500 and 2,000

14:33

companies go public. So that

14:36

means they kind of made it.

14:38

>> Mhm. And of those, how many have

14:40

survived? A dozen, maybe two dozen.

14:43

>> Yeah.

14:44

>> And so out of 2,000 companies, 1,980

14:48

or so went under.

14:49

>> Mhm.

14:50

>> One form or another. Or maybe they got

14:51

bought for a little bit. And so there's

14:53

no reason to think the AI cycle will be

14:55

any different. And every cycle is like

14:57

that. SaaS was like that and mobile was

14:59

like that and crypto was like that. So

15:01

most companies are not going to make it.

15:03

A handful will. And we can talk about

15:04

those. And so if you're running an AI

15:06

company right now, you should ask

15:08

yourself, what is the nature of the

15:10

durability of your company? And are you

15:13

one of that dozen or two that are going

15:14

to be really important 10 years from

15:16

now? Or is now a good moment for you to

15:18

sell because what you're doing will

15:19

start to get commoditized or will be

15:22

competed by a lab or will be something

15:25

that the market will shift or the

15:27

technology will shift and you'll become

15:28

obsolete. And there's a handful of

15:31

companies that will continue to be

15:32

great. They should never sell. They

15:33

should never exit. they should keep

15:34

going. But there's probably a lot of

15:36

companies that now or the next 12 to 18

15:39

months is the best moment for them

15:41

possible in terms of the value that

15:42

they'll get for what they're doing.

15:44

>> And for every company, there's a value

15:46

maximizing moment where they hit their

15:48

peak. And it's usually a window. There's

15:50

usually, you know, 6 to 12 months where

15:52

what you're doing is important enough,

15:54

you're scaling enough, everything's

15:55

working before some headwind hits you.

15:58

>> And sometimes it's very predictable that

16:00

that headwind is coming and you can see

16:01

it. And often you see it in the second

16:03

derivative of growth, like how fast

16:05

you're growing starts to plateau a

16:06

little bit and you're either going to

16:07

keep going up or you should sell.
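(A minimal sketch of the "second derivative of growth" idea in Python, with made-up quarterly revenue numbers: the first difference is growth, the second difference shows whether growth itself is accelerating or plateauing.)

```python
# Hypothetical quarterly revenue in $M, purely illustrative.
revenue = [10, 16, 25, 37, 50, 62]

growth = [b - a for a, b in zip(revenue, revenue[1:])]  # first derivative
accel = [b - a for a, b in zip(growth, growth[1:])]     # second derivative

print(growth)  # [6, 9, 12, 13, 12] -> revenue still climbing
print(accel)   # [3, 3, 1, -1]      -> growth itself is flattening
```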

16:09

>> And so that's really what that's meant

16:11

to be. I'm incredibly bullish around AI

16:13

as you can tell from the rest of the

16:14

conversation.

16:15

>> And so it's it's less about the

16:17

transformation that's happening overall

16:18

because of this technology and more that

16:21

only a handful of companies are going to

16:22

continue to be really important. And so

16:24

are you one of them or not? If you're

16:25

one of them, you should never ever ever

16:26

sell.

16:27

>> So what are the characteristics of that

16:29

handful? the handful that have durable

16:32

advantage because you look back at 2000

16:34

it's like man what would you have used

16:37

to try to pick out Google and Amazon

16:40

>> and I'm not saying that's the best

16:42

comparator but within the many just

16:47

avalanche of AI companies

16:50

>> which are those that you think have

16:51

durable advantage I mean of course some

16:53

of the name brand labs come to mind

16:56

maybe they become the interface for

16:58

everything else who knows but How would

17:00

you answer that in terms of either

17:02

shared characteristics or actual names?

17:04

What sets apart the handful that you

17:07

think will make it?

17:09

>> I think the core labs will be around for

17:11

a while. So that's OpenAI, Anthropic,

17:13

Google, barring some accident or

17:15

disaster, some blow up, but it seems

17:17

like they're in a really durable spot.

17:18

And to your point on like market

17:20

structure, I wrote a Substack post, I

17:22

don't know, 3 years ago or something

17:23

predicting that that would probably be

17:25

an oligopoly market and there'd be a

17:26

handful and be aligned with the cloud.

17:28

That's roughly kind of what happened. I

17:29

mean, there's Meta and there's XAI and

17:31

there's other players that may change

17:32

this. It didn't exist when I wrote that

17:34

post, but it feels to me like in the

17:36

short run that's an oligopoly. Like

17:38

there's no reason for that to be a

17:39

monopoly market unless one of them pulls

17:41

ahead so much in capabilities that it

17:42

just becomes the default for everyone.

17:44

And that could happen, but so far it

17:45

hasn't. And again, this compute

17:47

constraint may prevent that in the short

17:48

run or at least provide an asymptote on

17:50

it. As you move up the stack and you

17:51

see, well, there's different application

17:53

companies. You know, there's Harvey for

17:54

legal, there's Abridge for health,

17:56

there's Decagon and Sierra for customer

17:58

success. You know, there's these

17:59

different companies per application.

18:01

There's three or four lenses that you

18:02

can look at. One is if the underlying

18:05

model gets better, does your product or

18:07

service get dramatically better for your

18:09

customers in a way that they still want

18:10

to keep using you? Second, how deep and

18:13

broad are you going from a product

18:15

perspective? Are you building out

18:17

multiple products? Are they all

18:18

integrated in a cohesive whole? Is it

18:20

really being built directly into the

18:21

processes in a company in a way that

18:23

it's hard to pull out? Often the issue

18:25

for companies in adoption of AI isn't

18:29

how good is the AI, it's how much do I

18:30

have to change the workflows and the

18:32

ways that my people do things in order

18:34

to adopt it. It's about change

18:36

management usually. It's not about

18:37

technology. And so if you've been able

18:39

to embed yourself enough into workflows

18:40

and how people do business and how they

18:42

work and how everything else kind of

18:43

ties together, that tends to be quite

18:45

durable.

18:46

>> Mhm.

18:46

>> Are you capturing and storing and using

18:48

proprietary data? Sometimes it's useful.

18:50

>> I think data moats in general are

18:51

overstated, but I think sometimes it can

18:54

be actually quite useful and that's

18:55

usually the system of record view of the

18:57

world. So, you know, there's a handful

18:59

of criteria around like will this thing

19:01

be long-term

19:03

defensible or not, and at the application

19:05

level that's often one potential lens on

19:08

it.

19:08

>> Mhm. So, question: if people are

19:11

listening to this and they are in the

19:13

position of perhaps a founder who should

19:16

consider identifying their kind of short

19:20

period of maximum valuation and perhaps

19:23

hitting the parachute in some way. What

19:26

are the options? Because I think of some

19:28

of these companies I'm not going to name

19:30

them but there are multiple companies

19:31

that have multi-billion dollar

19:33

valuations. There seems to be, again,

19:36

from a mostly lay person perspective

19:40

i.e. me

19:42

that the labs

19:45

probably can build what they are

19:48

currently selling without too much

19:50

trouble. Do they aim to be acquired by a

19:53

lab in which case there's sort of a

19:55

build versus buy decision for the lab

19:57

itself? Are they aiming for one of not

20:00

the open AIs or anthropics, but maybe

20:03

somebody who's trying to get more skin

20:05

in the game like Amazon or fill in the

20:09

blank? What are the exit options? I

20:11

think there's a lot of exit options. And

20:13

the thing that's crazy right now is if

20:14

you go back 10 or 15 years, the biggest

20:17

market cap in the world was like 300

20:19

billion.

20:20

>> Mhm.

20:20

>> The biggest tech market cap was, I don't

20:22

know, 200ish or something. I think the

20:24

biggest one at the time was Exxon or

20:26

somebody like 15 years ago. Mhm.

20:28

>> And over the last 10 or 15 years, what

20:31

happens is we suddenly ended up with

20:33

these multi-trillion dollar market caps,

20:34

which everybody thought was nuts at the

20:36

time, but things will probably only get

20:37

bigger. There'll probably be more

20:38

aggregation versus less into the biggest

20:40

winners. And there's more and more

20:43

companies who have these market caps

20:44

between say 100 billion and a few

20:46

trillion

20:47

>> in a way that's just unprecedented. And

20:49

that means there's enormous buying power

20:51

because 1% of 3 trillion is 30 billion,

20:54

right? You can give up 1% and pay $30

20:56

billion for something, which is insane,

20:58

right? That's that's pretty

20:59

unprecedented and that means that these

21:01

really big acquisitions can happen

21:03

>> for the companies that I'm imagining

21:04

again I don't want to name names that

21:07

may seem to have a limited lifespan

21:09

right when I'm in these these small

21:12

group threads with friends of mine who

21:13

are oftentimes, not always, but I'm in a

21:16

bunch of them and when they're

21:19

tech investors very successful tech

21:20

investors and I'm like okay these five

21:22

companies, you've got 10 chips, how would

21:24

you allocate your 10 chips? There's

21:26

certain companies that consistently get

21:28

zero even though they're reasonably

21:30

well-known. Why would one of the labs buy

21:34

one of those?

21:35

>> Depends on what it is. And it may be a

21:37

lab. It may be one of the big tech

21:38

incumbents and Apple, Amazon, right?

21:41

>> Google's kind of both things. There's

21:43

Oracle,

21:44

>> there's Samsung, there's Tesla, there's

21:47

SpaceX now in the market doing things.

21:50

There's a bunch of different buyers of

21:52

different types. There's Snowflake and

21:53

Data Bricks. There's Stripe. Coinbase if

21:56

you're doing financial service there's

21:57

just a ton of companies that actually

21:59

are quite large that's kind of the point

22:01

and so often you end up selling to one

22:03

of four things you can sell to one of

22:04

the big labs or hyperscalers or giant

22:07

tech companies you can sell to somebody

22:08

who cares a lot about your vertical so

22:10

for example a Thomson Reuters if you're

22:12

doing legal or accounting or things that

22:14

are kind of related to that

22:15

>> I mean I think actually one thing that

22:17

doesn't happen enough is merger of

22:18

competitors particularly private

22:20

companies where you can do that because

22:22

ultimately if your primary vector is

22:24

winning and you're neck and neck with

22:26

somebody and you're competing on every

22:28

deal and you're destroying pricing for

22:29

each other. Like maybe it's better to

22:30

just merge. It actually was X.com and

22:33

PayPal in the '90s, right? Elon Musk and the Confinity team were

22:36

running different companies and they

22:37

merged because they said, we're both

22:39

doing this. Why fight?

22:40

>> Yeah. Or Uber and Lyft way back in the day,

22:42

right? That might not have been a

22:43

merger. It might have been an

22:44

acquisition, but it's like

22:46

>> Yeah. And the rumor is that that almost

22:47

happened and then you know the Uber side

22:49

walked away from it. Mhm.

22:51

>> But all the money that Uber spent on

22:53

fighting Lyft for all those years maybe

22:54

would have been better spent just buying

22:56

them. Maybe not. I don't know the exact

22:57

math.

22:59

>> But often it actually does make sense to

23:01

say, you know what, like we'll just stop

23:03

fighting it out and we'll just combine

23:05

and just go win. Cuz if the primary

23:08

purpose is to win the market, you're

23:09

already fighting all these big

23:10

incumbents that already exist anyhow. So

23:12

why make it even harder?
>> As you

23:14

know, and we talk about this a lot, but

23:16

we'll talk about you with your investing

23:19

hat on. But before you even put that,

23:22

let's call it full-time investing hat

23:23

on,

23:25

you had a lot in your background that

23:27

may or may not have helped you. And I'm

23:28

curious if you look at your biology

23:32

background, the math background. Do you

23:35

think any of those things or other

23:38

elements materially contributed to how

23:41

you think about investing that has given

23:43

you an advantage in I suppose there are

23:46

different stages to kind of winning

23:48

deals but sometimes they're not crowded

23:50

but let's just talk about the selection

23:53

process.
>> The math stuff helped me, I think,

23:55

in two ways. One is it's helped me with

23:59

certain aspects of like technical or

24:02

algorithmic CS and understanding. And

24:04

sometimes that's useful

24:05

>> in the context of how certain things

24:07

work in AI or things like that or just

24:09

fluency of numbers and data and, I'd

24:12

call it nerd language or something.

24:14

>> And I did the math degree honestly just

24:16

for fun. And I think that's actually the

24:17

thing that was helpful.

24:19

>> I did an undergrad degree in math, so I

24:20

didn't go that far with it.

24:22

I did the very sort of abstract pure

24:24

math stuff and I think that was a good

24:26

forcing function of how to really think

24:28

logically step by step about things

24:31

because roughly the way that at least I

24:34

learned how to do proofs was you do the

24:37

logical sequence, but then sometimes you

24:39

do these intuitive leaps and then go

24:41

back and try and prove it to yourself

24:42

>> or flesh out the

24:44

>> the reasoning behind that intuitive leap

24:47

and I think sometimes investing is a

24:48

little bit like that. When did you first

24:51

have the inkling that you could be good

24:55

at investing? And that could be

24:57

investing at large. It could be maybe

25:00

within the context of our conversations,

25:02

startups and angel investing. When did

25:04

you first kind of go, "Huh, yeah, maybe

25:07

I could be good at this." Was there a

25:09

moment or a deal or anything like that

25:13

that comes to mind?

25:14

>> Not really. I'm really hard on myself,

25:16

so even now I second guess myself a lot.

25:18

Mhm.

25:19

>> Somebody was telling me that the two

25:20

people that always beat themselves up

25:22

the most in hindsight are me and this one

25:24

other person who's another well-known

25:26

founder/investor. And so I don't think

25:28

there's a single moment where I'm like,

25:30

"Wow, this makes sense for me to do." I

25:31

think it just kind of organically kept

25:33

going because I was getting into some

25:35

very strong companies and then, you

25:37

know, that allowed me to sort of

25:39

continue what I'm doing. But

25:41

>> okay.

25:41

>> Yeah. Wish I hadn't done it like that.

25:44

>> God damn it. You need to revise your

25:46

Genesis story like every every good

25:48

founder.

25:49

>> Yeah.

25:50

>> Yeah. I mean, ever since I was seven,

25:52

I've been thinking about investing in

25:53

technology.

25:54

>> All right. Now we're talking. So,

25:56

getting into those deals, right?

26:00

What allowed you to get into those

26:02

deals, right? Because some people have

26:03

an informational advantage and they put

26:05

themselves in a position to have

26:06

an informational advantage, right? And I

26:08

think that had I not I don't want this

26:11

to be a leading question. It's like had

26:12

I not moved to Silicon Valley

26:16

>> when I did like 2000

26:18

and then subsequently you know stayed

26:21

there moved to San Francisco

26:22

specifically like nothing that I was

26:24

able to do in angel investing would have

26:26

been possible. So but there's more to

26:28

your story because a lot of people moved

26:30

there with hopes of startup riches in

26:34

whatever capacity. Not saying that

26:35

that's why you moved there, but what was

26:38

it that allowed you to get into those

26:40

deals? There are certain things that

26:43

come to mind based on our prior

26:45

conversations, but I'll just leave it at

26:48

that. Like, why were you able to get

26:52

into or select those deals? I think

26:54

there's what happened early and what

26:55

happens now. And I think those two

26:56

things are different. I think to your

26:58

point, the single most important thing

27:00

for anybody wanting to break into any

27:02

industry is go to the headquarters or

27:04

cluster of that industry. Like move to

27:07

wherever that thing is. And all the

27:09

advice of you can do anything from

27:11

anywhere and everything's remote is all

27:13

BS. And you see that for every industry,

27:15

not just tech. You know, if you wanted

27:16

to get into the movie business, people

27:18

wouldn't say, "Hey, you can write a film

27:21

script from anywhere. You can digitally

27:22

score from anywhere. You can edit it

27:24

from anywhere. You can film it

27:25

anywhere." They're like, "Go to Dallas

27:27

and join their burgeoning, you know,

27:29

film scene." They'd say, "Go to

27:31

Hollywood." And if you want to do

27:32

something in finance and you're like,

27:33

"Well, you could raise money from

27:34

anywhere and come up with trading

27:35

strategies and a hedge fund strategy from

27:37

anywhere and you could do it from

27:38

anywhere." People wouldn't say, "Hey, go

27:40

to, you know, whatever, Seattle." They'd

27:42

be like, "Go to New York or go to XYZ

27:44

financial center." So, the same is true

27:46

for tech. And Shan on my team has been

27:50

performing this sort of unicorn analysis

27:51

of where is all the private market cap

27:53

aggregating for technology. And

27:55

traditionally about half of it's been

27:57

the US and then half of that has been

27:58

the Bay Area. But with AI 91% of private

28:02

technology market cap is the Bay Area.

28:04

91% of the entire global set of AI

28:07

market cap is all in one 10-by-10-mile area.

28:11

Right? So if you want to do stuff in AI,

28:14

you should probably be in the Bay Area.

28:15

Mhm.

28:16

>> Probably the secondary place is New York

28:17

and then after that it just drops off a

28:19

cliff. And really it's the Bay Area. If

28:22

you want to do defense tech, you

28:24

probably should be in Southern

28:26

California close to where SpaceX and

28:28

Anderl are and sort of Irvine, Orange

28:30

County, etc. or Elsaundo. There's a lot

28:32

of startups there. If you want to do

28:33

fintech and crypto, maybe it's New York.

28:35

But the reality is these are very strong

28:37

clusters. So to your point, number one

28:39

is I was just in the right location.

28:40

>> Mhm. I was in the right networks, and my

28:43

default was I was running a startup

28:45

myself. I was at Google for many years

28:46

and then I left to start a company and

28:48

people just started coming to me for

28:50

advice and the way I ended up investing

28:52

in Airbnb is I was helping them when

28:54

they were eight people or something

28:55

raise their series A and I introduced

28:57

them to a bunch of people and help with

28:58

some of the strategy there in very light

28:59

ways right they would have done it

29:00

without me, but they said hey at the

29:02

end of it do you want to invest a little

29:03

bit I said great that sounds wonderful

29:06

so it's very organic or the way I

29:07

invested in Stripe is I'd sold sort of an

29:10

infrastructure early API company to

29:12

Twitter and when Twitter was say 90

29:14

people or so.

29:15

And I sent an email to Patrick, the CEO

29:17

of Stripe, just saying, "Hey, I've heard

29:19

great things about you and I really like

29:20

what Stripe is doing and I want to use

29:21

it for my own startup." And I'd sold

29:23

this API company myself. Do you want to

29:24

just talk about this stuff? And so I

29:26

went on a couple walks and then a week

29:29

or two later, he texted me and he's like,

29:30

"Hey, we're doing a round. Do you want

29:31

to invest?" So the first few things that

29:33

I did were very organic where the

29:35

founders were like, "Want you on board?"

29:37

>> Mhm. I didn't think, oh, I should be an

29:39

investor and I'm going to chase things

29:40

and I just really liked talking to smart

29:42

people and I liked working on certain

29:44

business problems and I love technology

29:46

and his translation the and so it was

29:48

very like you know I was just a nerd and

29:50

I I met other nerds and we hit it off.

29:53

It's kind of the early like story for

29:55

me. It just struck me that I'm sure

29:59

people have heard or I'm sure you've

30:00

heard this before but you know if you

30:01

want money ask for advice and if you

30:03

want advice ask for money. It just

30:05

struck me that it it kind of goes the

30:07

other way around too. It's like if you

30:09

offer a bunch of advice, often times you

30:12

get to give money and if you try to give

30:14

money, you might get solicited for

30:16

advice.

30:17

>> Oh. Yeah, that's a good point.

30:19

>> When did you write the high growth

30:21

handbook? When was that published?

30:23

>> It's a while ago now. It's probably like

30:25

sevenish years ago. Something like that.

30:27

>> Seven years ago. All right. We're going

30:28

to come back to that in a minute

30:31

because you you were in the right place

30:34

geographically speaking, right? You were

30:36

in the center of the switchboard

30:38

and like you said, these some of these

30:41

initial kind of standout investments

30:43

came about very organically. And what

30:47

I'd be curious to hear because you also

30:49

said yourself not too long ago that

30:50

there's there's what I did then, there's

30:52

what I did now. There's also what you

30:53

did in between right along the way. And

30:56

I'm wondering for instance if you would

30:58

still stand by this. This is from that

31:01

first round interview I was mentioning.

31:02

As a general rule when I make

31:04

investments it's market first and the

31:06

strength of the team second. And there's

31:07

more to it. But would you still agree

31:09

with that?

31:10

>> 90% yes. every once while you meet

31:11

somebody exceptional and you just back

31:13

them on something maybe super early, like

31:15

when I led the first round of Perplexity

31:18

>> like the very very first round and the

31:20

way that came about was Aravind, the CEO,

31:23

just I think he like pinged me on

31:25

LinkedIn literally and this was when

31:27

nobody was doing anything in AI and he

31:28

was like an open AI engineer or

31:30

researcher and he's like hey I'm at open

31:31

AI, which nobody cared about at the time,

31:33

and I'm thinking of doing something in

31:35

AI and I heard that you're talking about

31:37

this stuff and nobody else is talking

31:38

about it and can we meet up and So we

31:40

just started meeting every two weeks and

31:42

brainstorming, right? And then that led

31:43

to like investing in that. And that was

31:45

kind of a a people first thing where he

31:46

was just so good and every time we

31:48

talked, he'd show up a week later with a

31:51

thing that we discussed built. Like who

31:53

does that?

31:53

>> Yeah, that's a good sign.

31:55

>> So good.

31:56

>> Or the way I ended up investing in

31:58

Anduril was Google shutting down Maven,

32:01

which was their sort of defense project.

32:03

And so I think, well, if the incumbents

32:04

aren't going to do it what a great place

32:06

for startups to play because there's

32:08

been a long history of Silicon Valley

32:11

and the defense industry that's HP and

32:13

that's a lot of the you know early

32:14

brands and so I was just looking for

32:17

something there or somebody to work on

32:19

this area and it was very unpopular at

32:20

the time and I ran into I think it was

32:22

Trey Stevens who's one of the

32:23

co-founders of Anduril, who's also at

32:25

Founders Fund, at lunch or something

32:27

else. Again, right city to be in. And he

32:29

said oh I'm working on this new defense

32:31

thing and I said amazing let's talk

32:32

about about it. Sometimes it's just

32:34

looking for these things too in a market

32:35

and sometimes it's people. So Anduril was

32:37

looking for a market and then finding

32:39

amazing people. Perplexity was kind of

32:41

in between where it was like I was

32:42

looking at everything in AI cuz I

32:44

thought it was going to be incredibly

32:45

important but not very many people were.

32:47

And then I just ran across an

32:48

exceptional individual and that's when I

32:51

funded OpenAI. That's when I funded

32:53

Harvey, which is the early legal one. I

32:55

funded a lot of really early stuff

32:56

because they were the only people doing

32:57

anything

32:58

>> in this market that I thought would be

33:00

really important. Let me come back to a

33:01

few things you said. So you mentioned

33:03

the Perplexity founder or later the

33:06

founder who said you're talking

33:07

about this stuff, right? Or he heard or

33:10

read or found you talking about this

33:11

stuff.

33:12

>> Where was that? Was that posted on your

33:14

blog? Was it somewhere else? How did he

33:16

actually find you talking about

33:18

anything? I mean, I think he pinged me

33:20

in part because I was involved with a

33:21

bunch of the prior wave of technology

33:23

companies. Airbnb, Stripe, Coinbase,

33:26

Instacart, Square, a bunch of stuff like

33:29

that. And so I think at that point I was

33:31

already known as founder and investor.

33:35

But then on top of that, I was just

33:37

trolling AI researchers and just asking

33:39

them about what's going on because it

33:40

was so interesting. There's a bunch of

33:42

art that was being done with these

33:43

things called GANs at the time these

33:45

generative adversarial networks. And so

33:46

I was playing around with that. I tried

33:48

to hire engineers to build me

33:49

effectively what became Midjourney, because I

33:51

just thought it'd be really cool to

33:52

make it easy to make AI art.
>> Okay. So

33:55

let me pause for a second because

33:56

this is my second question and it's a

33:58

good time.

33:59

when you mentioned, you know, AI, I

34:01

thought it would be incredibly

34:02

important.

34:02

>> Yeah.

34:03

>> What were the indicators of that, right?

34:07

What was the smoke in the distance where

34:09

you're like, "Oh, that's an interesting

34:11

direction." I think there was two or

34:12

three things. AI was one of those things

34:14

that people have always talked about.

34:15

So, when I was doing my math degree, I

34:17

took a lot of kind of theoretical CS

34:18

classes. There were the early neural

34:20

network classes and things like that and

34:22

the math behind it and and so there's

34:24

always this promise of building these

34:25

artificial intelligences of different

34:27

forms. And one could argue Google was the

34:29

first AI-first company, and back then it

34:31

was called machine learning and it was

34:33

different technology basis in some sense

34:35

and I think 2012 was when AlexNet came

34:37

out and there's this proof that you can

34:39

start scaling things and have really

34:40

interesting characteristics in terms of

34:42

how AI systems work. And then 2017 is

34:44

when the team at Google invented the

34:46

transformer architecture which

34:48

everything is based on now or roughly

34:49

everything. And so for example, if you

34:51

look at the GPT in ChatGPT, the T stands

34:53

for transformer. And around 2020ish, I

34:57

think was when GPT-3 came out, and that

35:01

was such a big step from GPT-2. And it

35:03

still wasn't good enough to really do

35:04

stuff with, but you you're like, "Oh

35:06

[ __ ], the scaling law papers are out.

35:09

The step function and capabilities was

35:10

huge." You suddenly have a generalizable

35:12

model available via an API that anybody

35:15

can ping. And so just extrapolate that

35:17

out to the next step. And this is going

35:18

to be really important. Mhm. So it's

35:20

basically looking at that capability

35:21

step and playing around with the

35:22

technology and then reading the scaling

35:24

law papers or just in general the the

35:26

scaling laws seem to work for everything

35:28

and you're like wow this is going to be

35:30

really really important so let me start

35:32

getting involved with it.

35:33

>> Do you think you would have or could

35:35

have done that without a mathematics

35:37

background? I mean I'm guessing there

35:39

were probably some other folks but that

35:41

leads me to the question of like how are

35:43

you finding and ingesting that right?

35:45

Was it the talk of the town? So it was

35:47

in a sense like within your social

35:48

circles and the networks that you're a

35:50

part of, it was an open discussion, so you

35:53

were engaged with it or are you

35:55

ingesting vast quantities of information

35:58

from different fields and this happened

35:59

to be something that really caught your

36:01

attention.

36:01

>> I guess it's three things. I mean I've

36:03

always ingested a lot of information

36:04

from a lot of different fields just cuz

36:05

I like learning about stuff and I was

36:07

always this mix of like math and biology

36:10

and anime and art and other things. So,

36:13

you know, it was always kind of a mix.

36:14

And then it was something that my

36:16

friends were talking about, but it was a

36:17

bit more like toy like, oh, this is cool

36:19

and look at what came out, but most

36:21

people didn't then extrapolate. It's

36:22

kind of like early crypto or Bitcoin.

36:24

Like, everybody was talking about it,

36:25

but very few people bought it.

36:26

>> Mhm.

36:26

>> And so, I think that was part of it. And

36:28

then third, honestly, I just thought it

36:29

was really neat stuff that I kept

36:30

playing around with. This is back to the

36:32

GAN stuff and the art where these

36:35

different models would come out and you

36:37

could mess around with them. And one of

36:39

the things that's really under discussed

36:40

in terms of the importance of it

36:43

relative to this wave of foundation

36:44

models and AI and everything else is the

36:46

way AI or machine learning used to work

36:49

is your team at a company or wherever

36:52

else would go and there'd be what's

36:54

known as an MLOps team, a machine-learning operations team,

36:56

whose whole thing was like helping you

36:58

set up all the data and the pipelines

37:00

and everything to train a model and you

37:02

train a model that was custom to your

37:03

use case and what you were trying to

37:04

accomplish. And then it was you had to

37:07

build a bunch of internal services to

37:09

interact with that model. So it's a huge

37:10

pain to get to the point where you had a

37:13

working ML system up and running in

37:14

production

37:16

and then suddenly you have a thing where

37:18

you just do an API call. So with a line

37:21

of code or a few lines of code, anybody

37:22

anywhere in the world can ping it. But

37:25

not just that, it's generalizable. So

37:26

it's not just specialized to one use

37:28

case like spell correction or whatever.

37:32

You can use it for anything.
>> Mhm.

37:33

>> and it has all of the internet embedded

37:36

in it in some sense in terms of the

37:37

knowledge base

37:38

>> and it can start having these advanced

37:40

reasoning capabilities. But one of the

37:41

most important things is hey you can get

37:43

it with a couple lines of code. You

37:44

don't have to go and build an MLOps

37:46

team. You don't have to host it. You just

37:47

interact with it. You don't have to do

37:48

all this extra stuff. It just works.
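(To illustrate the "few lines of code" point, a minimal sketch using the OpenAI Python client; the model name and prompt are just examples, not specifics from the conversation.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One generalizable model behind one API call: no MLOps team,
# no custom training pipeline, no hosting.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Fix the spelling in this sentence: ..."}],
)
print(response.choices[0].message.content)
```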

37:50

>> Mhm.

37:50

>> That's really important.

37:52

>> It's huge.

37:53

>> Yeah. It's kind of hard to overstate. I

37:56

have a million questions for you. The

37:58

problem with this is like the

37:59

embarrassment of riches of directions

38:01

that we could go.

38:02

>> Mhm. So I am using in my team Claude

38:05

Code and assorted tools for all sorts of

38:08

stuff right now, right? And one of them

38:11

it just so happens overlaps with an area

38:14

of great skill for you and experience

38:18

which is angel investing. So this is the

38:21

first time where I feel really enabled

38:24

to do this, and there is some manual effort

38:27

involved as you might imagine but to go

38:29

back and do an analysis of 20 years of

38:32

angel investing

38:33

>> to try to do any number of things and I

38:36

suspect that a lot of what interests me

38:38

is not particularly useful like doing

38:40

some counterfactuals what if I had held

38:42

each of these for three years for 5

38:44

years for whatever I mean that's kind of

38:45

like just Opus Dei, whipping myself on

38:47

the back. Yeah,

38:48

>> for the most part. But in doing an

38:50

analysis like that, there are certain

38:52

things that immediately come to mind for

38:54

me that might be of interest. And I want

38:56

to hear what you would do, if you would

38:58

even do this. I mean, part of it is

39:00

frankly just curiosity, right? Are the

39:02

stories I tell myself about this

39:04

>> true or not?

39:06

>> And so I'm interested like who made

39:09

certain introductions? Are there certain

39:10

people who just sent me their, basically,

39:12

companies in hospice care and like

39:15

shipped them over as like a last-ditch

39:17

effort? Are there people who actually

39:18

sent me good stuff consistently etc etc.

39:21

>> So there are a million and one ways I

39:23

could try to interrogate the data and

39:26

enrich it. We're doing a pretty good job

39:27

of enriching it. I mean Claude is and

39:29

other tools you know OpenAI is very good

39:31

at this. What are some of the more

39:33

interesting questions or lines of kind

39:37

of examination you think looking back

39:40

like whatever it is in my case it's

39:42

roughly 20 years of stuff. The weird

39:44

thing I've been doing is uploading

39:46

pictures of founders and asking the

39:47

models to predict if they'd be good

39:49

founders.
>> Oh wow. Okay.
>> Because if you

39:52

think about it, we do this all the time

39:53

when we meet people, right? We quickly

39:55

try to create an assessment of that

39:57

person

39:58

>> and their personality and what they're

40:00

like. And there's all these micro

40:01

features like do you have crow's feet by

40:03

your eyes which suggest that your smiles

40:05

are genuine and what does that imply

40:06

about the sense of humor you have or

40:08

furrowed your brow over time and what does

40:10

that you know so there's all these like

40:12

micro features

40:13

>> and when you meet people you actually

40:14

can get a pretty quick impression of

40:16

them pretty fast it doesn't mean it's

40:17

correct right

40:18

>> but we actually do this really fast as

40:20

people

40:21

>> so I have this whole like set of prompts

40:23

that I've been messing around with just

40:25

for fun

40:26

>> around can you extrapolate like a

40:28

person's personality based off of a few

40:30

images

40:31

>> and therefore can you be predictive

40:32

about their behavior in any way? I think

40:34

that's fun, right?

40:35

>> Yeah. Are you finding any signal there?

40:37

>> Yeah, it works pretty well.

40:38

>> Wow.

40:39

>> So, I've been doing the weird [ __ ]

40:40

right? Like

40:41

>> practice smiling people.

40:43

>> Yeah. Yeah. I think it's interesting,

40:44

right? Because we do this all the time

40:46

where we read people, right? And that's

40:49

part of the prompt. It's like you're a

40:50

very good cold reader of people based on

40:52

micro features and etc etc. kind of

40:55

spell it out and then based on that, not

40:58

only give me your interpretation of this

41:00

person, but explain the specific micro

41:02

features for each thing that you're

41:04

stating about the person

41:05

>> and it'll break it down for you.
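
For anyone who wants to experiment with this kind of image "cold read," here is a minimal sketch assuming the Anthropic Python SDK; the model id, file name, and prompt wording are illustrative assumptions, not the actual prompts discussed here.

```python
# Sketch: ask a vision-capable model for a "cold read" of a photo, and require
# it to tie every claim to a specific micro feature. All names are assumptions.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("founder.jpg", "rb") as f:  # hypothetical photo
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id; any vision model works
    max_tokens=1024,
    system=(
        "You are a very good cold reader of people based on micro features: "
        "crow's feet, brow lines, posture, and so on."
    ),
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {
                "type": "base64", "media_type": "image/jpeg", "data": image_b64}},
            {"type": "text", "text": (
                "Give your interpretation of this person, and for each thing you "
                "state, explain the specific micro feature it is based on.")},
        ],
    }],
)
print(message.content[0].text)
```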

41:07

>> It's amazing. Like, imagine what this

41:09

technology is. It's crazy. And again,

41:11

I'm not saying it's fully accurate and

41:12

I'm not saying it'll be predictive and

41:14

but it's done pretty well in terms of

41:17

nailing people. It's even done things

41:18

like, "Oh, this person probably has this

41:20

type of sense of humor," or, "This

41:22

person probably holds themselves back in

41:24

most social settings and then chimes in

41:27

with a witty remark that nobody

41:29

expects or whatever." I mean, it's very

41:30

specific.

41:31

>> It's very specific.

41:33

>> Wow, that's amazing. Right. And so, I've

41:35

been doing stuff like that, which may

41:37

not be your question, but I've been

41:39

finding it really fun. It's related,

41:41

right, in the sense that and I'm sure

41:43

I'm missing some steps, but I I love

41:46

angel investing, and the dose makes the

41:48

poison. So, there's usually a case to be

41:50

made when I get to a certain threshold,

41:53

I'm like, "Okay, this isn't fun

41:55

anymore." Like, I love dark chocolate,

41:56

too. But I don't want just to be

41:58

force-fed dark chocolate all day. But,

42:01

and you and I have talked about this,

42:02

right? But I really do enjoy the

42:05

learning and the sport of it frankly and

42:08

interacting with some very very smart

42:11

people. Not not all of them work out as

42:13

far as founders of companies, but

42:16

ultimately I'm trying to figure out how

42:18

to separate signal from noise. And also

42:22

it's fun to try to use anything but in

42:25

this case investing to sharpen your own

42:28

thinking, right? and to stress test your

42:30

own beliefs and the assumptions that

42:31

undergird some of your predictions, things

42:33

like that. I'm just wondering if you've

42:35

ever done like sort of a retrospective

42:37

analysis of your startup investing or if

42:39

you're like, no, Marc Andreessen style,

42:42

only forward.

42:43

>> Yeah. Early on when I was first starting

42:46

to invest, I would have this long grid

42:48

of things by which I would score each

42:49

company

42:50

>> and then I'd go back and see if it was

42:52

correct.

42:52

>> It was roughly correct. I think the hard

42:54

part is there's a lot of like randomness

42:57

in outcomes.

42:58

There's the company that sells for a few

43:00

billion dollars that you thought was

43:01

dead or whatever it is.

43:03

>> Sure. And so how do you score things

43:05

like that? Right now we're in this

43:07

really weird market moment where

43:09

trillions of dollars of market cap are

43:11

all chasing the same prize and so

43:13

they're going to do all sorts of stuff

43:14

that wouldn't happen normally.

43:16

>> Mh.

43:16

>> So it's really hard to account for that

43:18

kind of thing, right? Relative to all

43:20

this. I'm much more in the Marc

43:21

Andreessen camp of like I think very little

43:24

about the past. Mhm.

43:25

>> I think close to zero about my own past.

43:27

I just am like, let's keep going.

43:30

>> Mhm.

43:30

>> And maybe that's bad and there should be

43:32

dramatically more self-reflection. And I

43:33

try to self-reflect in the moment, but I

43:35

don't try to re-extrapolate and examine

43:37

my entire life and decisions. And

43:39

>> Mhm.

43:40

>> If anything, most of the decisions have

43:42

been ones where I'm really upset with

43:44

myself for not being more aggressive on

43:46

something. Mhm.

43:46

>> In other words, I invested in the

43:48

company, but I should have tried even

43:50

harder to invest more even if I tried

43:51

really, really hard because there's a

43:53

handful of companies that really matter

43:55

and that's all that kind of matters as

43:57

an investor. Obviously, as a person, I

43:59

enjoy getting involved with different

44:00

companies and different founders and

44:02

helping them whether the thing works or

44:03

not or I think the technology is

44:04

interesting or whatever. But the reality

44:06

is from a returns perspective, there's a

44:09

very clear power law that people talk

44:10

about and it's true. And I remember a

44:13

friend of mine did this analysis. I

44:14

think it may have been

44:17

Drew Milner or someone where it's like

44:18

look at all the companies from like I

44:21

don't remember the exact dates, 2000 or

44:22

2004 until today in technology.

44:25

>> Mhm.

44:26

>> And it was something like 100

44:27

companies drove like 90-something percent of

44:29

all the returns.

44:31

>> Mhm.

44:31

>> And 10 companies total drove like 80% of

44:36

all returns over a two decade period in

44:38

technology.

44:40

>> Yeah.

44:41

If you weren't in those 10 companies, you were a

44:44

bad investor. Mhm.

44:45

>> And once you start dealing with these

44:47

power laws and these outcomes, how can

44:49

you rate that? Right. It's basically,

44:50

did you hit one of 10 things or not?

44:52

That's really the rating. That's

44:53

probably the correct rating for

44:54

investment.
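
A toy simulation makes this point concrete. This is not Elad's analysis; the distribution and its tail exponent are assumptions chosen only to produce the heavy-tailed shape he describes.

```python
# Sketch: under a heavy-tailed (Pareto) draw of outcomes, a handful of
# companies dominate total returns, so the only rating that matters is
# whether you were in them. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
outcomes = np.sort(rng.pareto(1.05, size=1000))  # 1,000 hypothetical exit values

total = outcomes.sum()
print(f"top  10 of 1,000 outcomes: {outcomes[-10:].sum() / total:.0%} of returns")
print(f"top 100 of 1,000 outcomes: {outcomes[-100:].sum() / total:.0%} of returns")
```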

44:55

>> I'd love to try to focus on some

44:58

earlyish decisions

45:01

on this podcast, right? Because like you

45:04

said, there are the earlier decisions.

45:05

There's how you did things then, there's

45:07

how you do things now. I won't

45:08

say that one is better than the other,

45:11

but certainly what you do in the past

45:13

tends to inform what you're able to do

45:15

and what you do in the present. And what

45:17

I'm curious about, and we won't spend a

45:20

ton of time on this, but it might be

45:21

interesting to folks, is to discuss when

45:25

you moved from purely doing angel

45:27

investing yourself to

45:30

involving other investors in your deals,

45:33

right? And there are multiple ways to do

45:37

this, but the reason I want to ask this

45:41

is because you did a number of SPVs.

45:46

I'll explain what that is. Special

45:47

purpose vehicle, but for folks, you

45:48

might be familiar with a venture capital

45:50

firm. They have funds and they raise,

45:53

let's just call it $100 million for a

45:55

fund. It can be more or less of course.

45:58

Then they invest in a bunch of different

45:59

companies. And then you sort of see who

46:02

wins, who loses, and then if there are

46:04

profits, I guess conventionally, let's

46:06

just use the textbook example. The

46:08

venture capital firm takes 20% of the

46:10

upside, and then the LPs, the

46:13

investors get 80%, and the venture

46:15

capital firm takes a management fee to

46:17

keep the lights on. Although it usually

46:18

does a lot more than keep the lights on.
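
To make the textbook split concrete, here is the arithmetic with made-up numbers; the $100 million figure is the one used above, and the 4x outcome is invented for illustration.

```python
# Sketch of the textbook 20/80 carry split described above (illustrative numbers).
fund_size = 100e6       # the "$100 million" fund
exit_proceeds = 400e6   # hypothetical: the portfolio returns 4x overall
profit = exit_proceeds - fund_size
carry = 0.20 * profit                 # the firm's 20% of the upside
to_lps = exit_proceeds - carry        # LPs get capital back plus 80% of profit
print(f"GP carry: ${carry / 1e6:.0f}M, LPs receive: ${to_lps / 1e6:.0f}M")
```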

46:20

With SPVs, you're investing in,

46:23

let's just say for simplicity, a single

46:25

company, right? Mhm.

46:27

>> And there are advantages to that in

46:30

simplicity for somebody who's putting

46:32

together the SPV, but you also have a

46:35

lot of reputational risk cuz if you have

46:37

a fund and you have a couple of losers,

46:39

your investors don't automatically go to

46:41

zero, right? But if an SPV goes to

46:43

zero,

46:45

that could really hurt you

46:46

reputationally. And when I look at some

46:49

of your early SPVs, which I think

46:51

included certainly a number of name

46:53

brands like Instacart and so on, how did

46:56

you choose

46:58

which companies to do the SPVs with,

47:00

right? Because it seems like a very

47:02

important set of decisions to lay the

47:04

groundwork for creating optionality for

47:06

what you do after that.

47:08

>> I think to your point, I've always been

47:09

terrified of losing other people's

47:11

money. Like I'm fine if I lose my own

47:13

money.

47:14

>> It's my decision. I'm an adult. It's

47:16

okay. And the people

47:17

giving me money are adults or

47:18

institutions, etc., asking me to invest on their

47:21

behalf. But similarly there I was just

47:23

terrified of ever losing money for

47:25

people. And so I've tried over time to

47:27

be judicious about the SPVs that I did

47:29

early on. And the focus was on things

47:31

that I thought would really be outsized

47:34

companies. And so that was to your point

47:35

Instacart. It was early Stripe. It was

47:38

Coinbase. It was a couple things like

47:39

that that were amongst my very first

47:41

SPVs. And the emphasis was very much on

47:43

do I think this can be a massive thing

47:45

and also do I think there's enough

47:46

downside protection in some sense that

47:48

even if it didn't work as well as I

47:49

thought it would still be a good outcome

47:50

for people. So yeah, I try to do that

47:52

very diligently. It's interesting

47:54

because a lot of people ping me for help

47:56

as they think about becoming investors

47:57

or they're scouts for a fund which means

47:59

basically they're given a small amount

48:00

of money by a venture capital fund.

48:02

Sequoia famously has this program. They

48:04

give people money and then those people

48:05

invest money on their behalf. And some

48:08

of the scouts that I've talked to

48:10

basically treat it like free money or an

48:12

option. They're just kind of like, I'll

48:13

just write a bunch of stuff. Maybe

48:14

something works. And I pointed out to

48:16

them, hey, if you actually want to

48:17

become a professional investor at some

48:18

point, this is kind of your track

48:20

record.

48:20

>> Mhm.

48:21

>> A, you're a fiduciary in some sense, so

48:22

maybe you should be more careful from that

48:23

perspective. But B, you know, this will

48:26

establish like your track record. And do

48:27

you want to have a good one or bad one?

48:29

And how do you think about that? And

48:30

again, sometimes people just get lucky

48:31

and they hit the one thing out of a

48:33

hundred, but that more than returns

48:35

everything and they look great.

48:37

But it's hard to be consistently good at

48:38

this stuff or consistently hit great

48:40

companies.

48:40

>> I want to double click on a few things

48:42

you said and maybe you could walk us

48:44

through a pseudonymous example. It

48:47

doesn't need to be a named company, but

48:49

when you're talking about setting your

48:51

track record, right? You did an

48:52

excellent job of that before you then

48:54

went on later to raise funds and so on.

48:57

And I would love you to perhaps explain

49:00

some of the things you do in diligence

49:02

or how you weight things differently and

49:04

also how you think about like the capped

49:06

minimum downside. I'm not sure that's

49:08

the exact wording that you used in

49:11

selecting those deals, right? Because

49:12

you could have selected any number of

49:14

deals on a sort of due diligence level.

49:17

What's the kind of stuff that you focus

49:18

on maybe more than others? And what are

49:20

the things you pay less attention to

49:22

than others? I think there's a big

49:24

difference between early and late

49:25

things.

49:26

>> On the early side, to the point

49:28

earlier, I tend to spend a lot more time

49:29

in the market than most early stage

49:31

investors. Most early stage investors

49:32

say, "I just care about the team and how

49:34

good are they?"

49:35

>> But I've seen amazing teams crushed by

49:37

terrible markets and I've seen

49:38

reasonably crappy teams do very well.

49:39

And so, you know, at this point, I think

49:41

the market is more important. Although I

49:42

think obviously great teams can find

49:44

their way if they decide to shift around

49:45

a bit.

49:46

>> I index a lot on market early and that

49:48

may be customer calls. That may be

49:50

trying to understand, do I think

49:50

something could be big? It could just be

49:52

some intuition around, hey, you know,

49:55

defense is really important. Nobody's

49:56

doing defense. Let me find a defense

49:58

company. Right? I tend to index a lot on

49:59

that. And relatedly, I've tended to

50:01

avoid science projects. And there's some

50:04

people who get really distracted by,

50:06

wow, this is really cool. It's quantum

50:08

and it's this and it's that. And I've

50:10

largely avoided those things. And, you

50:11

know, sometimes I miss things that were

50:12

really good. But often that was the

50:14

right call. I actually think SPACs saved

50:17

sort of the hard tech and science-based

50:19

investing industry because if you look

50:21

at what happened basically at the market

50:23

peak a bunch of SPACs took a bunch of

50:25

companies public that would not have

50:27

been able to raise money in private

50:28

markets later and they gave them enough

50:30

money to keep going but more importantly

50:31

they returned a bunch of money to these

50:33

hard tech funds and that saved them from

50:35

going under. What gave them all their

50:36

returns was basically the SPAC era. So,

50:38

Chamath basically saved hard tech. I

50:40

mean that seriously, not tongue-in-cheek. And I

50:43

largely avoided that kind of class of

50:44

companies. And I'm not saying it was

50:45

smart. I would have made money off of

50:47

it. I just thought there was all sorts

50:48

of capitalization issues and science

50:50

risk and market risk and other things to

50:52

them. For later stage stuff, the hard

50:55

part often is everything on paper gets

50:58

modeled out for a late stage company as

50:59

a 2 to 3x from that investment point,

51:03

>> right? because all the funds that are

51:05

driving the rounds underwrite against

51:06

some IRR clock, 25% IRR, whatever it is

51:09

and so they all come up with these

51:10

models and the models all say all these

51:12

companies are basically going to two to

51:13

3x and the art there or the science

51:16

there, whatever you want to call it, is: is

51:18

that a 0.5x company, is it going to drop in

51:20

value or is that a 10x and how do you

51:22

know it's a 10x versus a 2 to 3x versus

51:24

a 0.5x, and that's the harder part of growth

51:27

investing and there's a subset of things

51:28

that you're like this thing will just

51:29

keep going and here's why but often it's

51:31

not mathematical often that's just like

51:33

some market dynamic or some core insight

51:36

or some market share question and people

51:38

tend to make that stuff really

51:39

complicated and they have these really

51:40

complicated multi-page models and

51:42

50-page memos and all the rest and often

51:45

these things boil down to one single

51:47

question. What is the one thing I need

51:49

to believe about this company that makes

51:50

me think it's going to continue to be

51:52

really big?

51:53

>> If it's three things, it's too

51:54

complicated. It's probably not going to

51:55

work. If it's no things, then it doesn't

51:57

make much sense. So usually there's one

51:59

or two things that are really the core

52:00

insights you need to understand like the

52:02

outcome for something.
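
For what it's worth, the "everything models out to 2 to 3x" observation falls straight out of the IRR math mentioned above. A quick sketch, using the 25% figure from the conversation and assumed holding periods:

```python
# Sketch: why underwriting to a ~25% IRR makes every late-stage model say 2-3x.
# The holding periods are assumptions; the 25% rate is the one quoted above.
irr = 0.25
for years in (3, 4, 5):
    multiple = (1 + irr) ** years  # compound growth at the target IRR
    print(f"{years} years at {irr:.0%} IRR -> {multiple:.1f}x")
# 3 years -> 2.0x, 4 years -> 2.4x, 5 years -> 3.1x
```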

52:03

>> Could you give an example of one of

52:06

those beliefs for any company that comes

52:08

to mind?

52:09

>> I'll give you two or three of them. I

52:10

mean Coinbase part of it was just hey

52:13

this is an index on crypto and crypto

52:14

will keep growing because if Coinbase

52:16

trades every main cryptocurrency and

52:18

they take a cut of every transaction and

52:20

have enough volume, you've effectively bought

52:22

a basket of every cryptocurrency by

52:23

investing in Coinbase.

52:24

>> Mhm.

52:25

>> That was the premise there. Stripe it

52:27

was they're an index on e-commerce and

52:29

e-commerce will keep growing back then.

52:30

Now it's much more complex and there's

52:31

all sorts of great drivers of its

52:33

performance. Anduril was, hey, machine

52:36

vision and drones are going to be

52:37

important. AI and drones are going to be

52:38

important for defense.

52:40

>> That's it.

52:40

>> I mean it was more complicated than

52:42

that. I'm just saying like that was the

52:43

fundamental

52:44

>> well that was it for the belief the core

52:45

belief

52:46

>> there was like cost plus model versus

52:49

hardware margin. You know, Anduril

52:51

actually had four or five things that

52:52

were important there that were kind of

52:54

like a checklist for a defense tech

52:55

company, but for a lot of the other

52:57

ones, it was like e-commerce is good.

52:59

>> This is probably too inside baseball,

53:01

but what were the stages of the

53:03

companies that you mentioned when you

53:06

created the SPVS?

53:08

>> Roughly.

53:08

>> Well, I first invested in Stripe when it

53:10

was like eight people and then I kept

53:11

following on and I ran out of my own

53:13

money, frankly, and that's when I

53:14

started doing SPVS. So, I think I did my

53:17

first SPV in Stripe around the series

53:19

Cish.

53:20

>> Mhm.

53:20

>> We're in there.

53:22

>> Mhm.

53:22

>> Something like that.

53:23

>> Got it. And were the others more or less

53:25

similarish? Instacart, etc.

53:28

>> It was probably roughly in that

53:29

ballpark, series C and D, kind of that

53:32

range.

53:33

>> I didn't have funds and everything else.

53:35

And, you know, I was putting as much as

53:36

I could personally into these things

53:37

both earlier, but honestly, I just kept

53:39

going when I could. When you're looking

53:41

at trying to determine if something is

53:43

a 0.5x or a 10x, in addition to the core

53:46

belief, what are other layers of due

53:48

diligence that you bring to bear on

53:50

trying to ascertain that where something

53:51

falls on that spectrum?

53:53

>> Oh, I mean I do enormous due diligence.

53:54

So meet with the CFO multiple times,

53:56

walk through all the financials, walk

53:57

through the financial model, walk

53:58

through customers, call customers, look

54:00

at executive team, you know, it's it's a

54:02

bunch of stuff. Mhm.

54:03

>> My fund is the only one I know that

54:05

actually does like cash reconciliations

54:06

where we'll go through and do a cash

54:08

audit to look at cash flows for later

54:10

stage things. So I do enormous diligence

54:13

cuz I want to make sure I'm not doing

54:15

something inappropriate. But the flip

54:16

side of it is most of it just collapses

54:18

into like what's the one thing? Mhm. So

54:21

when I work with a company, I actually

54:22

try to be very fast and straightforward

54:25

on the diligence in terms of saying

54:26

let's just talk about, A, we need to just

54:28

make sure financials are correct and you

54:29

know like there's the basics but like

54:31

let's collapse it down into one or two

54:33

core questions right that help us

54:34

understand if this thing will keep going

54:36

not here's 30 pages of questions that

54:38

don't matter

54:40

right

54:42

which is what a lot of people they're

54:43

like hey we need to know the secondary

54:46

cohort on this [ __ ] thing that's like

54:47

a tiny product that who cares they just

54:50

waste time. They waste the founders time

54:51

and the team's time. And I try very very

54:53

hard not to do that. As a former

54:55

entrepreneur myself, I know how precious

54:56

the time is and I know how annoying

54:58

those questions are.

54:59

>> I was actually going to at one point ask

55:01

you about this, but we don't need to

55:02

spend too much time on it. You have a

55:04

post, this is from a while back, 2011,

55:06

listing questions a VC will ask a

55:08

startup. You omitted some of the

55:11

questions like the one that you just

55:12

mentioned, but I am curious if any of

55:16

these questions or additional questions

55:17

come to mind when you are talking to

55:19

founders. Could be early stage or later

55:20

stage that you actually apply yourself

55:23

and I know it's from 2011 so I'm not

55:25

expecting you to remember the post

55:27

itself. I haven't looked at that post in

55:29

a really long time. I'm actually writing

55:30

another book now that is sort of the

55:32

zero-to-one startup phase and it gets into

55:34

some questions like that.

55:35

>> Mhm.

55:36

>> I think the reality is venture capital

55:38

has changed dramatically since I wrote

55:40

that post. Right. Because in 2011

55:42

>> the venture capital funds were largely

55:44

doing seeds through series D maybe and

55:48

then companies would go public. Mhm.

55:49

>> Yeah. This whole 20-year private company

55:52

thing didn't exist. Do you know why

55:54

there's a four-year vest on stock?

55:55

>> No. Why is that? I can kind of guess now

55:58

that we're talking about IPOs, but go

55:59

ahead. Why?

56:00

>> In the 1970s, they came up with a

56:02

four-year vest on stock options for

56:03

employees because companies would go

56:05

public within four years. And so then

56:06

you're done.

56:06

>> Yeah. Yeah.

56:08

>> Literally, right? And so it's like a

56:10

four-year clock usually. And then when

56:12

Google took six years to go public,

56:14

everybody's like, "Oh my gosh, it took

56:15

them so long to go public. Six years

56:19

like they just sat on their hands. Do

56:20

you know what I mean?

56:21

>> Literally people would say that, right?

56:23

>> And so what happened is venture capital

56:25

used to be very early stage and then

56:27

what we now call growth investing was

56:29

public market investing, right? That was

56:31

a stage that

56:33

>> people in the public markets would do

56:34

after four or five years of a company's

56:36

life. And so public markets used to be

56:37

involved very early. And then as Sarbanes-

56:40

Oxley came out and companies decided

56:42

they didn't want to go public and

56:43

there's more private capital available,

56:44

the timeline until going public

56:46

stretched out, right? And so suddenly

56:48

venture capital firms are doing all the

56:50

growth investing that used to be public

56:52

market investing.

56:53

>> Mhm.

56:54

>> And in 2011 that really wasn't happening

56:56

much. It was kind of Yuri Milner from

56:58

DST and a few other folks, but it wasn't

57:00

that much of an industry. And so the

57:02

nature of venture capital has shifted

57:03

radically over the last 15 years. And

57:05

that means those questions that I listed

57:08

there didn't include what I'd consider

57:10

more growth centric questions because

57:12

there wasn't a lot of growth investing

57:13

in venture.

57:14

>> What would be examples of growth centric

57:16

questions?

57:16

>> Honestly, it would overlap with some of

57:18

the early stages. You know, by the time

57:19

you hit a very late stage, it's very

57:20

financially driven.

57:21

>> Mhm.

57:22

>> And so often what at least I and my team

57:24

look at is what is just the core

57:26

business and how do we extrapolate that

57:27

going and then what are these ancillary

57:30

things that the company's doing that are

57:31

almost like options in the future that

57:32

may or may not come through. And so

57:33

usually we base our investment on that

57:37

core. Can they just keep doing the thing

57:39

they're doing forever? Cuz most

57:40

companies mainly get big off of one

57:42

thing at least for the first decade,

57:43

right?

57:44

>> Yeah.

57:44

>> There's very few companies that end up

57:46

with multiple things that all work

57:47

usually it's one thing and then 10 years

57:48

later you maybe come up with the second

57:49

thing that really works, right?

57:51

>> Mhm.

57:51

>> It's like Google Cloud for Google,

57:53

although obviously there's YouTube and

57:54

there's a bunch of other stuff and Waymo

57:56

and all these interesting things now,

57:57

but it took a while, right? For a long

57:59

time it was just search and ads.

58:00

>> Mhm. But then sometimes there are these

58:02

extra things that are potentially really

58:04

interesting drivers on a business. Like

58:06

SpaceX was launch and then it became

58:08

satellite, right? It became Starlink.

58:10

>> Yeah, man. Starlink, what a thing. It's

58:14

too bad I have so much tree cover here.

58:17

Can't use it anywhere I spend time. But

58:19

let's turn to the high growth handbook

58:21

for a second. So that that was let's

58:23

just call it 7ish years ago. It is an

58:26

outstanding book. People should really

58:27

check it out, especially if you're

58:29

playing in the venture-backed game. What's

58:30

the subtitle? The subtitle is scaling

58:33

startups from 10 to 10,000 people.

58:36

There's a lot of good advice in this

58:38

book. I wanted to ask you if there's

58:41

anything in this book that you wish

58:44

startup founders, who the book was intended

58:45

for, would pay more attention to or if

58:48

there's anything that you would add or

58:50

expand to the book. So, when I wrote the

58:53

book, I had an outline for it that was

58:55

two, three times the length of the

58:56

actual book in terms of chapters. So

58:59

there's a lot of stuff I didn't write

59:00

about sales and marketing and growth and

59:02

a bunch of other other stuff. But the

59:04

book was basically written as sort of

59:06

like a tactical guide. It wasn't meant

59:07

to be read from start to finish.

59:09

There's a bunch of interviews with

59:10

different people who I think are amongst

59:11

the best practitioners in the world at

59:13

those areas. But fundamentally it was

59:15

meant to be more like you're suddenly

59:16

involved with M&A, jump to the

59:18

chapter and read that and then put it

59:19

aside until something else comes up

59:20

around hiring that you need to look at

59:22

or whatever. And so it really is

59:23

meant to be like a handbook or guide or

59:26

companion to a founder versus, hey, I'm

59:28

just going to read it start to finish

59:29

and there'll be some pithy quotes in it

59:31

or whatever or one concept over 500

59:34

pages. You know, I try to avoid stuff

59:35

like that. It's very tactical. It's very

59:38

tangible. It's very specific. And this

59:41

new book that I'm working on is

59:43

basically the zero to one version of

59:45

that.

59:45

>> Mhm.

59:46

>> It's like how do you hire your first

59:47

five employees as a startup? If

59:48

somebody tries to buy you, what do you

59:50

do? How do you raise your first round of

59:52

funding? You know, it's that kind of

59:53

stuff. So, it's kind of like the zero-

59:55

to-one technical guide.

59:56

>> Let me ask you about one specific

59:58

section. I think this is chapter two.

60:00

This is on boards.

60:02

And if this is getting too in the weeds,

60:04

tell me. We can hop to something else.

60:05

But I am curious if you could talk about

60:08

there are two things. Take a better

60:10

board member over a slightly higher

60:11

valuation. And if you want to revise

60:12

these, that's fine, too. There are two

60:14

things I'd love to hear you talk about

60:15

just because this is something that you

60:18

know founders I've been involved with

60:20

bump up against constantly take a better

60:23

board member over a slightly higher

60:24

valuation and then write a board member

60:26

job spec, and do it specifically for

60:29

independents, maybe. I would love to

60:31

hear you

60:32

maybe just elaborate but could you speak

60:34

to either or both of those a bit and if

60:37

you want to take it a different

60:37

direction I mean it's really just boards

60:39

writ large

60:40

>> so I think when founders pull together

60:42

boards, often the early boards are

60:44

investors because the investors ask for

60:46

a board seat as part of it or as part of

60:48

the investment and sometimes the

60:50

founders want somebody on board who's

60:51

really committed to the company and will

60:53

help out extra. And to some extent when

60:54

somebody takes a board seat it really

60:55

means or it should mean that they're all

60:57

in to help you versus you can have lots

60:59

and lots of investors, but you have very

61:00

few board members. Reid Hoffman has this

61:02

thing which is like a board member at

61:05

its best is like a co-founder that you

61:06

wouldn't be able to hire otherwise and

61:08

so you bring them onto your board. It's

61:10

somebody that you want to spend more

61:11

time with on specific issues related to

61:13

the company.

61:14

>> Mhm.

61:14

>> Fundamentally, your board should be able

61:15

to help with different areas of the

61:16

company. It could be strategic

61:18

direction. It could be closing

61:19

candidates. It could be product areas.

61:21

It could be customer intros. It could be

61:22

a variety of things. And usually, you

61:25

want to kind of think of your board

61:26

members as a portfolio of people. It's

61:28

going to change between an early stage

61:30

company and a late stage and a public

61:32

one. You just need different types of

61:33

people over time usually.

61:36

But most companies are very reactive on

61:38

their board versus proactive.

61:40

>> And so they tend to end up with a couple

61:42

investors and then they kind of add

61:43

somebody from an industry seat and they

61:45

don't really think through like who they

61:46

want and why. And

61:48

>> if your co-founder is kind of like your

61:50

spouse, your work spouse, your work

61:52

husband or your work wife, your board

61:54

members are like your in-laws. You know,

61:57

you have to see them at Thanksgiving and

61:59

you have to chat with them all the time.

62:01

And so hopefully you have somebody you

62:03

want to see all the time and who's

62:04

helpful and wonderful. And the bad

62:06

version is like, ugh, it's like the

62:08

father-in-law or mother-in-law who's

62:09

always like berating you or whatever.

62:10

And so you kind of need to find the

62:12

right person. And it's for many many

62:15

years, right? You end up sometimes with

62:16

people on your board for a decade. And

62:17

if they're an investor, you can't get

62:18

rid of them. You literally can't fire

62:20

this person

62:21

>> because they have a contractual ability

62:23

to be on your board because of the

62:24

investment.

62:25

So that's why it's really important to

62:27

figure out the right person. And that's

62:29

back to valuation. Sometimes founders

62:31

will take a better price from a worse

62:33

person because it's a better price. And

62:36

our mutual friend Naval has this great

62:38

quote that valuation is temporary but

62:40

control is forever.

62:42

>> Yeah.

62:42

>> Very Naval.

62:44

>> Very Naval.

62:45

>> And I think that's very true. And so if

62:47

you're choosing a board member and part

62:48

of that is a control thing. People who

62:50

control the board can in some cases fire

62:52

the CEO. You really want to choose the

62:54

right people and maybe take a worse

62:55

price for somebody who's really going to

62:57

be helpful and they're minimally

62:59

non-destructive, and who you hope to have

63:02

around for 10 years. Any other books or

63:04

resources for people outside of the high

63:08

growth handbook who specifically want to

63:10

learn about boards, recruiting,

63:12

incentivizing

63:14

the co-founders that you couldn't hire

63:17

to join the board, etc., etc. any

63:19

particular approach you would take there

63:20

if they wanted to get more conversant?

63:23

>> I don't have anything super useful

63:24

there. I think the best thing is to call

63:26

other founders, other people who've

63:28

added people to their board and see how

63:30

they approached it. I do think writing

63:31

up a job spec, you write a job spec for

63:33

everything else in your company. Why

63:34

wouldn't you write one for a board

63:35

member?

63:36

>> Mhm.

63:36

>> So, it's good to write that up and say,

63:38

what am I actually looking for and why

63:39

and what am I optimizing for? So,

63:41

there's a common view of that. You can

63:43

use search firms, you can ask people,

63:44

you can target people that you know, you

63:47

know, if you have angel investors,

63:48

getting to know them is a great way to

63:50

see if you want to add one of them

63:51

eventually to your board.

63:53

>> That's what we did. We eventually added

63:55

Sue Wagner, who was a co-founder of

63:56

BlackRock,

63:58

>> onto our board. Her other board seats

64:00

were Apple, BlackRock, and Swiss Re when

64:03

she joined our board, but I just got to

64:04

know her through just like she invested

64:06

and we just started working together and

64:08

really enjoyed her feedback and insights

64:10

and so we added her to the board there.

64:12

So it's kind of like that you you kind

64:14

of want to maybe get to know some

64:15

people.

64:16

>> Next I want to come to our we were

64:18

joking earlier about the in some case

64:20

sort of revisionist history

64:23

genesis stories.

64:25

>> So I'm looking at this is from 2018.

64:28

This is a while back. This is on Y

64:31

Combinator's blog and you're being

64:33

interviewed about the high growth

64:34

handbook. But the sort of end of this

64:36

piece that I'm looking at says these

64:38

stories are never told. People always

64:40

say, "Oh, these things just grew

64:41

organically and isn't it amazing?" But

64:44

almost every company that ended up tens

64:46

of billions or hundreds of billions in

64:47

market cap did this, which is taking an

64:49

aggressive approach to distribution.

64:52

>> Whether that's Google and the Firefox

64:54

story or Facebook running ads against

64:57

people's names in Europe. I just wanted

64:59

to hear you tell some of these stories

65:01

because it is the stuff that kind of

65:02

conveniently gets left out of TED

65:04

talks later. Do you know what I mean?

65:07

>> Yeah. Yeah. I mean actually the origin

65:08

stories for founders is always like ever

65:11

since Sarah was three years old she

65:14

dreamed of starting an accounting

65:15

software firm you know like come on you

65:17

know what I mean

65:19

and so a lot of the stories that are

65:21

told about founders are very revisionist

65:23

and

65:23

>> they make it the life's passion of this

65:25

you know and sometimes it really is but

65:27

you're like no when they were five they

65:29

did not you know collect things and then

65:33

that turned into Pinterest 30 years

65:35

later or whatever they always dreamed

65:37

dreamed of building AGI when they were

65:39

four and that's why Sam started OpenAI

65:42

or whatever.

65:43

>> So I think a lot of these things are

65:44

very kind of ridiculous in terms of how

65:46

they're written later. And I think the

65:49

product really really matters and I

65:50

think sometimes great product just wins

65:53

and the reason great product just wins

65:54

is it opens up a form of distribution

65:56

that didn't exist before or people will

65:57

buy it despite the lack of distribution

65:59

or relationships for a company.

66:01

>> Mhm. And the flip side of it is though

66:03

the companies that are really good have

66:05

an enormously good product engine and

66:08

then they have an amazing distribution

66:10

engine and sometimes that distribution

66:12

engine is built into the product that's

66:13

like Cursor or Windsurf just

66:15

distributing through product-led growth

66:17

where developers just find it and start

66:18

using it and it helps them and so they

66:20

tell other developers and it spreads

66:21

word of mouth but often there's very

66:25

aggressive sales marketing other

66:27

components to it

66:28

>> and so for example when I was at Google

66:30

they were spending hundreds of

66:31

millions of dollars a year, which at the

66:33

time was real money, on distributing

66:35

search. And they had this little thing

66:37

called the toolbar that would like fit

66:39

into a browser cuz right now browsers

66:40

like with Chrome, you type in words or

66:43

whatever, and then it instantly searches

66:44

it. Back then the main browsers were

66:46

like Netscape and Internet Explorer,

66:48

etc., and the browser bar thing didn't

66:51

exist. And they had this little client

66:52

app that you'd install, and they paid

66:54

basically every company on the internet

66:56

to cross download it. Mhm.

66:58

>> In other words, you're installing Adobe,

67:00

you're installing some malware detector

67:02

thing, and it would always download

67:03

the toolbar because they got paid to

67:05

distribute it, right?

67:06

>> So, very aggressive distribution

67:08

tactics. And to your point, that was

67:10

Facebook, buying ads against

67:12

people's names in Europe.

67:14

>> Can you explain that? What are they

67:15

doing? What was their endgame?

67:17

>> They're basically trying to create

67:18

network liquidity in markets where they

67:20

were behind earlier. And so, they would

67:23

basically buy ads of literally a

67:25

person's name. And one of the most

67:26

common queries is people searching

67:28

themselves. And so you'd be like, "Oh,

67:30

let me look up Tim Ferriss on Google or

67:32

whatever." And there'd be a Facebook ad

67:33

saying, "Hey, Tim Ferrris on Facebook."

67:35

And you'd click and you land on the

67:36

signup flow for Facebook. Right? This

67:38

was years ago. This was TikTok and

67:41

ByteDance, right? It was basically they

67:42

spent billions of dollars

67:45

distributing TikTok so they could build

67:46

enough of a network to train AI

67:48

algorithms to start telling people what

67:49

to do and also to get content creators

67:51

on. Where did they spend that money on

67:53

distribution? In this case of say Tik

67:55

Tok,

67:56

>> My guess is it's ads. Again,

67:57

>> yeah,

67:58

>> you kind of see this over and over

67:59

again. I mean, for enterprise, Snowflake

68:00

spent billions of dollars on salespeople

68:02

and compensation and channel

68:03

partnerships.

68:05

So, again, like distribution is really

68:06

important.

68:07

>> Mhm.

68:08

>> Every once in a while, you see a company

68:09

that actually wins not because of

68:10

product, but because they're just better

68:12

at sales and marketing and distribution.

68:14

And often that's a bummer for

68:15

technologists such as myself because

68:17

you're like, you know, the best product

68:18

should always win. Mhm.

68:19

>> Sometimes it does, but sometimes it's

68:20

just who was early and developed a brand

68:22

or who got ahead on distribution. You

68:24

know,

68:24

>> I'm looking at a piece in front of me.

68:26

This is from a while ago, but it's you

68:31

discussing long-held dogma that ends up

68:35

being unviable. So, for instance, the

68:36

common held belief after PayPal's sale

68:39

to eBay that fraud will kill you in the

68:40

payment space, right?

68:41

>> Yeah. And I'm wondering how you orient

68:44

yourself as an investor to

68:47

stress test those types of dogma. It's

68:50

really hard because often you

68:54

start off with some set of beliefs. You

68:56

think something's interesting or maybe

68:58

you invest in it, maybe you start a

68:59

company in it, and then it turns out the

69:02

thing you think is really interesting

69:03

turns out to be really hard and you get

69:04

killed and then 5 years later a company

69:06

comes up that actually does it and wins.

69:08

>> Mhm.

69:10

And the question is why? Why did the

69:13

thing suddenly work when it didn't

69:14

before? Or there's 10 attempts to do X

69:17

and then suddenly one works. Is it that the technology

69:19

got good enough? It could be a

69:21

regulatory change. It could be a market

69:22

shift. It could be whatever. An example

69:24

of that may be Harvey and legal, where

69:26

selling to law firms traditionally has

69:27

been awful and Harvey is now much

69:29

broader than that, right? They also had

69:31

very strong enterprise adoption and lots

69:34

of different people using them in

69:35

different ways. But the dogma was always

69:36

like building stuff for law firms is

69:38

crappy as a business and you should

69:39

never do it. But what AI did is it

69:41

shifted things from selling tools to

69:43

selling work product or selling units of

69:45

labor. That's really the shift in

69:47

generative AI. We're going from seats

69:50

and we're going from software and SaaS

69:52

and we're moving into a world where

69:54

we're selling human labor equivalents.

69:56

We're selling work hours or labor hours

69:58

or whatever you want to call it

70:00

>> of cognition. And so Harvey is

70:02

effectively helping really augment

70:03

lawyers in different ways. And part of

70:06

that's a knowledge corpus, but a lot of

70:07

it is this tooling that really helps

70:09

lawyers achieve the goals that they have

70:10

in different ways in a collaborative

70:12

manner in some cases. And so it's just a

70:14

fundamentally different type of product

70:15

from what people were selling before.

70:16

And so it opened up the market in a way

70:18

that the market wasn't open before.

70:20

There's actually a broader conversation

70:21

around is the world market limited or

70:24

founder limited in terms of

70:26

entrepreneurial success. The Y Combinator

70:28

school of thought is that we just don't

70:30

have enough founders and if we had 10

70:31

times as many founders, we'd have 10

70:32

times as many big companies. And there's

70:35

an alternate school of thought which is

70:36

how many markets are actually open in

70:38

any given moment in time. And those are

70:40

the ones where you can build big

70:41

companies because if the market isn't

70:42

open to innovation or change or whatever

70:44

or isn't undergoing a shift, you

70:47

can't really build anything. So why do

70:49

it? And the striking thing about AI is

70:51

it's opened up tons and tons of markets

70:53

that were closed for a long time. And

70:56

it's opened it up because of

70:57

capabilities, but it's also opened it up

70:59

because every CEO is asking themselves,

71:00

"What's my AI story?"

71:02

>> And there's way more openness to try

71:04

things than I've ever seen in my life.

71:06

And so we have this odd moment in time

71:08

where things are massively available for

71:10

founders to do new things.

71:12

>> And if you're an AI company and you're

71:14

not seeing explosive growth quickly,

71:16

something's fundamentally broken because

71:18

the markets are so open

71:20

that you can suddenly grow at a rate

71:22

that you've never grown before. Mhm.

71:23

>> There's always been cases of companies

71:25

that just go like this, but again, you

71:27

look at the ramps of OpenAI and Anthropic and

71:29

it's the fastest ramps to tens of

71:30

billions ever, percentages of GDP. It's

71:32

like crazy. If we come back to your

71:35

comment of not necessarily market first

71:38

and strength of team second all the

71:40

time, but like you said, you 90% agree

71:42

with that, right? And if you have an

71:45

excellent team and a terrible market,

71:46

like that's going to be that's going to

71:48

be a difficult one to execute. How do

71:51

you determine what is a good versus

71:53

great market or just what is a great

71:55

market? What do you look for? And the

71:57

example you gave, I might be overreading

72:00

this, but when you said that when Google

72:02

shut down, I think it was Maven, right?

72:04

That's an interesting kind of

72:06

event-based approach as an input to

72:09

investing, right? Cuz you're like, okay,

72:10

if they're not going to build it,

72:13

>> that suddenly creates

72:16

a playing field for startups.

72:19

>> Yeah. to play in that space. So could

72:21

you speak to more of how you determine

72:24

or look for great markets?

72:25

>> I mean there's a few different ways to

72:26

think about it. One is like some people

72:28

take the framework of why now. What's

72:30

shifted now that makes it suddenly an

72:32

interesting market because people have

72:33

been trying to do things for a long time

72:34

in every market. And so that may be a

72:36

regulatory shift, right? Samsara, the

72:38

fleet management company benefited from

72:39

the fact that suddenly there's

72:40

regulation around needing in-cab

72:42

monitoring of drivers. So you had

72:44

suddenly cameras watching people so they

72:45

don't fall asleep while they're driving

72:46

trucks on the road. Right.

72:47

>> Mhm. And so that was another entry point

72:49

to start building out a suite of

72:50

software. But it was a regulatory shift.

72:52

Sometimes there's technology shifts like

72:54

what's happening in AI. And the crazy

72:57

thing about the AI shift is the

72:59

foundation models instantly plugged into

73:02

a massive set of markets which is

73:04

basically all enterprise data and

73:06

information and email and just all white-

73:09

collar work was suddenly available to AI

73:11

because it was the perfect technology

73:12

for that. It also plugged into code

73:14

which is a type of white collar work. So

73:16

it's just suddenly it just inserts into

73:17

language and language is used everywhere

73:19

in enterprises as well as in consumer

73:21

and so there's just a massive market to

73:22

tap into and transform or set of

73:24

markets. Robotics is a little bit

73:25

different from that because even if you

73:26

had the world's best robotic model the

73:28

submarkets that already have robotic

73:30

hardware are quite small on a relative

73:31

basis and so you don't have that instant

73:34

runway that you would with language

73:37

unless you come up with something new

73:39

there. That's kind of an aside but I

73:40

think robotics is really interesting and

73:42

will be important. And it's more just

73:43

that nuance of like what's that instant

73:44

thing you plug into commercially. And

73:46

then there's regulatory shifts, there's

73:48

technology shifts, there's incumbency or

73:51

company shifts, competitive shifts. A

73:53

company may blow itself up. It may get

73:55

bought by a competitor. One company I'm

73:57

excited about on the security side is

73:59

called Infisical and they're basically

74:00

competing in part with HashiCorp. HashiCorp got

74:02

bought by IBM. Anytime you get bought by

74:04

IBM, you slow down a lot

74:06

usually.

74:06

>> Mhm.

74:07

>> Suddenly it creates more opportunity for

74:08

a startup. So, I just feel like there

74:10

are these different things that can

74:11

change at a given moment in time.

74:13

>> It could be the market's growing really

74:15

fast. That's Coinbase and crypto, right?

74:16

You just have suddenly this adoption and

74:18

proliferation of token types. There's

74:20

lots and lots and lots of different

74:21

markets that are interesting. The

74:22

commonality is usually like, is it also

74:24

big? Is there a big enough TAM? And

74:26

there's two types of TAMs. There's fake

74:27

TAM.

74:28

>> Just for people listening who might not

74:29

have it, yeah, total addressable market.

74:31

>> Total addressable market. So, what's a

74:32

market you're in? And sometimes people

74:34

come up with these fake markets. They're

74:35

like, "Oh, well,

74:37

we are facilitating

74:40

global e-commerce and global e-commerce,

74:42

I'm making up the numbers, $30 trillion

74:44

a year, and so we're in a $30 trillion a

74:46

year market." And if we get just a tenth

74:47

of a percent of that, that's $30 billion of

74:49

revenue, you're like, "That's not that's

74:51

not your market. Your market is like you

74:53

built this little optimization engine

74:55

for SMB websites or whatever. That's not

74:58

a $30 trillion market." And so really,

75:01

it's kind of defining the market.
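
The fake-TAM arithmetic is easy to sanity-check. The numbers below are the made-up ones from the conversation, not real market data.

```python
# Sketch: why "a tenth of a percent of global e-commerce" is not a market.
global_ecommerce = 30e12   # "$30 trillion a year," invented for the example
capture = 0.001            # "just a tenth of a percent"
print(f"${global_ecommerce * capture / 1e9:.0f}B of implied revenue")  # -> $30B
```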

75:02

There's a really famous example of this

75:05

where defining your market changes how

75:06

you think about it. And so that was

75:08

Coca-Cola, right? So Coke and Pepsi were

75:10

roughly neck and neck in terms of market

75:12

share for decades.

75:15

And then one of the Coke CEOs said,

75:17

"Hey, maybe we should be thinking about

75:20

our share of all liquids sold, like

75:24

drinks, not share of soda." And so we

75:27

just went from 50% market share to 5%.

75:30

And that's why they bought Dani and

75:32

that's why they entered all these other

75:33

markets, right? Because they said our

75:36

definition of our market is wrong.

75:38

>> We're not in the soda pop business.

75:39

We're in the drinks business. And so I

75:40

think also sometimes reconceptualizing

75:41

what you're doing can really help change

75:43

your scope of ambition or how you think

75:45

about what you're doing. If you're

75:47

trying to spot

75:49

along the lines of the "fraud will kill you in

75:51

the payment space" example, any dogma in the AI

75:56

world, the sphere of AI, right?

75:59

anything anything hop to mind where you

76:01

think uh maybe that's not true now or

76:05

maybe in like 2 years it'll be

76:07

completely untrue but people will have

76:08

latched on to this belief as one of the

76:12

thou shalt not or thou shalt

76:15

commandments. I mean, there's

76:17

some things that have circulated in the

76:18

past around what's the ROI on the capex

76:20

spend, will it ever be paid back,

76:22

but I just think that stuff

76:24

is probably off but yeah I think

76:26

fundamentally there are moments in time

76:28

where it's very smart to be contrarian

76:31

>> and there are moments in time where

76:32

being consensus is the smartest possible

76:34

thing you can do and I think right now

76:36

we're in a moment in time where being

76:37

consensus is very right and you can

76:40

really overthink it and what's a

76:42

contrarian thing we should go do a bunch

76:43

of hardware stuff cuz blah blah blah you

76:46

maybe just buy more AI, you know what I

76:47

mean? I think people make these things

76:48

way too complicated.

76:50

>> Uh yeah, true. In every aspect of life,

76:53

probably. Let's just say you were

76:56

mentoring. This is somebody you really

76:57

care about, right? We can make up an

77:00

avatar, whatever. Like

77:03

nephew of one of your best friends or

77:05

son of one of your best friends or

77:06

daughter who's really smart, got an

77:09

engineering degree, came out of MIT, has

77:12

a couple of hits in angel investing, and

77:15

they're like, "All right, I think I'm

77:16

going to raise a fund."

77:18

>> They don't have the access necessarily

77:22

that you do to AI, let's just say. Are

77:25

there any things categorically you would

77:29

say would be on the do not invest list

77:32

because they're likely to be annihilated

77:35

or consumed or replicated by AI. I think

77:39

the reality is that when people start

77:40

off as investors a lot of the times the

77:43

reason they have early stage funds is

77:44

because you can always get access to the

77:46

earliest stages of companies if you just

77:48

start helping people.

77:49

>> I mean that's kind of what I did

77:50

accidentally but the reality is I've

77:51

seen it over and over. You fall in

77:54

with the right group of people because

77:56

the smartest people all self-aggregate

77:57

together and you just start helping

77:59

people out and they just ask if you want

78:00

to invest and you start investing and

78:02

suddenly you have a great track record

78:03

and you raise bigger funds and then you

78:04

go later stage cuz that same cohort has

78:07

grown up and they've started doing later

78:09

stuff and

78:09

>> Mhm.

78:10

>> when suddenly you can get access to

78:11

everything else. That's kind of the

78:13

traditional venture story and it has

78:14

been I think for decades in some sense.

78:16

So I think that's still very tenable and

78:19

you can still do it for AI, you can do

78:20

it for anything. I don't think you have

78:21

to go off and do like energy investing

78:23

or something.

78:24

>> You have mentioned in the past a key

78:27

learning maybe that's an overstatement

78:29

but you can correct me, from Vinod Khosla

78:32

and I think the wording is along the

78:34

lines of your market entry strategy is

78:36

often different from your market

78:38

disruption strategy. Yeah.

78:40

>> Could you speak to that? There's sort of

78:42

two or three versions of this. Version

78:44

one is you do something that's really

78:45

weird and it starts off looking like a

78:47

toy and then it turns out to be really

78:48

important and that would be Instagram or

78:50

Twitter or some of these more social

78:51

products, right? Where the initial use

78:53

case is very different from how it's

78:54

used today and it kind of evolved as a

78:56

product and how people perceive it and

78:58

use it and that's one version of it and

79:00

that's usually more consumercentric.

79:01

Another version of that would be SpaceX

79:02

and Starlink where they started off with

79:04

launch and getting things up into space

79:05

and they realized hey they have a cost

79:06

advantage for satellites and then they

79:08

built out the Starlink network which is

79:10

now like a major driver of their

79:11

business, right? And so what they did

79:13

expanded a lot and kind of shifted in

79:15

terms of their market entry with space

79:17

launch, their disruption is Starlink in

79:19

some sense. So I do think there's lots

79:21

of examples like that over time.

79:23

>> Coming back to information and just

79:27

consumption,

79:29

how do you consume most of your

79:31

information? like what would the pie

79:33

chart break down to in terms of

79:37

listening to podcasts versus books versus

79:40

X versus white papers versus something

79:43

else. I think a lot of what I've done

79:45

has collapsed into three things. It's X.

79:48

It's reading some technical

79:50

papers/journals in some cases if it's

79:52

more the biology side. Although I don't

79:53

do biology investing, I just like it.

79:56

But you know papers, although the papers

79:57

in the AI industry have really dropped

79:59

off given the competitive nature of

80:00

everything now.

80:01

>> Mhm. And then talking to people. I found

80:04

that like 20 minutes with somebody

80:05

really smart on a topic gives me more

80:08

information and insights and leads on

80:09

what to go read about than doing some

80:12

exhaustive search. Actually, the fourth

80:13

thing is now using models to do research

80:15

for me.

80:16

>> Mhm.

80:16

>> That could be OpenAI, that could be Claude,

80:18

that could be Gemini. And

80:19

for each of them, I actually use

80:21

different things or I do different

80:22

things with each of them.

80:23

>> What do you do with the different

80:24

models?

80:25

>> I'll just give you one example versus go

80:27

through every single one of them. But

80:28

>> sure,

80:29

>> Gemini, I actually feel like if I'm

80:31

looking up more like activities, like,

80:33

hey, I'm planning a trip somewhere, I

80:35

actually feel like the Google Corpus and

80:37

all the stuff they built over time is

80:38

quite useful for like travel tips of

80:40

certain types.

80:41

>> And so that'd be a Gemini specific

80:42

thing. That doesn't mean the other

80:44

models can't do it well. It's more just

80:45

like I've tended to get more accurate

80:47

like rankings of things that way and it

80:49

allows for like breakdowns and

80:52

>> rankings across multiple dimensions and

80:54

all the stuff for scoring of things. I

80:57

did like a deep dive on a few different

80:58

areas of ADHD and ASD.

81:00

>> What's ASD?

81:01

>> Oh, I'm sorry. It's autism spectrum.

81:03

>> I see. I got it.

81:04

>> So, basically, like if you look at

81:05

autism, it went from I'm going to

81:08

misquote the numbers, so you know, I

81:09

should look these up later, but I think

81:10

it's something like one in a few

81:12

thousand of the population was diagnosed

81:14

with autism like 30 years ago, 40 years

81:16

ago, and now it's like 3%.

81:18

>> Mhm.

81:19

>> So, you're like, well, what is that? Is

81:20

that a change in older parents having

81:23

more kids, which it turns out that

81:24

that's not the driver? Is it some shift

81:28

in the environment? Is it? It turns out

81:29

it's just diagnostic criteria shifted.

81:31

Yeah.

81:31

>> And there's a lot of incentives to

81:32

actually diagnose people in the schools.

81:34

That's roughly the summary of why we

81:35

have so many kids that are classified as

81:37

either having attention deficit where

81:40

there's also like a financial incentive

81:41

for doctors to do it because they can

81:42

prescribe drugs.

81:43

>> Mhm.

81:44

>> Versus autism. But both have gone up

81:45

dramatically in terms of diagnoses.

81:47

Right. And

81:49

>> it's unclear to me that more people

81:50

actually have it.

81:51

>> It's just diagnosed dramatically more

81:53

broadly. Which model were you

81:54

investigating that with?

81:56

>> Usually when I do things like that, I

81:57

use two or three models at once and then

81:58

I ask for primary literature and then

82:00

ask for summary charts and I actually

82:01

have this whole breakdown of like stuff

82:02

that I ask for it to output so that I

82:04

can go back and double check the data

82:06

>> and then reread through the literature

82:08

and everything else. And there's really

82:09

interesting things that came out of the

82:11

autism one in particular because it

82:13

turned out maternal age actually has a

82:14

bigger impact than paternal age

82:16

>> in some of the studies. And people

82:18

always talk about paternal age.

82:19

>> Mhm.

82:20

>> And then you're like, why are people

82:21

only talking about paternal age? Is

82:22

there a societal incentive for that? Is

82:24

it a political belief system? Like why

82:26

is that the point of emphasis?

82:28

>> Which I thought was really interesting.

82:29

Right.

82:30

>> So there's other things that kind of

82:32

come out of that in terms of questions

82:33

in terms of the why of things.

82:35

>> But why were you looking into that

82:37

specifically?

82:38

>> I thought it was interesting.

82:39

>> Yeah. Okay.

82:40

>> Seems like it's gone up a lot. Let me

82:42

try and understand why.

82:43

>> Mhm.

82:44

>> And so I started looking into it.

82:45

>> Mhm. I was also talking to a friend of

82:48

mine who is in her sort of mid to late

82:51

30s and she was dating a guy who was in

82:54

his late 40s, early 50s and she brought

82:57

up oh she was worried about autism and

83:01

what would happen with them if they had

83:03

kids and all this stuff. And so then I

83:05

did this deep dive as part of that too.

83:07

>> Mhm.

83:08

>> And the takeaway was I can't remember

83:10

exactly what it was. I'm making it up so

83:12

please don't quote me on this. I can

83:13

look it up later, but it was like

83:14

there's a 10% increase for every 5 to 10

83:17

years incremental paternal and maternal

83:19

age. And again, maternal was actually a

83:20

little bit stronger in some of the data

83:22

sets. And the thing is though, if you

83:24

believe that it's one in 5,000 or one in

83:27

whatever in the population, that 10% to 20%

83:30

difference doesn't matter.

83:31

>> Mhm.

83:31

>> Right. From a population frequency

83:33

perspective, it's just that the diagnostic criteria

83:34

went way up.
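
To make that arithmetic concrete, here is a minimal sketch in Python using the illustrative figures quoted in the conversation (the one-in-5,000 baseline, the 10 to 20% bump per increment of parental age, and the roughly 3% modern rate are the speakers' rough recollections, not verified statistics):

```python
# Back-of-the-envelope check of the relative-vs-absolute risk point,
# using the conversation's illustrative numbers (not verified data).

baseline = 1 / 5000          # rough historical prevalence quoted above
relative_increase = 0.20     # ~10-20% per extra 5-10 years of parental age
diagnosed_today = 0.03       # ~3% prevalence under current criteria

with_age_effect = baseline * (1 + relative_increase)

print(f"historical baseline:    {baseline:.4%}")         # 0.0200%
print(f"with +20% age effect:   {with_age_effect:.4%}")  # 0.0240%
print(f"current diagnosis rate: {diagnosed_today:.2%}")  # 3.00%

# The parental-age effect shifts prevalence by ~0.004 percentage points,
# while the move to today's rate is ~2.98 points: several hundred
# times larger, which is the point being made above.
```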

83:35

>> That's true for a lot of

83:38

diagnoses, a lot of stuff, but as a

83:40

society we're told, oh, it's like the

83:42

age of the parents that's driving all

83:44

these autism rates up. And you're like,

83:46

no, it's like all these incentives. And

83:47

then you look at some of the school

83:48

systems, it's like 60% of all the autism

83:51

diagnoses, I think it was in the state

83:52

of New Jersey or something, were not

83:56

actually based on any clinical criteria.

83:58

It's just a teacher randomly saying,

83:59

"This person has autism."

84:00

>> Oh god, terrible,

84:02

>> right? And so you start digging into

84:04

these things and you're like, "Wow, this

84:05

is super interesting and these models

84:07

are really valuable and helpful for

84:08

that." So, I've been doing a lot of back

84:10

to your question of where do I get

84:11

information? Part of it has been these

84:12

deep dives with models into like

84:14

questions that I just find interesting

84:15

where I ask them to aggregate clinical

84:17

trial data or aggregate different types

84:19

of information and they give me the

84:20

primary sources and then give me

84:22

summaries and double check things. And

84:24

so I have like a whole series of prompts

84:26

around that to kind of also clean data

84:28

and check it. And that's really fun. And

84:30

then I always set it up in multiple

84:32

models and just see like what they each

84:33

come up with
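
For readers who want to reproduce something like that workflow, here is a minimal sketch, with the caveat that ask(model, prompt) is a hypothetical helper standing in for whatever API clients you actually use, and the model names are placeholders rather than any vendor's SDK. It fans one research question out to several models, asks each for its primary sources, and flags sources cited by only one model for extra scrutiny:

```python
# Minimal sketch of the fan-out-and-cross-check workflow described above.
# ask(model, prompt) is a hypothetical helper standing in for whichever
# API clients you actually use; it should return the model's text reply.

RESEARCH_PROMPT = (
    "Question: {question}\n"
    "List the primary sources you relied on, one per line prefixed "
    "'SOURCE:', then give a short summary table of the findings."
)

def ask(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this up to your API client of choice")

def extract_sources(answer: str) -> set[str]:
    # Pull out the 'SOURCE:' lines so answers can be compared across models.
    sources = set()
    for line in answer.splitlines():
        line = line.strip()
        if line.startswith("SOURCE:"):
            sources.add(line.removeprefix("SOURCE:").strip())
    return sources

def cross_check(question: str, models: list[str]) -> None:
    prompt = RESEARCH_PROMPT.format(question=question)
    cited = {m: extract_sources(ask(m, prompt)) for m in models}

    agreed = set.intersection(*cited.values())
    print("Cited by every model (read these first):")
    for src in sorted(agreed):
        print("  ", src)

    for model, sources in cited.items():
        others = set().union(*(s for m, s in cited.items() if m != model))
        unique = sources - others
        if unique:
            print(f"Cited only by {model} (double-check by hand):")
            for src in sorted(unique):
                print("  ", src)

# e.g. cross_check("What explains the rise in autism diagnoses?",
#                  ["model-a", "model-b", "model-c"])
```

Sources every model cites independently are a reasonable starting point for going back to the primary literature; sources only one model produces are the ones most worth double-checking by hand.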

84:34

>> when you talk to people. And this may be

84:37

too much of a kind of

84:40

amorphous topic for us to dive into in a

84:42

meaningful way, but let's just say you

84:45

find somebody you want to talk to for 20

84:47

minutes. How do you typically find those

84:49

people? I suspect there are a lot of

84:50

ways, but are you finding them on X

84:52

versus finding them in a technical paper

84:55

versus finding them somewhere else just

84:56

to get an idea? And then when you get on

84:59

the phone with such a person, are there

85:02

repeating trains of questioning or

85:04

certain ways that you like to approach

85:05

it? I think there's three different

85:07

types of things. One is, hey, I'm doing

85:10

a deep dive in an area just cuz I think

85:11

it's interesting or maybe it's relevant

85:13

to like an area I want to invest in.

85:14

Often, honestly, just is it interesting?

85:16

And then I'll try to quickly triangulate

85:18

who are the smartest people on the

85:19

thing. And that may be technical papers.

85:20

That may just be asking each person I

85:23

talk to who's really smart. There's one

85:25

form of that which is hey it's very

85:26

informational and I'm trying to do a

85:27

deep dive on something. I mean I work

85:29

with some of the early AI researchers at

85:30

Google. That's how I knew like Noam

85:31

Shazeer, who started Character.AI and then

85:32

went back to Google and that's how I met

85:34

a bunch of other folks. But some of the

85:35

people I just met you know just

85:38

interesting paper let me look them up or

85:40

hey everybody says this person is really

85:41

smart let me talk to them. That's one

85:42

form. A second form is I do think like

85:44

really smart people tend to aggregate

85:46

and so if you're just hanging out with

85:47

smart people you keep meeting other

85:48

smart people.

85:49

>> Mhm. And people who are polymathic tend

85:51

to hang out with people who are

85:52

polymathic and it's kind of like like

85:53

attracts like for all sorts of things.

85:55

So that's sort of a second set. Those

85:56

are probably the two main things. I mean

85:58

sometimes people also just refer people

86:00

over to me. They'll say, "Hey, I think

86:01

you two would like chatting."

86:02

>> Mhm.

86:03

>> There's a separate thing which is

86:04

there's people that I go back to

86:05

recurrently, right? Which is more like I

86:07

think this is one of the smartest people

86:09

about where AI is heading and let me

86:11

talk to them all the time. Or this is

86:14

one of the smartest people about

86:15

longevity. Like Kristen, the CEO of

86:17

BioAge, I call sometimes about random

86:19

longevity related things because she

86:21

knows so much about every topic in it.

86:23

She's very thoughtful. She's very

86:26

willing to question her own assumptions.

86:27

It's very just like truth-seeking

86:31

>> in a way that most people aren't and

86:32

people always use that term, but she

86:34

really is just like what's correct? Let

86:36

me just figure it out.

86:37

>> Mhm.

86:37

>> She's like a PhD and postdoc in

86:40

bioinformatics and aging. She's super

86:42

legit. And so that's an example of

86:43

somebody that'll call for like longevity

86:46

stuff.

86:47

>> Mhm.

86:47

>> So I just have certain people I'll call

86:49

for certain topics.

86:50

>> So you have literacy in biology. It's

86:53

kind of quaint how you know I went to

86:55

the first quantified self meetup and

86:58

whenever it was 2008 or something with

87:00

12 people sitting around in Kevin

87:02

Kelly's house talking about measuring

87:04

things with Excel spreadsheets. The

87:07

world has changed. So there are armies

87:09

of tens of thousands of self-described

87:12

biohackers and so on talking about

87:14

longevity. There's a lot of nonsense. For

87:16

yourself personally. Where have you

87:19

landed in terms of interventions or

87:22

thinking about interventions for

87:24

yourself?

87:24

>> I haven't done a ton. You know, it feels

87:27

like a lot collapses into like sleep

87:29

well, exercise a lot, you know, etc.

87:32

Like there's a handful of things that

87:33

kind of matter. Eat well.

87:35

>> And so I've kind of collapsed on that

87:36

stuff. I think there's one or two things

87:38

that maybe you can take that are helpful

87:39

and then there are some things I always

87:41

thought it'd be fun to experiment with

87:42

that I haven't done yet.

87:43

>> Like what

87:44

>> I thought it'd be cool to try like a

87:45

rapamycin pulse or something.

87:47

>> Mhm.

87:48

>> So stuff like that. But the reality is

87:50

that I'm kind of waiting for the real

87:52

drugs to come out and then maybe I'd use

87:54

those. Some of the ones that I actually

87:55

think will really impinge on longevity

87:58

or certain systems like we were talking

88:00

earlier about as you age the muscle that

88:02

holds the lens of your eye weakens and

88:03

that's part of the reason that your

88:05

ability to focus kind of gets screwed up

88:07

and so there should be eye drops for

88:09

that. Like there's a bunch of stuff

88:10

around neurosensory aging that I'd love

88:11

to fund a startup in.

88:13

>> There's a bunch of stuff around the

88:14

cosmetics of aging that I've long been

88:15

talking about trying to fund. I actually

88:16

funded a clinical trial at Stanford to

88:19

work on that for example

88:21

>> because I think it's very underinvested in

88:22

and peptides to me are basically that. I

88:25

think a lot of what people are taking

88:26

peptides for is like certain forms of health

88:28

but also certain forms of cosmetic

88:29

application, like GHK-Cu and Melanotan,

88:33

and all these things are basically

88:34

cosmetic in nature.

88:36

>> You mentioned a handful of things that

88:37

seem helpful to take. Are those just the

88:40

you know, vitamin D, or are we talking

88:42

about other things? What more is

88:43

on that short list? Vitamin D and

88:45

creatine.

88:46

>> Yeah, got it.

88:47

>> If you want to list, I don't know.

88:48

What's on your list? I mean, you've

88:49

thought about this so much more than I

88:50

have.

88:51

>> What are you taking or what are you

88:52

thinking about or

88:53

>> I'm much more conservative than I think

88:56

people would expect. You know, I've

88:57

played around with a lot of things in my

89:01

earlier days and a lot of it is very, I

89:06

would say, capped risk if you're

89:08

experimenting as I was with first

89:10

generation Dexcom continuous glucose

89:12

monitors in 2008 or 2009, right? They

89:15

were very unpleasant to wear.

89:17

>> Yeah.

89:17

>> And I wasn't aware of any non-type 1

89:19

diabetics using them at the time. But I

89:22

wasn't using much in terms of let's just

89:26

say questionable gene therapy flying to

89:29

other countries to use something like a

89:32

follistatin. Not to throw it under the

89:34

bus, but I feel like the general heuristic of

89:36

no biological free lunch. I recognize

89:38

it's very simplistic, but it's pretty

89:40

helpful. At least it will aid you in

89:42

avoiding a lot of pitfalls. Right? So I

89:46

mean there are things I'm experimenting

89:47

with different forms of ketone esters

89:50

and salts for instance I think some

89:53

could be very very interesting for

89:56

cerebral vasculature, and since I have

90:00

Alzheimer's disease Parkinson's etc in

90:03

my family including for people who are

90:05

ApoE3/3, so there are certainly many other

90:07

risk factors I'm paying a lot of

90:09

attention to that side of things you

90:12

know, obicetrapib I think is one to keep an

90:16

eye on that's not yet ready for prime

90:18

time. But rapamycin is interesting. I do

90:20

think rapamycin is interesting with a lot

90:23

of asterisks because you can screw

90:25

yourself up if you don't know what

90:26

you're doing. And if you're playing with

90:29

any immunosuppressant, I mean, you just

90:31

have to be very careful. But looking at

90:35

combining that for instance, one of the

90:37

experiments that I might do is and I

90:40

would have a cleaner read of signal if I

90:43

only did one intervention. But real life

90:45

is different from

90:48

>> waiting for science sometimes.

90:51

So possibly combining Norwegian 4x4

90:53

interval training with rapamycin pulsing to

90:57

look at volumetric changes, if any, in the

90:59

hippocampus and other areas like I think

91:02

that's a pretty interesting hypothesis

91:05

worth testing but otherwise it's basic

91:08

basic, right? It's creatine, it's the

91:12

vitamin Ds. Look, if you have methylation

91:14

issues or you're taking medication as I

91:18

am, like omeprazole, which can inhibit

91:21

magnesium absorption and other things

91:22

like you want to keep an eye on that but

91:25

not too fancy, you know. I think urolithin A

91:27

is pretty interesting

91:29

>> the data keeps mounting on that I do

91:31

have a keen interest in mitochondrial

91:34

health so if there are things which

91:37

could also include regular intermittent

91:40

fasting and occasional 3 to 7-day

91:43

fasting which could be a fast mimicking

91:45

diet most recently for me based on the

91:48

input from Dr. Dominic D'Agostino,

91:50

trying to foster autophagy and mitophagy

91:54

with some regularity. Not all the time.

91:57

Sure.

91:58

>> I'm not trying to optimize for that all

91:59

the time.

92:00

>> One thing I've been wondering, so if you

92:01

look at like a computer and often the

92:05

key to fixing your laptop or the key to

92:07

fixing any system is you just [ __ ]

92:09

reboot it, right? You reload the system

92:10

and it just works magically.

92:12

>> Is there like an equivalent of that? Is

92:14

it like going under for anesthesia?

92:18

Is there some nerve freezing thing that

92:20

some people have been doing recently?

92:22

>> Yeah, I don't know. Sounds scary. Oh,

92:25

maybe stellate ganglion block.

92:27

>> Yeah, that's it. The stellate ganglion block.

92:29

>> Yeah.

92:29

>> Yeah. I mean, the rebooting.

92:34

Oh, man. I'm like letting out an exhale

92:37

because I there are some interesting

92:40

options for very specific use cases. It

92:43

makes sense conceptually. I mean, you're

92:46

more qualified to speak to this, but I

92:48

would say just spending a lot of time

92:50

around neuroscientists and I I spend a

92:52

lot of my time in terms of information

92:54

intake, reading or doing my best.

92:57

Fortunately, with AI tools, it's become

92:59

a lot easier, not just getting a

93:02

synopsis, but actually using it to help

93:03

you learn concepts that you can kind of

93:08

layer in some rational sequence. Sure.

93:11

But I read a lot of neuroscience stuff

93:13

and a lot of optical stuff. There's

93:15

actually a surprising amount of, I mean,

93:17

maybe not so surprising, like a

93:20

very strong intersection there. So if

93:22

you're looking at like PBM and like

93:24

photobiomodulation through the eyes, I

93:26

mean you can do it transcranially as

93:27

well. I would give a note of caution for

93:29

that for folks. But the reboot side I

93:32

would say for instance and people have

93:33

experienced this to a lesser extent with

93:36

GLP-1 agonists. If they take it for

93:39

weight loss, maybe they stop smoking or

93:42

they cut back on drinking or

93:45

they have these

93:47

kind of systemwide decreases or

93:49

increases in impulse control.

93:52

>> Yeah. For someone who's say an opiate

93:54

addict, I think that ibogaine, which in the

93:59

future may take the form of an active

94:01

metabolite or something like that in

94:04

flood dosing, at least that seems

94:07

pretty necessary at this point

94:08

relatively high doses under medical

94:11

supervision because you can have fatal

94:12

cardiac events. Co-administration of

94:15

magnesium seems to help but it's

94:17

dangerous stuff. People should be

94:18

careful.

94:20

You can, and there are lots of people

94:23

historically who deserve a lot of credit

94:24

for this, like Howard Lotsof

94:27

and his wife, but

94:30

opiate addicts can go through flood

94:34

dosing of ibogaine and come out and

94:36

they're basically given a window with

94:38

which they won't experience withdrawal

94:41

symptoms, physical withdrawal symptoms.

94:43

And I think there are probably

94:45

applications to other things with ibogaine

94:48

or pharmacological interventions like

94:51

ibogaine. I mean some of the craziest

94:53

stuff honestly related to that molecule

94:55

is

94:57

the and I'm skeptical of this simple

95:01

description but sort of reversal in

95:02

brain age. It's a changes in the brain

95:06

based on MRIs. Nolan Williams

95:09

and his lab looked at this pretty

95:11

closely, pre- and post-dosing of ibogaine

95:14

for veterans with traumatic brain

95:16

injury. And some of that might be due to

95:20

something called GDNF, glial cell line-derived

95:22

neurotrophic factor, right? People might

95:24

be familiar with like BDNF.

95:27

So ibogaine is one interesting option.

95:30

Anesthesia, I've become a lot more

95:32

cautious with general anesthesia.

95:36

>> Yeah. M like I just had surgery

95:37

yesterday and I opted for local

95:39

anesthesia which in this case was not a

95:41

big deal cuz it was just you can see it

95:43

like had something cut out of my head.

95:46

But coming back to the and I'm going to

95:50

riff for a second here but the autism

95:54

spectrum disorder and ADHD example you

95:57

were unpacking where you talked about

95:59

the incentives, how there might be perverse

96:01

incentives to diagnose.

96:04

Well, I mean, not to quote Munger,

96:09

right? But it's like follow the money,

96:10

right?

96:12

And a lot of people are put under

96:14

general who really don't need to be put

96:16

under general, but it adds a very, very,

96:18

very huge line item to the tab. And

96:23

there are people who go under anesthesia

96:27

and wake up and do not retain the same

96:31

ability to recall memories and so on.

96:34

like their personalities become

96:37

in some way destabilized. And the fact

96:40

of the matter is that a lot of

96:42

anesthesia is very poorly understood. We

96:45

know it works, but it's very poorly

96:47

understood. And I don't think a lot of

96:51

people realize because why would they

96:54

unless they've, you know, just spending

96:56

a lot of time looking into this. There

96:58

are lots of medications that are

97:00

incredibly

97:02

well-known, commonly prescribed for

97:05

which the mechanisms of action are

97:06

really poorly understood, if they're

97:08

understood at all. You know, like we

97:10

know based on studies, they appear to be

97:12

well tolerated. Like side effects

97:14

profiles include A through Z and it

97:17

certainly seems to exert this effect or

97:20

have an impact on biomarker X, but we

97:23

don't actually [ __ ] know how it

97:25

works, you know? And there's just a lot

97:28

of stuff that falls into that bucket.

97:30

And so I am cautious with a lot of it.

97:32

But to come back to your question, I

97:33

went off on a bit of a TED talk. The

97:35

most interesting reboot that I've seen,

97:37

and I I don't want to really water it

97:39

down to like the dopaminergic system

97:41

because there's a lot more to it, but I

97:43

think, more so than ibogaine itself, it shows what

97:48

is possible. And I I don't know if

97:50

that's limited to drugs. I am very

97:53

bullish, and there are going to be fuckups.

97:55

There are going to be some sidebars that

97:57

don't look so good, but brain

97:59

stimulation and bioelectric medicine,

98:03

broadly speaking, is one of the great

98:06

next frontiers, certainly in treating

98:08

what we might consider psychiatric

98:10

disorders,

98:12

but also for performance enhancement.

98:14

And

98:16

we're at a point kind of looking for

98:17

those external why now answers, right?

98:21

There are actually some really good

98:22

answers to why now for this as a field.

98:25

And I think people will be experimenting

98:28

a lot with this, but without the use of

98:30

pills and potions and IVs, using

98:33

non-invasive brain stimulation, maybe

98:35

some invasive in the case of implants.

98:38

So that's a long answer, but yeah,

98:40

that's some of what I'm thinking about and

98:41

tracking. I mean, some of this stuff

98:43

we'll see, but I think a lot of this

98:45

stuff could be outpatient procedure. You

98:46

walk in, you're in there for an hour or

98:48

two, and then you're out.

98:49

>> Mhm.

98:50

>> So, we'll see. Let me ask just a couple

98:52

of last questions and then if there's

98:53

anything else we want to bat around, we

98:55

can bat it around. But I appreciate the

98:56

time. Five years from now, we'll be

98:59

looking back at a lot of today.

99:01

>> Yeah. Are there any beliefs, positions,

99:04

could be related to AI or otherwise that

99:06

you think are more likely than others to

99:08

be wrong?

99:10

>> H that's a good question. I think

99:12

there's all sorts of things I'm going to

99:13

get wrong. And I think we're living

99:14

through a period of big change, which

99:16

means big uncertainty. And so I wouldn't

99:18

be surprised if half the things I think

99:20

are going to happen don't or happen even

99:22

more so or whatever it may be. And

99:24

that's part of the fun of it in terms of

99:25

if we had a perfectly predictive future,

99:27

it'd be very boring, right? Cuz we'd

99:29

know exactly what's coming and that'd be

99:30

awful. Ties into notions of free will

99:32

and all sorts of other things, right?

99:33

I'm sure there's a lot. I think there's

99:35

a separate question of just one exercise

99:37

I've been going through recently is, and

99:39

I've never done this before. You know, a

99:41

lot of what you do in life, it's back to

99:42

the John Lennon quote, life is what

99:43

happens when you're making other plans.

99:45

for the first time I'm actually thinking

99:46

like what's my 10-year plan right across

99:48

a few different dimensions of life and

99:50

the basic question is I won't get it

99:53

right, right? I can try and have a plan

99:55

for 10 years of course it's not going to

99:56

be what I think but it's more does it

99:59

change the scope of ambition that you

100:02

have? Does it change how you think about

100:03

life

100:04

>> and so I've been trying to think in

100:06

those terms like what do I want to do

100:07

over the next decade and that what does

100:10

that mean in terms of the near-term what

100:11

I do in order to get there in 10 years

100:14

and so I think that's been very eye

100:16

opening for me in terms of shifting some

100:18

of my mindset around what I should be

100:20

trying or not trying to do. Now the AGI

100:23

pilled people will say, well, in two years

100:24

we have AGI so it doesn't matter what

100:25

your plans are but I find that to be a

100:28

very kind of defeatist view of the world

100:29

you know it's like I'm going to give up

100:30

because AGI is coming, versus saying, great, I'm

100:33

going to have this plan and I can adjust

100:34

it as needed but through this time of

100:36

change there'll be some really

100:37

interesting things for me to do in the

100:38

world. Well do you have anything else

100:40

you'd like to say, comments, requests for

100:42

the audience? Things to point people to

100:44

anything at all before we wind to a

100:46

close. People can find you on

100:47

X, @eladgil,

100:51

certainly the Substack blog

100:52

blog.eladgil.com

100:54

and elsewhere we'll link to everything

100:55

in the show notes but anything else that

100:57

you'd like to add.

100:58

>> Yeah, it was wonderful to chat with you

100:59

as always. I really enjoy it. So, thanks

101:01

for having me on.

101:02

>> Yeah, thanks man. Always a pleasure. And

101:04

to everybody listening or watching, we

101:07

will link to everything in the show

101:08

notes at tim.blog/podcast.

101:10

And until next time, as always, be a bit

101:13

kinder than is necessary to others, but

101:15

also to yourself. Thanks for tuning in.
