Silicon Valley Insider EXPOSES Cult-Like AI Companies | Aaron Bastani Meets Karen Hao

Transcript

0:00

I went to school with a lot of the

0:01

people that now build these

0:03

technologies. I went to school with some

0:04

of the executives at OpenAI. I don't

0:07

find these figures to be towering or

0:09

magical. Like I remember when we were

0:11

walking around dorm rooms together in

0:13

our pajamas, and it instilled in me

0:17

this understanding that technology is

0:18

always a product of human choices. And

0:21

different humans will have different

0:23

blind spots. And if you give a small

0:26

group of those people too much power to

0:30

develop technologies that will affect

0:31

billions of people's lives, inevitably

0:34

that is structurally unsound.

0:37

[Music]

0:42

Artificial intelligence is the backbone

0:45

of some of the biggest companies in the

0:46

world right now. Multi-trillion dollar

0:48

companies can talk about nothing else

0:51

but AI. And of course, whenever it's

0:53

discussed in the media by politicians

0:55

and civil society, it's compared

0:58

invariably to the steam engine. It is

1:00

going to be the backbone for a new

1:03

machine age. Some people are really

1:06

optimistic about the possibilities it

1:08

will bring. They are the boosters, the

1:11

techno-optimists, the techno-utopians.

1:14

Others are doomers. They're down on AI.

1:17

AGI, artificial general intelligence,

1:19

it's never going to happen. And if it

1:21

does, well, it's going to look like The

1:23

Matrix or maybe even the Terminator and

1:26

Skynet. We don't want that, do we?

1:29

Today's guest, however, is not

1:31

speculating about the future. Instead,

1:34

they're very much immersed in the

1:36

present and indeed the recent past of

1:38

the artificial intelligence industry.

1:41

Karen Hao went to MIT. She studied

1:44

mechanical engineering. She knows the

1:47

STEM game inside out. But she made a

1:50

choice to go into journalism and media

1:53

to talk about these issues with a

1:55

fluency and a knowledge that very few

1:58

people have. Rather than speculate, what

2:00

Karen has done with this book is talk to

2:03

people in the field. 300 interviews with

2:06

260 people in the industry, 150

2:09

interviews with 90 employees of OpenAI,

2:13

both past and present. She has access to

2:15

emails, company Slack channels, the

2:17

works. This is the inside account of

2:22

OpenAI and the perils of artificial

2:25

intelligence, big tech, and big money

2:28

coming after well, pretty much

2:31

everything. It's an amazing story told

2:34

incredibly well. I hope you enjoy this

2:37

interview. Karen Hao, welcome to

2:39

Downstream. Thank you so much for having

2:41

me, Aaron. It's a real pleasure to have

2:42

you on. Right. We say that to

2:45

everybody, every guest. Uh, but I have

2:47

to say, and this has had rave reviews,

2:49

even though I've lost the, uh, dust

2:51

jacket, Empire of AI, with this huge tome

2:55

you've written, 421 pages, I think, not

2:58

including the acknowledgements.

3:01

Really, really interesting book. It's

3:02

about AI, this burgeoning industry in

3:04

the United States around artificial

3:06

intelligence. That word has been in

3:08

circulation since the 1950s, I believe.

3:10

Yeah. Before we drill down into your

3:12

book, what is AI and what do people mean

3:16

by AI when they talk about it in 2025 in

3:19

Silicon Valley? You would think

3:21

this is the easiest question, but this

3:23

is always the hardest question that I

3:24

get because artificial intelligence is

3:27

quite poorly defined.

3:30

We'll go back first to 1956 because I

3:32

feel like it helps understand a little

3:34

bit about why it's so poorly defined

3:36

today. But the term was originally

3:38

coined in 1956 by this Dartmouth

3:41

professor, assistant professor John

3:42

McCarthy. And he coined it to draw more

3:45

attention and more money to research

3:47

that he was originally doing under a

3:49

different name. And that was something

3:50

he has explicitly said a few decades

3:52

later. He said, "I invented the term

3:54

artificial intelligence to get money for

3:56

a summer study." And that kind of that

4:00

that marketing route to the phrase is

4:04

part of why it's really difficult to pin

4:07

down a specific definition today. The

4:10

other reason is because generally people

4:12

say that AI refers to the concept of

4:14

recreating human intelligence in

4:16

computers. But we also don't have a

4:18

scientific consensus around what human

4:20

intelligence is. So quite literally when

4:24

people say AI, they're referring to

4:26

an umbrella of all these different types

4:28

of technologies that appear to simulate

4:31

different human behaviors or human

4:35

tasks. Um, but it really ranges from

4:38

something like Siri on your iPhone all

4:41

the way to ChatGPT, which behind the

4:43

scenes are actually really, really

4:45

different ways of operating. They're

4:47

totally different scales in terms of the

4:49

consumption of the technologies. Um, and

4:52

of course they often have different

4:53

use cases as well. So right now when

4:56

OpenAI, Meta, when they use those words AI

4:59

in regards to their products

5:00

specifically, what are they talking

5:01

about? Most often they are now talking

5:04

about what are called deep learning

5:06

systems. So these are systems that train

5:10

on loads of data and you have software

5:14

that can statistically compute the

5:16

patterns in that data and then that

5:18

model is used to then make decisions or

5:21

generate text or make predictions. So

5:24

most modern-day AI systems built by

5:27

companies like Meta, by OpenAI, by

5:29

Google are now these deep learning

5:31

systems. So deep learning is the same

5:33

as machine learning, is the same

5:36

as neural networks? Deep learning,

5:39

these are synonyms? Deep learning is a

5:41

subcategory of machine learning. Machine

5:43

learning refers to a specific branch of

5:45

AI where you build software that

5:48

calculates patterns in data. Deep

5:51

learning is when you're specifically

5:54

using neural networks to calculate those

5:56

patterns. So, um,

6:00

one of the founding fathers of AI used

6:01

to call AI a suitcase word,

6:04

because you can put whatever you want in

6:05

the suitcase and suddenly AI means

6:07

something different. So we have this

6:09

suitcase word of AI and then under that

6:11

any data-driven AI techniques are called

6:14

machine learning and then any neural

6:16

network data-driven techniques are called

6:19

deep learning. So it's the smallest

6:22

circle within this broader suitcase

6:24

word. So deep learning and neural

6:26

networks are kind of interchangeable?

6:28

Not exactly in the sense that neural

6:29

networks are referring to a piece of

6:31

software and deep learning is referring

6:32

to the process that the software is

6:34

doing, right? Yeah.
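[Editor's note: a minimal illustrative sketch of the nesting described above, not from the interview. Both halves below learn the same toy pattern from data, which is machine learning in general; the second half learns it with a small neural network trained by gradient descent, which is deep learning. The toy function, layer sizes, and learning rate are arbitrary assumptions.]

```python
# Toy illustration of the taxonomy: "machine learning" = software that
# calculates patterns in data; "deep learning" = doing that with a
# neural network. The data and network sizes here are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# Data with a hidden pattern: y = 2x + 1, plus a little noise.
x = rng.uniform(-1, 1, size=(100, 1))
y = 2 * x + 1 + rng.normal(0, 0.05, size=(100, 1))

# Machine learning, non-deep: recover the pattern with least squares.
X = np.hstack([x, np.ones_like(x)])           # add a bias column
w, *_ = np.linalg.lstsq(X, y, rcond=None)     # learns roughly [2, 1]

# Deep learning: learn the same pattern with a tiny neural network
# (one hidden layer of 8 tanh units, trained by gradient descent).
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)                  # hidden layer
    pred = h @ W2 + b2                        # linear output layer
    err = pred - y
    # Backpropagation: gradients of mean squared error.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2

print("least-squares weights:", w.ravel())    # close to [2, 1]
x0 = np.array([[0.5]])
print("network at x=0.5:", (np.tanh(x0 @ W1 + b1) @ W2 + b2).item())  # close to 2.0
```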

6:37

Do you get upset when politicians, so in this

6:39

country we have a prime minister called

6:40

Keir Starmer, you know, and they say we think

6:43

the NHS can save, you know, 20% by, you

6:46

know, using AI applications, right? Do

6:48

you sort of think, my god, like these

6:50

people have no idea what they're talking

6:51

about because that is such an expansive

6:53

term. It can't really; its political

6:55

convenience is precisely that it doesn't mean

6:57

anything. It does frustrate me a

7:00

little bit. So I often use the analogy

7:04

that AI is like the word transportation.

7:06

I mean if transportation can refer to

7:09

bicycles or rockets or self-driving cars

7:11

or gas guzzling trucks, you know, like

7:13

they're all different modes of

7:14

transportation, serve different

7:15

purposes, different cost-benefit

7:17

analyses. And you would never have a

7:20

politician say we need more

7:22

transportation to mitigate climate

7:24

change. You would be like but what kind

7:27

of trans like what are you talking

7:29

about? Well yeah

7:32

we need more transportation to stimulate

7:34

the economy. I mean maybe in that case

7:37

it's like it's just yeah like there is a

7:40

vagueness

7:42

around the AI discussion that is really

7:45

unproductive and I think a lot of that

7:48

leads to confusion where people think AI

7:50

equals one thing and AI equals progress

7:53

and so we should just have all of it but

7:55

actually if we were to use the

7:58

transportation analogy you know like

8:00

having more bicycles having more public

8:02

transit sounds great but if someone were

8:04

actually referring to just like using

8:06

rockets to commute from, you know, um,

8:08

Dublin to to London and we were like,

8:11

everyone should get a rocket now, like

8:13

that's going to bring us more progress,

8:16

you'd be like, what are you talking

8:17

about? And that's effectively what these

8:19

companies are doing with general

8:21

intelligence. When you're giving people

8:22

tools for free with regards to

8:25

generative AI to just generate stupid

8:27

images of nonsense, that's kind of what

8:29

we're doing, right? I presume you

8:30

would take that analogy to that level.

8:32

It's like saying, "Let's use a rocket to

8:34

get from Dublin to London to Paris."

8:36

Yeah, exactly. Like, it's not fit for

8:38

the task. Um, and the extraordinary

8:40

amount of environmental costs for flying

8:42

that rocket when you could have flown a

8:46

much more efficient plane to do the same

8:48

thing is like what are you doing, you

8:50

know? Um, and that's one of the

8:54

things that people don't really realize

8:55

about artificial or about generative AI

8:58

is that the resource consumption

9:01

required to develop these models and

9:04

also use these models is quite

9:06

extraordinary and often times people are

9:08

using them for tasks that could be

9:10

achieved with highly efficient different

9:13

AI techniques. But because we

9:17

use the sweeping term AI to mean

9:19

anything then people just think, "Oh,

9:21

yeah, right, right. I'm just going to

9:22

use Chat GBT for my one-stop shop

9:25

solution for anything AI related." So,

9:28

right now, data centers globally, I

9:29

think, are about 3 to 3.5% of CO2

9:31

emissions. I think the data centers

9:34

for AI are a tiny fraction of that, but

9:36

obviously they're growing at an

9:38

extraordinary pace. Yeah.

9:40

Are there any numbers out there with

9:42

regards to projected CO2 emissions of

9:45

data centers globally 5, 10, 15 years

9:47

from now? Or is it so recent

9:50

that we can't really speculate about the

9:52

numbers involved? There are numbers

9:54

around the energy consumption which you

9:56

could then use to kind of try and

9:58

project, uh, carbon emissions. So

10:00

there was a McKinsey report that

10:02

recently projected that based on the

10:04

current pace of data center and

10:07

supercomputer expansion for the

10:08

development and deployment of AI

10:10

technologies, we would need to add

10:12

around half to 1.2 times the amount of

10:15

energy consumed in the UK annually to

10:19

the global grid in the next 5 years.
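[Editor's note: a rough worked version of that projection, under my own assumption that the UK consumes about 300 TWh of electricity per year; the McKinsey report may use a different baseline, so treat the outputs as order-of-magnitude only.]

```python
# Back-of-the-envelope arithmetic on the projection quoted above.
# The UK figure is my assumption (roughly 300 TWh of electricity/year).
UK_TWH_PER_YEAR = 300

low = 0.5 * UK_TWH_PER_YEAR   # "half a UK" of new annual demand
high = 1.2 * UK_TWH_PER_YEAR  # "1.2 UKs"

HOURS_PER_YEAR = 8760
# Convert annual energy (TWh/yr) to continuous power (GW): 1 TWh = 1000 GWh.
low_gw = low * 1000 / HOURS_PER_YEAR
high_gw = high * 1000 / HOURS_PER_YEAR

print(f"added demand: {low:.0f} to {high:.0f} TWh per year")     # 150 to 360
print(f"as continuous power: {low_gw:.0f} to {high_gw:.0f} GW")  # about 17 to 41
```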

10:22

Wow. Yeah. And most of that will be

10:25

serviced by fossil fuels. This is

10:27

something that Sam Altman actually even

10:29

said in front of the Senate a

10:31

couple weeks ago. He said it will most

10:34

probably be natural gas. So he actually

10:36

picked the nicest fossil fuel. But we're

10:38

already seeing reports of coal plants

10:40

having their lives extended. They were

10:42

meant to be retired, but they're no

10:43

longer being retired explicitly to power

10:47

data center development. We're seeing

10:48

reports of Elon Musk's xAI, the giant

10:52

supercomputer that he built called

10:54

Colossus in Memphis, Tennessee. It is

10:56

being powered with around 35 unlicensed

10:59

methane gas turbines that are pumping

11:01

thousands of toxic air pollutants into

11:04

the air into that community. So this

11:08

data center acceleration is not just

11:10

accelerating the climate crisis. It also

11:12

is accelerating the public health crisis

11:16

of people's ability to access clean air

11:18

as well as clean water. So one of the

11:21

aspects that's really undertalked about

11:24

with this kind of AI development, the

11:26

OpenAI's version of AI development is

11:29

that these data centers need fresh

11:31

water to cool because if they used any

11:34

other kind of water, it would

11:36

corrode the equipment. It would lead to

11:38

bacterial growth. And so most often

11:40

these data centers actually use public

11:42

drinking water because when they enter

11:44

into a community that is the

11:46

infrastructure that's already laid to

11:47

deliver the fresh water to companies, to

11:51

businesses, to residents. And so one of

11:53

the things that I highlight in my book

11:55

is there are many many communities that

11:58

are already they do not have sufficient

12:01

drinking water even for people. And I

12:04

went to Montevideo, Uruguay, to speak with

12:07

people about a historic level of drought

12:10

that they were experiencing where the

12:12

Montevideo government literally did not

12:14

have enough water to put into the public

12:16

drinking water supply. So they were

12:18

mixing toxic wastewater in just so

12:20

people could have something come out of

12:22

their taps when they opened them. And

12:24

for people that were too poor to buy

12:26

bottled water, that is what they were

12:27

drinking. And women were having higher

12:29

rates of miscarriages. Uh, elderly were

12:31

having an exacerbation or inflammation

12:33

of their chronic diseases. And in the

12:36

middle of that, Google proposed to build

12:38

a data center that would use more

12:40

drinking water. This is called potable

12:42

water, right? This is potable water.

12:44

Yeah. Exactly. You can't use seawater

12:46

because of the saline aspect that you

12:48

Exactly. Exactly. And Bloomberg recently

12:51

had a story that said two-thirds of the data

12:53

centers now being built for AI

12:55

development are in fact going into water

12:58

scarce areas.

13:02

You said a moment ago about, um, xAI

13:05

unlicensed energy generation using

13:07

methane gas. When you say unlicensed,

13:09

what do you mean? As in the company just

13:12

decided to completely ignore existing

13:14

environmental regulations when they

13:15

installed those methane gas turbines.

13:18

And this is actually, really,

13:21

one of the things that I

13:23

concluded by the end of my reporting was

13:26

not only are these companies really

13:29

corporate empires, but also that if we

13:32

allow them to be unfettered in their

13:35

access to resources and unfettered in

13:37

their expansion, they will ultimately

13:40

erode democracy. Like that is the

13:41

greatest threat of their behaviors. And

13:43

what xAI is doing is a perfect example

13:47

of at the smallest level

13:51

these companies are

13:53

entering into communities and completely

13:55

hijacking existing laws, existing

13:57

regulations, existing democratic

13:59

processes to build the infrastructure

14:02

for their expansion. And we're seeing

14:04

this hijacking of the democratic process

14:06

at every level, the smallest local

14:09

levels all the way to the international

14:10

level.

14:12

It's kind of that orthodoxy of seeking

14:13

permission after you do something is now

14:15

I mean, when you start applying it, this is

14:17

business as usual for those companies;

14:19

that's part of their expansion strategy

14:21

which we'll talk about and we're going

14:23

to talk about um the sort of global

14:25

colonial aspect as well with regards to

14:26

resource consumption, resource use. Just

14:28

bring it back to the US again because at

14:31

the top of this conversation I want to

14:32

offer a bit of a primer to people out

14:34

there who maybe know what AI is

14:36

they maybe have used ChatGPT

14:39

what are the major companies we're now

14:41

talking about in this space particularly

14:43

in the United States of America over the

14:45

last 5 years? Who are the people in

14:47

this race to AGI? Mhm. Allegedly,

14:50

um artificial general intelligence

14:52

something which you know either might be

14:54

sentient probably not or capable of

14:56

augmenting its own intelligence more

14:58

plausible who are the major players in

15:00

that field right now? One caveat on AGI

15:03

is that it's as ill-defined as the term

15:06

AI um so I like to think of it as just a

15:09

rebranding, you know. The entire

15:12

history of AI has just been about

15:13

rebranding and the term deep learning

15:15

was also a rebranding. So anyway, the

15:18

players: first, OpenAI, of course; they

15:21

were the ones that fired the first shot

15:23

with ChatGPT. Anthropic, a major competitor.

15:27

Google, Meta, Microsoft: they're the older

15:31

uh internet giants that are now also

15:33

racing to deploy these technologies um

15:37

Safe Superintelligence, which spun

15:39

out of, also, an OpenAI splinter

15:43

(there are many OpenAI splinters); this

15:45

was founded very recently by the former

15:48

chief scientist of OpenAI and Thinking

15:50

Machines Lab founded very recently by a

15:53

former chief technology officer of

15:55

OpenAI. And Amazon is now trying to get

15:58

into the game as well. And

16:01

Apple is also trying to get in the game.

16:02

So basically all the older generation

16:04

tech giants as well as a new crop of AI

16:08

players are all jostling in this space

16:11

and that's just the US, right? And

16:12

that's just the US, right. So the

16:15

Chinese ecosystem is interesting because

16:18

they're not so

16:21

um they don't really use the term AGI

16:24

like that. This is a very kind of

16:27

unique thing about the US ecosystem is

16:30

that there's a quasi-religious fervor

16:33

that underpins the construction

16:35

of AI products and services whereas in

16:38

China it's much more like these are

16:39

businesses we're building products that

16:41

users are going to use. So, if you're

16:43

just looking at companies that are

16:45

building chat bots that are sort of akin

16:47

to ChatGPT, then we're talking about

16:50

ByteDance, owner of TikTok. Um,

16:52

Alibaba, the equivalent of Amazon; Baidu,

16:55

the equivalent of Google; Huawei, the

16:57

equivalent of Apple, and uh, Tencent,

17:00

the um, what is the equivalent of

17:02

Tencent? I guess Meta is the

17:04

equivalent of Tencent. So, they're also

17:06

building on these things. And there's

17:08

similarly a crop of startups that are

17:10

moving into the generative AI space. And

17:13

in Europe, we've got the little tiddlers

17:14

like Mistral in France, you know, really

17:16

not at the races 'cause we're Europe.

17:18

Um what's the business case for all

17:21

this? Because obviously you've got

17:22

massive companies

17:24

often driven by maximizing shareholder

17:26

value, multi-trillion dollar valuations.

17:30

You do these things, you invest money to

17:32

make money as a capitalist society. So

17:33

what is the business case made by

17:35

say Microsoft when they have their

17:38

shareholder meetings and they say we're

17:39

going to allocate 40, 50 billion dollars

17:41

towards building data centers and so on.

17:44

So it's interesting that you

17:46

mentioned Microsoft because Microsoft

17:49

has recently been pulling back their

17:51

investments in data centers. They

17:52

went all in and now they're really

17:54

rapidly starting to abandon data center

17:56

projects. So to answer your question, it

17:59

is really unclear what the business case

18:01

is and Microsoft has been one of the

18:03

first companies to start acknowledging

18:05

that, and Satya Nadella has come onto

18:08

some podcasts recently where he actually

18:11

stunned some people in the industry by

18:13

being quite skeptical of whether or not

18:15

this race to AGI was productive. Um but

18:20

one of the things that I really felt

18:24

after reporting, about what is driving the

18:27

fervor is you can't actually fully

18:29

understand it as just a story about

18:31

money. It has to also be understood as a

18:34

story of ideology, because in the

18:38

absence of a business case then you ask

18:40

why are people still doing this? And the

18:43

answer is there are people who genuinely

18:46

fervently believe and they talk about it

18:49

as a belief in this idea that we can

18:52

fundamentally recreate human

18:53

intelligence and that if we can do that

18:56

there is no other more important thing

18:58

in the world because what else like how

19:02

else should you be dedicating your

19:03

time other than to bring about this

19:05

civilizationally transformative

19:07

technology. And so that's part of

19:11

what drives OpenAI, what drives

19:13

Anthropic, what drives Safe

19:15

Superintelligence, these other smaller

19:17

startups. And then the bigger giants

19:20

which are more business focused and more

19:23

classic companies that actually care

19:25

about their bottom lines, they end up

19:27

getting pressured because shareholders

19:30

are seeing the enormous amounts of

19:32

investment by these startups and they're

19:35

seeing users start shifting from Google

19:38

search to using ChatGPT as search.

19:40

ChatGPT should not be used as search, but

19:42

consumers think that it is. And then

19:45

shareholders ask in Google's shareholder

19:48

meetings, what are you doing with AI?

19:50

What is your AI strategy? Why aren't you

19:52

investing in this technology? And so

19:54

then all of the other giants end up

19:57

racing in the same direction.

19:59

What does Warren Buffett make of it?

20:01

That's what I want to know. Is he sort

20:02

of like, you guys, is he like, you guys

20:04

are wasting your money? He's

20:05

probably right. I have no idea. Has

20:06

he invested in AI? No, I don't

20:08

think so. He just sticks to Coke and

20:10

these sorts of things, doesn't he? I

20:11

mean, there are two rationales. So I

20:13

think one is, like you say, a quasi-

20:16

religious fervor has inflected the

20:18

investment decisions of some of the

20:20

world's most um valuable companies which

20:23

is just an extraordinary thing to even

20:24

think about. I suppose the other one is

20:26

that a lot of people in this space, as

20:28

we'll talk about in a moment, are

20:29

heavily influenced by people like Peter

20:31

Thiel. And Peter Thiel's orthodoxy is that

20:33

competition is for idiots, right? If

20:35

you're going to start a business, it has

20:36

to be a monopoly. And I can only presume

20:39

that companies like Microsoft, etc.,

20:41

Although maybe that's not the best

20:42

example now given recent events, but

20:44

xAI, OpenAI, Meta, the only reason you

20:48

would invest ultimately hundreds of

20:50

billions, trillions of dollars into this

20:52

is because first mover advantage gives

20:54

you a monopoly on the most

20:56

transformational technology since the

20:57

steam engine. Yeah. I mean, that's the

20:59

only way I can make sense of it, right?

21:02

Has anybody in that space

21:04

kind of said that we want the

21:06

monopoly on AGI? We want to be the

21:08

Facebook of AGI. Well, what OpenAI

21:11

often says to investors is if you make

21:14

this seemingly fantastical bid into our

21:17

technology, you could get the biggest

21:21

returns you've ever seen in your life

21:23

because we will then be able to use your

21:25

funding to get to AGI first. So, it's

21:28

still riding on this concept of the fact

21:30

that there might be an AGI, which is

21:32

high, it's not like rooted in scientific

21:34

evidence. Um, and even if we fail, we

21:39

will successfully be able to automate a

21:42

lot of human tasks to the point where we

21:44

can convince a lot of executives to hire

21:46

our software instead of a labor force.

21:49

So that in and of itself could

21:51

potentially end up generating enough

21:53

returns for you more than you've ever

21:55

seen before. So that's usually the pitch

21:57

that they make. But you know it is a

22:00

huge risky bargain that these investors

22:04

are actually pitching into. And you

22:05

know a lot of investors have a

22:08

bandwagon mentality like they aren't

22:10

necessarily doing their own analysis to

22:12

say let me do this investment. They're

22:13

just seeing everyone glom onto this

22:16

thing and they're like well I don't want

22:17

to miss out. Why don't we glom on as

22:19

well? But you know, there are some

22:22

investors that have actually recently

22:24

reached out to me to be like, one of the

22:25

most underreported stories right

22:27

now is the amount of risk that is not

22:29

just being taken on by these VCs is

22:31

actually being taken on by the entire

22:33

economy because the money that these

22:35

investors are investing comes from like

22:37

university endowments and things like

22:39

that. So if the bubble pops, it doesn't

22:42

just pop for Silicon Valley, it actually

22:44

will have ripple effects across the

22:46

global economy. I mean, when you look at

22:48

the sort of e-commerce bubble in

22:51

the late 90s, okay, it was a bubble, you

22:54

know, pets.com or whatever. It was, you

22:56

know, had these crazy valuations, but,

22:58

you know, buying and selling goods and

23:00

services offline and then taking that

23:02

online. I mean, that makes sense. That's

23:03

that's a plausible sort of commercial

23:06

model, but like you say, nobody's really

23:08

done that with artificial intelligence.

23:10

It does kind of feel like, you know, you

23:12

read these stories about Tulip Mania in

23:13

17th century Holland, and it does kind

23:15

of feel very similar. Um, you mentioned

23:19

OpenAI, and we've talked about it many

23:21

times, and of course OpenAI is the

23:23

central organization in this book.

23:26

What's the big idea behind OpenAI? When

23:28

it starts and when does it start?

23:30

Let's see, end of 2015. 2015. So, it's 10

23:32

years old. What are the animating values

23:36

that give birth to OpenAI? So OpenAI

23:39

started as a nonprofit which many people

23:41

don't realize based on the fact that

23:43

it's one of the most capitalistic if not

23:45

the most capitalistic organization in

23:46

Silicon Valley today.

23:48

But it was co-founded by Elon Musk and

23:51

Sam Altman as a bid to try and create a

23:56

fundamental AI research lab that could

23:58

develop this transformative technology

24:01

without any kind of commercial

24:03

pressures. So they positioned themselves

24:05

as the anti-Silicon Valley, the anti-

24:07

Google because Google at the time was

24:10

the main driver of AI development. They

24:12

had developed a monopoly on some top AI

24:15

research scientists and Musk in

24:18

particular had this really great fear of

24:20

not just Google but Google's DeepMind,

24:22

uh, Google's acquisition of DeepMind,

24:25

where he was very worried that this

24:27

consolidation of some of the brightest

24:30

minds would lead to the development of

24:33

AI that would go very badly wrong. And

24:37

what he meant by very badly wrong was it

24:39

could one day develop sentience

24:41

consciousness, go rogue and kill all

24:44

humans on the planet. And because of

24:48

that fear, Altman and Musk then thought,

24:52

we need to do a nonprofit, not have

24:55

these profit- driven incentives. We're

24:57

going to focus on being completely open,

24:59

transparent, and also collaborative to

25:02

the point of self-sacrificing if

25:04

necessary. If another lab starts making

25:08

faster progress than us on AI and on the

25:11

quest to AGI, we will actually

25:15

just join up with them. We will

25:17

dissolve our own organization and join

25:19

up with them and uh that didn't hold for

25:22

very long.

25:24

So what's their theory behind that?

25:26

Because you know at that point Google is

25:28

now about maybe 2015 is maybe the

25:31

world's most valuable company. I don't

25:32

know. It's certainly up there, and this is

25:34

a nonprofit. Yeah. So how are they

25:36

going to achieve AGI before Google?

25:40

So initially the bottleneck that they

25:42

saw was talent: right, Google has

25:44

this monopoly in talent. We need to chip

25:46

away at that monopoly and get some of

25:48

those Google researchers to come to us

25:50

and also start acquiring PhD students

25:52

that are just coming out of uni. And

25:55

because of that I have come to speculate

25:59

this is not based on any documents that

26:00

I read or anything. I've come to

26:02

speculate that part of the reason why

26:04

they started as a nonprofit in the first

26:05

place is because it was a great

26:08

recruitment tool for getting at that

26:11

bottleneck.

26:12

They could not compete on salaries with

26:15

Google, but they could compete on a

26:17

sense of mission. And in fact, when

26:20

Altman was recruiting the chief

26:21

scientist, Ilya Sutskever, who was the

26:23

critical first acquisition of talent,

26:27

that then led to many other scientists

26:28

being really interested in working for

26:30

OpenAI. He appealed to Sutskever's sense

26:33

of purpose, like do you want to do you

26:35

want a big salary and just to work for a

26:37

for-profit company or do you want to

26:39

take a pay cut and do something big with

26:41

your life? And it was actually that

26:43

reason that Sutskever said, you know what,

26:45

you're right. I do want to work for a

26:46

nonprofit. And so that's how they

26:50

initially conceived of competing with

26:52

Google was: we're starting a little

26:54

bit late to the game. How do we first

26:57

get a bunch of really really smart

26:58

people to join us? let's create this

27:02

really big sense of mission. And

27:05

I open the book with two quotes in the

27:07

epigraph and one of them is from Sam

27:09

Altman, writing a blog post in 2013, and he

27:13

quotes someone else that says: successful

27:17

people build companies; more successful

27:20

people build countries; the most

27:22

successful people build religions. And

27:25

then he reflects on this and says it

27:27

seems to me that the most successful

27:29

founders in the world don't actually set

27:31

off to build a company. They set off to

27:33

build a religion. And it turns out

27:35

building a company is the easiest way to

27:37

do so. And so, you know, that's like,

27:41

2013, and then in 2015 he creates OpenAI as

27:44

a nonprofit.

27:46

It's important to say as well, Sam

27:47

Altman is not some sort of idealistic

27:50

pauper, you know, he's working at Y

27:52

Combinator. He is very much enmeshed

27:54

within the Silicon Valley elite. Um,

27:57

I suppose also there's tax as well,

28:00

right? If you're a nonprofit, you've got

28:01

the mission, you've also got a bunch of

28:02

tax breaks which you don't have as a for

28:04

profit. So maybe there's a very cynical

28:06

genesis there. Um, but I suppose just

28:10

reading your book and becoming more

28:11

familiar with the arguments over time,

28:13

you know, clearly the amount of compute

28:16

you have was always going to be

28:18

critical. And if you believe

28:20

in the, um, neural network model, the

28:23

deep learning model, the amount of

28:25

compute you have is always going to be

28:26

critical. And it just seems implausible

28:27

that a nonprofit could ever have been

28:29

able to compete with Google, for

28:32

instance, ever. Like, it seems

28:33

implausible because you have to spend as

28:35

we now see tens of billions, hundreds of

28:37

billions of dollars on compute.

28:40

Did nobody say that? Did nobody say,

28:41

"Hey, you know, like the bottleneck

28:43

isn't just talent, actually. It's being

28:44

able to spend hundreds of billions of

28:46

dollars on these Nvidia GPUs." It's so

28:49

interesting because at the time the idea

28:54

that you needed a lot of compute was

28:56

actually

28:58

neither very popular nor one that was

29:01

seen as that scientifically rigorous.

29:04

Right? So there were many

29:06

different ideas of how to advance AI.

29:08

One was we already actually have all the

29:10

techniques that we need and we just need

29:12

to scale them. But that was considered a

29:14

very extreme opinion. And then on the

29:16

other extreme it was we don't even have

29:17

the techniques yet. And interestingly

29:19

recently there's a New York Times story

29:21

that says why we likely won't get to AGI

29:23

anytime soon, by Cade Metz. And he cites

29:26

this stat that 75%

29:29

of the longest-standing, most respected

29:32

AI researchers actually still think to

29:34

this day we don't actually have the

29:36

techniques to get to AGI if we will

29:38

ever. So, we're kind of

29:41

coming full circle now and it is

29:43

starting to become unpopular again. This

29:45

idea that you can just scale your way to

29:47

so-called intelligence, but that was the

29:52

research vibe when OpenAI started was we

29:56

can actually maybe just innovate on

29:58

techniques, right? And then very quickly

30:02

because Ilya Sutskever in particular was

30:05

a scientist who anomalously did think

30:08

that scaling was possible and because

30:11

Altman loved the idea of adding zeros to

30:14

things from his career in Silicon Valley

30:17

and because Greg Brockman, the chief

30:19

technology officer, also a very Silicon

30:21

Valley entrepreneur, liked that idea as

30:23

well then they identified why don't we

30:26

go for scale because that is going to be

30:28

the fastest way to see whether we can

30:32

beat Google.

30:34

And once they made that decision, about

30:39

less than a year in, roughly, is when they

30:41

started actually talking about that,

30:44

that's when they decided we actually

30:46

need to convert into a for-profit because

30:50

the bottleneck has shifted now from

30:52

acquiring talent to acquiring capital.

30:55

And that is also why Elon Musk and Sam

30:57

Altman ended up having a falling out

30:59

because when they started discussing a

31:01

for-profit conversion, both Elon Musk

31:04

and Sam Altman each wanted to be the CEO

31:06

of that for-profit. And so they couldn't

31:09

agree. And originally Ilya Sutskever and Greg

31:11

Brockman chose Musk. They thought that

31:14

Musk would be the better leader of

31:17

OpenAI. But then Altman essentially, and

31:20

this is something that is very classic,

31:23

a very classic pattern in his career,

31:25

became very persuasive to Brockman, who

31:28

he had had a long-term relationship with

31:30

about why it could actually be dangerous

31:32

to go with Musk, and like, I would

31:36

definitely be the more responsible

31:37

leader, so on and so forth. And then

31:39

Brockman convinces Sutskever, and the two,

31:42

chief scientist and chief technology

31:44

officer, pivot their decision and they go

31:46

with Altman and then Musk leaves in a

31:48

huff and says I don't want to be part of

31:50

this anymore which has become rather

31:53

typical of the man hasn't it

31:54

subsequently but that is incredible

31:56

really. So by 2016 there's a recognition

31:58

that in terms of capital investment

32:00

they're going to have to go toe-to-toe

32:01

with maybe at that point the world's

32:03

biggest company and they're a nonprofit.

32:05

Yeah. I just find it weird. But

32:07

lots of people bought the propaganda

32:08

that OpenAI was in some way open. Yeah.

32:12

What did the open stand for by the way?

32:14

The open originally stood for open

32:16

source which in the first year of open

32:19

AI they really did open source things.

32:21

They did research and then they would

32:22

put all their code online. So it

32:25

really was like, they did what

32:27

they said and then the moment that they

32:29

realized we got to go for scale then

32:31

everything shifted. It's such an amazing

32:34

story and so emblematic of the 2010s

32:37

that you have this organization which

32:39

presents itself as effectively an

32:41

extension of activism. Yeah. You know,

32:43

ends up becoming, today, some people value

32:45

OpenAI at $300 billion. Yeah. Um, and

32:48

it's doing all these terrible things

32:49

which we're going to talk about. Sam

32:51

Altman specifically,

32:53

who is he? What's his background? How

32:55

does this guy who nobody's heard of

32:57

become the CEO of a company which today

33:00

is, you know, almost more

33:02

valuable than any company in Europe for

33:04

instance. Yeah. Altman has spent his

33:08

entire career in Silicon Valley. He was

33:10

first a founder, a startup founder

33:11

himself and he was part of the first

33:14

batch of companies that joined Y

33:18

Combinator, today one of the

33:21

most prestigious startup accelerators in

33:24

Silicon Valley. But at the time, he was

33:26

in the very first class, and no one

33:28

really knew what YC was. He did that for

33:31

seven years. He was running a company

33:34

called Loopt, which was a mobile-based

33:36

social media platform, effectively a

33:38

Foursquare competitor, but which

33:40

actually started earlier than

33:41

Foursquare. It didn't do very well. It

33:44

was sold off for parts and but what he

33:46

did do very well during that time was

33:50

ingratiate himself with very powerful

33:53

networks in Silicon Valley. So, one of

33:56

the first and longest mentors that he

33:58

ended up having throughout his career is

34:00

Paul Graham, the founder of Y

34:02

Combinator, who then plucked Sam Altman

34:05

to be his successor. And Sam Altman then

34:07

at a very young age became president of

34:09

YC. And then he ended up doing that for

34:12

around 5 years. And during his tenure at

34:14

YC, he dramatically expanded YC's

34:17

portfolio of companies. He started

34:19

investing not just in software companies

34:21

but also pushing into quantum, into

34:24

self-driving cars, into fusion, and really

34:27

going for those hard tech engineering

34:30

challenges. And if you look at how he

34:33

ended up then as a CEO of OpenAI,

34:36

I think that he basically was trying to

34:40

figure out what is going to be the next

34:43

big technology wave. Let me test out all

34:46

of these different things, position

34:48

myself as involved in all of these

34:50

different things. Um, so in addition to

34:52

all his investments, he started

34:53

cultivating this idea of AI also seems

34:55

like maybe it'll be big. Let me start

34:57

working on an idea for a fundamental AI

35:00

research lab; that becomes OpenAI. And

35:02

once OpenAI started being the fastest

35:04

one taking off, then Altman hops over and

35:07

becomes CEO.

35:09

He hops over. So how does that happen?

35:12

Where does he come from? Cuz like you

35:14

say originally it's got people like, he's

35:15

there. Who's there first, him or Ilya? So

35:18

technically Altman recruited Sutskever,

35:21

but Altman was only a chairman; he

35:26

didn't take an executive role at OpenAI

35:28

even though he founded the company right

35:31

and similarly with Musk: Musk didn't have

35:33

an executive role he was just a

35:34

co-chairman so it was just the two of

35:36

them that were chairman of the board and

35:39

Ilya Sutskever and Greg Brockman were the

35:41

main people, the main executives, that

35:43

were actually running the company

35:44

dayto-day in the beginning I mean I have

35:46

to say, reading the book, Sam Altman

35:48

comes across as a master manipulator,

35:50

like masterful manipulator and

35:52

understander of human psychology. There's

35:53

this great quote let me get it up uh

35:56

which you have I think it's from Paul

35:58

Graham

35:59

um, about Sam Altman: you could parachute

36:02

him into an island full of cannibals and

36:04

come back in 5 years and he'd be the

36:06

king. If you're Sam Altman, you don't have

36:08

to be profitable to convey to investors

36:10

that you will succeed with or without

36:12

them I mean

36:15

he just sounds... He's also described, by

36:16

the way, as a once-in-a-generation

36:18

fundraising talent. I think that's by

36:20

you. Yeah. Um,

36:22

how is he able to just basically

36:24

come out of nowhere and compete with

36:25

people like Elon Musk, Zuckerberg as

36:28

this kind of intellectual heavyweight in

36:30

Silicon Valley in regards to one of the

36:31

major growth technologies of our

36:33

decade. So, from the public's

36:35

perspective, he came out of nowhere. But

36:37

within the tech industry, everyone knew

36:38

Sam Altman. You know, like I, as someone

36:42

who worked in tech, like I knew Sam

36:43

Altman ages ago because Y Combinator was

36:46

just so important. But as CEO of

36:49

a company that valuable, was it

36:50

always something that he might become?

36:52

No, I don't think people ever thought

36:55

that he would jump to become the CEO of

36:58

a company because he has such an

36:59

investor mindset and his approach has

37:02

always been to be involved in many many

37:05

companies. I mean he invested in

37:07

hundreds of startups as both the

37:10

president of YC and running some uh

37:12

personal investment funds as well. But

37:16

he was well respected within the

37:19

valley. He was seen as a critical

37:23

lynchpin of the entire startup ecosystem

37:26

and not just by people within the

37:29

industry but by policy makers which is

37:31

key. He started cultivating

37:33

relationships with politicians very very

37:36

early on in his tenure as the president

37:38

of YC. And for example, I talk in my

37:41

book about how Ash Carter, the head of

37:44

the Department of Defense under the

37:45

Obama administration, came to Altman

37:49

asking, "How can we get more young tech

37:51

entrepreneurs to partner with the US

37:54

government?" So, he was seen as a

37:56

gateway into the valley. And

38:00

obviously the valley isn't just made of

38:01

of startups. There's also the tech

38:03

giants. But back then like starting a

38:05

startup was way cooler than working at a

38:08

tech giant because Google, Microsoft,

38:11

they were considered the older safer

38:13

options if you really wanted job

38:15

security. But if you wanted to be an

38:16

innovator, if you wanted to do

38:19

breathtaking things, you would build a

38:23

startup. And then your number

38:25

one goal as a startup founder was to get

38:27

into YC. So Altman was the pinnacle. He

38:31

was emblematic of the

38:34

pinnacle of success in the valley. And

38:36

he even if his net worth wasn't the same

38:38

as other people in terms of his social

38:40

capital, his networking, he understood

38:42

early on that's where the real value

38:43

lies. Exactly. So interesting. I mean

38:45

some notes that I wrote down um cuz

38:49

there are points where I'm

38:51

thinking why on earth is this gentleman

38:52

the CEO of such a valuable company he

38:54

seems kind of useless and the notes I

38:56

had down were, um: people pleaser, yes, liar,

39:00

conflict averse

39:04

how do you become the CEO of such a

39:06

successful company

39:08

maybe you think that or don't think that

39:10

I don't know. I mean, at points it

39:12

comes across as almost psychotic, the

39:14

capacity to lie.

39:17

Here's an interesting question for me

39:18

and I don't know how

39:20

comfortable you are with answering it.

39:22

In writing this book, there's another

39:24

alternative timeline where you basically

39:26

write a hagiography of Sam

39:28

and you leave all of that out, right?

39:30

There are other writers out there, I

39:31

won't name them, they sell a ton of

39:33

books, and they write very positive,

39:35

affirming um biographies of these

39:39

visionary leaders, whether it's Elon

39:40

Musk or Steve Jobs, etc.

39:43

Why didn't you just write that book

39:44

about Sam Alman? You know, you would

39:46

have made a ton more money.

39:48

Right. But I'm reading this

39:50

stuff and I'm thinking, my god, this, and

39:52

it's so deft and nuanced, your

39:54

portrait of Sam Altman. I just think the

39:56

guy, I mean, this is going to really

39:58

hurt him when he reads this stuff. I

40:00

imagine. Why didn't you do that? Take the

40:03

easy route. I don't know that that would

40:05

have been the easy route.

40:07

I mean, I just wrote the facts and the

40:12

facts come out that way, you know, like

40:14

I interviewed over 260 people across 300

40:18

different interviews and over 150 of

40:21

those interviews were with people who

40:23

either worked at the company or were

40:26

close to Sam Altman. And that's just what

40:28

they presented was all of the details

40:31

that I ended up putting in. And one of

40:33

the things that just came

40:36

through again and again and again, well,

40:37

two things that came through again

40:39

and again,

40:41

no matter how long someone worked with

40:43

him or how closely they worked with him,

40:45

they would always say to me, at the end

40:48

of the day, I don't know what Sam

40:49

believes. So that's interesting. Mm. And

40:53

then the other thing that came through

40:55

was I would ask them well what did he

40:58

say to you he believed in this meeting

41:00

at this point in time for why the

41:04

company needed to do this XYZ thing and

41:08

the answer was he always said he

41:10

believed what that person believed

41:13

except because I interviewed so many

41:14

people who have very divergent beliefs

41:16

and I was like wait a minute he's saying

41:18

that he believes what this person

41:20

believes and then what that person

41:21

believes and they're literally

41:22

diametrically opposite. So yeah, so I

41:26

just I just ended up documenting all of

41:28

those different details to illustrate

41:30

how people feel about him. I mean, he's

41:33

a polarizing figure both extreme in the

41:36

positive and negative direction. Some

41:38

people feel he is the greatest tech um

41:40

leader of our generation, but

41:44

they don't say that he is honest when

41:48

they say that. They just say that he's

41:52

one of the most phenomenal assets for

41:56

achieving a vision of the future that

41:58

they really agree with. And then there

42:01

are other people who hate his guts and

42:04

say that he is the greatest threat ever.

42:06

And it really also comes down to whether

42:08

or not they agree with his vision and

42:10

they don't. And so then his persuasive

42:13

powers suddenly become manipulative

42:15

tactics. Mm. I mean, if you compare him to

42:18

somebody like Elon Musk as a CEO who is

42:20

obviously far from perfect but Elon Musk

42:22

makes big bets. He has gut

42:25

instincts. He's very happy to alienate

42:27

people if he thinks he's right about

42:29

something. And you know obviously I

42:31

don't agree with him on many many things

42:33

but that's that's quite a sort of

42:34

there's an archetype with regards to a

42:35

business leader that that looks like

42:37

that. And then you got somebody like Sam

42:39

Altman. He's doing all of these things.

42:40

Like I say, the people-pleasing, the

42:42

conflict aversion, and yet he's managed

42:45

to lead this company to essentially a

42:47

third of a trillion valuation. He must

42:49

obviously be doing something right as

42:51

well. So what are his sort of

42:52

comparative advantages as a business

42:54

leader cuz on paper I read all that

42:56

stuff and I think the guy wouldn't be

42:57

able to get up in the morning and make

42:59

breakfast and yet he's accomplished some

43:00

extraordinary things. Yeah, I think it

43:02

really comes down to: he does

43:04

understand human psychology very well,

43:07

which not only is helpful in

43:10

getting people to join in on his quest.

43:14

So, he's great at acquiring talent,

43:16

and he's said himself, like, I'm

43:19

a visionary leader. I'm not an

43:20

operational leader and my best skill is

43:23

to acquire the best people that then

43:25

operationalize the thing. So, he's

43:27

good at persuading people into joining

43:29

his quest. He's good at persuading what

43:33

whoever has access to whatever resource

43:35

he needs to then give him that resource

43:36

whether it's capital, land, energy,

43:38

water,

43:40

laws, you know. Um, and then he is

43:45

people have said that he instills a very

43:49

powerful sense of belief in his vision

43:53

and in their ability to then do it.

43:57

He's good. We say in English soccer, we

44:00

would say, a good man-manager. He can

44:02

inspire people. He inspires people to do

44:04

things that they didn't think that they

44:05

would be able to do. Yeah. Um but yeah,

44:09

but I mean

44:11

this is why there's so much

44:13

controversy. He is such a polarizing

44:15

figure, because

44:19

everyone has a very personalized

44:23

encounter, relationship, with him because

44:26

he often does his best work

44:30

in one-on-one meetings when he can say

44:33

whatever he needs to say to get you to

44:35

do, believe, achieve whatever it is that

44:38

he needs you to do. And that's also part

44:41

of the reason why there's so many

44:42

diverging like people that are like,

44:45

"Oh, I think he believes this. I think

44:47

he believes that." And they're like

44:48

totally diverging. It's because

44:49

he's having these very personalized

44:51

conversations with people. Um, and so

44:55

some people end up coming out of those

44:57

personalized meetings feeling totally

44:58

transformed in the positive direction,

45:00

being like, I feel super human. I can

45:02

now do all these things and it's in the

45:03

direction that I want to go. It's I'm

45:06

building the future that that he sees

45:09

and I see, and we're, like, aligned. And

45:11

and then other people end up coming out

45:13

of these meetings feeling like was I

45:15

played, you know, like, was he

45:18

just telling me all these things to try

45:20

and get me to do something that's

45:21

actually fundamentally against my

45:22

values. You said you spoke to 150 people

45:25

who were connected with OpenAI, um, over

45:28

150 interviews. Yeah. Yeah. Sorry, 150

45:30

interviews there; 300 interviews altogether, 260

45:35

people altogether. The numbers were...

45:37

No, but it's absolutely incredible. I

45:39

should have said this right at the start

45:40

really. What's your personal

45:42

sort of bio on all this stuff? Because

45:44

of course when people out of journalism

45:46

media cover technology, the intersection

45:49

of that with politics, we go, well, they

45:50

don't really know what they're talking

45:51

about. They're generalists because they

45:52

come out of journalism. What's your

45:54

background? Because it's quite

45:54

particular. I studied mechanical

45:57

engineering at MIT for undergrad and I

46:00

went and worked in Silicon Valley

46:02

because that's what I thought I wanted

46:03

to do. I lasted a year before I realized

46:07

it was absolutely not what I wanted to

46:09

do. And then I went into journalism. And

46:11

the reason why I had such a visceral

46:15

reaction against Silicon Valley is

46:17

because I was quite interested in

46:20

sustainability and how to mitigate

46:22

climate change. And the reason why I went to

46:25

study engineering in the first place was

46:26

I thought that technology could be a

46:28

great tool for social change and shaping

46:31

consumer behaviors to to prevent us from

46:34

planetary disaster. And I realized that

46:38

Silicon Valley's incentive

46:42

structures for

46:42

producing technology

46:45

were not actually leading us to develop

46:48

technologies in the public interest. And

46:49

in fact, most often it was leading to

46:52

technologies that were eroding the

46:55

public interest. And the problems like

46:57

mitigation of climate change that I was

46:59

interested in were not profitable

47:02

problems. But that is ultimately what

47:04

Silicon Valley builds. They want to

47:05

build profitable technologies. And so it

47:09

just seemed to me that it didn't really

47:11

make sense to try and continue doing

47:13

what I wanted to do within a structure

47:15

that didn't reward that. Yeah. And then

47:18

I thought, well, I've always liked

47:20

writing. Maybe I can use writing as a

47:21

tool for social change. So I switched to

47:23

journalism. You went to MIT Review,

47:25

right? And then I went to a few

47:27

publications and then eventually MIT

47:29

Technology Review to cover AI and then

47:31

Wall Street Journal. And then the Wall

47:33

Street Journal. I mean, these are big

47:34

just so people know, there's real

47:35

credibility behind this.

47:36

All these interviews, this CV. Um,

47:40

and it's interesting as well you say, I

47:41

wouldn't write a hagiography, I just wrote

47:43

what was there. I mean, maybe that's

47:45

partly an extension of your sort of

47:47

STEM background, right? You know, rather

47:49

than writing, like, propaganda or a puff

47:51

piece, which, let's be honest, is most

47:53

coverage of the sector, but it's

47:55

true, right? Well, you know, people

47:57

often ask me, like: how much

47:59

did my engineering degree help me in

48:01

reporting on this? And I think it helps

48:03

me in ways that are not what people

48:06

would typically assume. I went to school

48:09

with a lot of the people that now build

48:11

these technologies. I went to school

48:12

with some of the executives at OpenAI,

48:15

you know, and so for me, I do not find

48:20

there to be magic. I don't find these

48:22

figures to be towering or magical. Like

48:25

I remember when we were walking around

48:27

dorm rooms together in our pajamas and

48:30

it instilled in me this understanding

48:32

that technology is always a product of

48:34

human choices. And different humans will

48:38

have different blind spots. And if you

48:40

give a small group of those people too

48:44

much power to develop technologies that

48:46

will affect billions of people's lives,

48:49

inevitably that is structurally unsound.

48:53

Like, we should not be allowing small

48:57

groups of individuals to concentrate

48:59

such profound influence on society when

49:02

you cannot expect any

49:06

individual to have such great visibility

49:10

into everything that's happening in the

49:11

world and perfectly understand how to

49:14

craft a one-size-fits-all technology that

49:17

ends up being profoundly beneficial for

49:19

everyone. Like, that just doesn't make

49:21

sense at all. Um, and I think the other

49:25

thing that it really helps me with is

49:28

this: Silicon Valley is an extremely

49:30

elitist place and it allows me to have

49:35

an honest conversation with people

49:36

faster because if they start

49:39

stonewalling me or like trying to

49:42

pretend that there are certain things that

49:44

these technologies are capable of that

49:46

they're not actually capable of, I will

49:48

just slap my MIT degree down and be

49:50

like, "Cut the bull crap." like tell me

49:52

what's actually happening." And it is a

49:54

shortcut to getting them to just speak

49:56

more honestly to me, but it's not

49:58

actually because

50:00

of what I studied. It's more just that

50:02

it signals to them that they need to

50:05

speed up their throat clearing. That's

50:08

really interesting though. Yeah, because

50:10

I do feel like lots of coverage of

50:12

this sector. I mean, again, I can only

50:13

speak in regards to the UK and we're a

50:15

tiddler compared to you guys, but at

50:17

the intersection of particularly

50:18

politics and technology the coverage by

50:20

political journalists at Westminster you

50:22

know, Keir Starmer and Rachel Reeves say we're

50:24

going to build more data centers isn't

50:25

that fantastic? Actually, not necessarily:

50:27

they're not going to create that many

50:28

jobs once they're built they can use a

50:29

ton of energy, a ton of water. What's the

50:31

upside for the UK taxpayer? There is very

50:33

little interrogation of just the press

50:35

releases. Yeah. Um, and it's really

50:37

interesting to me that you've come out

50:39

of MIT and then you've taken this

50:40

trajectory. Is this stuff you just talked

50:43

about, knowing these people, this tiny

50:44

group of people whose decisions now

50:46

affect billions already. Is this stuff,

50:50

on the present trajectory, an

50:52

existential challenge to democracy? And

50:55

'challenge' is speculative. Is it going

50:58

to end democracy?

51:00

I think it is greatly threatening and

51:02

increasing the likelihood of democracy's

51:05

demise. But I never make predictions

51:08

of

51:10

this outcome will happen because it

51:11

makes it sound inevitable. And one of

51:13

the reasons why I wrote the book is

51:15

because I very much believe that we can

51:17

change that and people can act now to

51:20

shape the future so that we don't lose

51:22

democracy. But on this trajectory,

51:23

right, if the next 20 years are like the

51:26

last 20 years, on this trajectory for

51:27

sure, I think it will end democracy.

51:29

Yeah. How quickly?

51:34

We've really screwed up in the last 20

51:35

years, right? I wonder, you know, it's

51:37

kind of Gosh. Yeah.

51:39

I'll give it maybe 20 years. 20 years.

51:42

Yeah. Yeah. We used to have this thing

51:44

called privacy, high streets, childhood,

51:47

all gone. Um, you've said that um what

51:51

OpenAI did in the last few years is they

51:52

started blowing up the amount of data

51:54

and the size of the computers that need

51:56

to do this training, in regards to

51:59

deep learning.

52:02

Give me a sense of the scale. We've

52:05

talked a little bit about the data

52:06

centers, but how much energy, land,

52:08

water is being used to power OpenAI

52:11

just specifically as one company. Yeah.

52:13

To power OpenAI. That's really hard. Um

52:15

because they don't actually tell us

52:17

this. So we only have figures for the

52:21

industry at large and the amount of data

52:23

centers. So it's not in their annual

52:24

reports for instance. No. Well, they

52:26

don't have annual reports because

52:27

they're not a public company. Of course.

52:29

Yeah. Huh. So that's, you know, one of

52:32

the ways that. And actually, it

52:35

doesn't matter if they're a public

52:37

company because Google and Microsoft,

52:40

they do have annual reports where they

52:42

say how much capital they've spent on

52:46

data center construction. They do not

52:48

break down how much of those data

52:49

centers are being used for AI. They also

52:52

have sustainability reports where they

52:54

talk about the water and carbon and

52:56

things like that, but they do not break

52:59

down how much of that is coming from AI

53:00

either. And they also massage that data

53:04

a lot to make it seem better than it

53:07

actually is. But even with the

53:10

massaging, there was that story 2 years

53:13

ago, sorry, last year, 2024, where

53:17

both Google and Microsoft reported I

53:20

think it was a 30% and 50% jump in their

53:23

carbon emissions. Yeah. Largely

53:26

driven by this data center development.

53:29

Yeah. And also the context here is over

53:32

the last... It was one of the good news

53:33

stories of the last sort of 10 to 15

53:35

years is that CO2 emissions per capita

53:37

in the US have kind of plateaued, right,

53:40

across the west had kind of plateaued

53:42

and actually in the UK energy

53:43

consumption dropped I mean we stopped

53:46

making things but still

53:48

you know everything's made in East Asia

53:50

now. But no, it was kind of a

53:52

good story, and I kind of bought it, right?

53:54

I thought that, you know, we'd

53:56

kind of plateaued; obviously the global

53:58

south would consume more energy. But we

54:00

are as well. Um,

54:04

should we look at these companies as

54:05

kind of analogous to the East India

54:07

Company of the 19th century? That is the

54:09

analogy that I have increasingly started

54:11

using, especially with the Trump

54:12

administration in power because the

54:15

British East India Company very much was

54:18

a corporate empire and started off not

54:23

very imperial. They just started off as

54:25

a company, a very small company based in

54:27

London. And of course through economic

54:31

trade agreements with India gained

54:33

significant economic power, political

54:36

power and eventually became the apex

54:38

predator in that ecosystem and that's

54:40

when they started being very imperial in

54:41

nature and they were the entire time

54:45

abetted by the British Empire the nation

54:49

state empire. So you have a corporate

54:51

empire, you have a nation state empire

54:53

and I literally see that dynamic playing

54:56

out now where the US government is also

55:00

in its empire era. The Trump

55:02

administration has quite literally used

55:05

words to suggest that he wants to expand

55:09

and fortify the American empire and he

55:12

sees these corporate empires like OpenAI

55:15

as his empire building assets. And so I

55:20

think he is probably seeing it in the

55:22

same way that the British crown saw the

55:24

British East India Company of let's just

55:26

let this company acquire all these

55:28

resources, do all these things and then

55:30

eventually we'll nationalize the company

55:32

and then India formally becomes a colony

55:34

of the British Empire. So Trump whatever

55:36

the modern-day equivalent

55:38

would be of nationalizing these

55:40

companies is his endgame. Like, he is

55:45

helping them strike all these deals and

55:48

installing all this American hardware

55:49

and software all around the world with

55:52

the hope that then those become national

55:54

assets and then you know there was

55:57

actually just a recent op-ed in

55:58

the Financial Times from Marietje Schaake, one of

56:01

the former EU parliamentarians, who

56:04

pointed out like isn't it so convenient

56:07

for the US to get all of this American

56:10

infrastructure installed everywhere

56:12

around the world so that the US

56:13

government could literally turn it off

56:14

at any time. I mean, if you want to talk

56:17

about empire building, there's that. But

56:20

at the same time, these corporate

56:22

empires are also trying to use the

56:26

American empire as an asset to their

56:29

empire building ambitions. So there's a

56:32

very tenuous alliance between Silicon

56:34

Valley and Washington right now in that

56:35

each one is trying to use the other and

56:38

ultimately trying to dominate the other.

56:40

And there's a growing popularity in

56:41

Silicon Valley of this idea of a

56:43

politics of exit. This idea that

56:45

democracy doesn't work anymore. We need

56:47

to find other ways of organizing

56:48

ourselves in society. And maybe the best

56:51

way of organizing ourselves is actually

56:53

a series of networked companies with CEOs at

56:56

the top. So I don't ultimately know

57:00

who's going to win like the nation state

57:01

empire or the corporate empire. But

57:03

either version is bad because all of the

57:06

people in power now, both the business

57:09

executives and the politicians, do not

57:11

actually care at all about preserving

57:14

democracy. I mean the analogy of India

57:17

is really interesting. So I think I

57:18

might have my dates wrong. Um East India

57:20

Company is running things until 1857.

57:23

You have the Indian mutiny, basically an

57:26

uprising against the East India Company

57:27

and then of course that commercial

57:29

endeavor has to be underpinned by the

57:31

organized violence of the British

57:33

imperial state. Um, and it

57:37

does feel like that could be the next

57:38

step of what happens with regards to US

57:41

interests overseas. I suppose one retort

57:44

would be well hold on it sounds kind of

57:46

good. I'm a socialist. I kind of

57:48

like the idea of SpaceX being

57:50

nationalized. I kind of like the idea

57:51

of, you know, the federal government

57:53

having a 51% stake in OpenAI and Tesla

57:57

and Meta. What would you say to that? I

58:01

don't necessarily know if my critique is

58:02

of the nationalization of the company

58:04

it's more like: why are they nationalizing

58:06

these companies, and to what end? You know,

58:09

because of this

58:12

endgame mentality of let's just let

58:14

these companies run rampant around the

58:16

world so that ultimately whatever their

58:17

assets are become our assets is leading

58:20

the Trump administration to have a

58:22

completely hands-off approach to AI

58:25

regulation. They quite literally

58:27

proposed the Big Beautiful Bill, which

58:29

passed the House and is now going up to

58:31

the Senate with a clause that would, if

58:33

implemented, put a 10-year moratorium on

58:37

AI regulation at the state level, which

58:38

is usually

58:41

where regulation, sensible regulation

58:43

happens in the US. So they're doing all

58:45

of these actions now with wide-ranging

58:51

repercussions that will be very

58:53

difficult to unwind in the name of this

58:57

idea that maybe if they just allow these

59:00

companies to act with total impunity

59:02

that it will ultimately benefit the

59:04

nation state. How do people like Sam

59:06

Altman look at the rest of the world

59:07

outside the US? These kind of tech

59:10

leaders and how do they look at Little

59:11

Britain and Italy and how do they look

59:14

at us? What do they think about us? You

59:16

know, you've been inside their

59:17

minds.

59:19

Yeah, I mean, they see them as

59:22

resources. They see different

59:24

territories as different types of

59:25

resources, which I mean is what older

59:30

empires did. You know, they would look

59:31

at a map and just draw out the resources

59:33

that they could acquire in each

59:34

geography.

59:36

We're going to go here and acquire

59:38

the labor. We're going to go here and

59:39

acquire the lands. We're going to go

59:40

here and acquire the minerals. I mean,

59:41

that's literally how they talk. Like

59:45

when I was talking with some OpenAI

59:47

researchers about their data center

59:48

expansion, you know, there was this one

59:50

OpenAI employee who said, "We're running

59:53

out of land and water." and he was just

59:55

saying, "Yeah, we're just like trying to

59:58

we're just trying to look at the whole

60:00

world and see where else we can place

60:02

these things. In what other

60:04

geographies can we find all the

60:07

conditions that we need to build more

60:08

data centers? Land without earthquakes,

60:11

without floods, without tornadoes,

60:12

hurricanes, all these natural disasters

60:15

and can deliver massive amounts of

60:17

energy to a single point and can cool

60:20

the systems." And they they are they

60:22

they're looking at that level of

60:24

abstraction

60:26

to what are the different pieces of

60:29

territory and resources that we need to

60:31

acquire and that includes other parts of

60:34

the west. Yeah. That's not just the

60:37

global south. No, it includes other

60:38

parts of the west as well. Yeah. So

60:40

there has been rapid data center

60:41

expansion in rural communities in both

60:43

the US and the UK, and it always

60:47

ends up in economically vulnerable

60:49

communities because those are the

60:50

communities that often actually opt in

60:52

to the data center development initially

60:54

because they are not informed about what

60:57

it will ultimately cost them and for how

60:59

long. And so I spoke with this one

61:02

Arizona legislator who said, "I didn't

61:06

know it had to use fresh water." And for

61:08

the UK audience, Arizona is a desert

61:11

territory. There's a

61:13

very very stringent budget on

61:15

freshwater. And after that legislator

61:20

found out, she was like, I would have

61:22

never voted for having this data center

61:24

in. But the problem is that there are so

61:26

few independent experts for these

61:29

legislators, city council members to

61:31

consult that the only people that they

61:34

rely on for the information about what

61:37

the impact of this is going to be are

61:38

the companies. And all the companies

61:40

ever say is we're going to invest

61:42

millions of dollars.

61:44

We're going to create a bunch of

61:46

construction jobs up front and it's

61:48

going to be great for your economy.

61:50

Yeah. I mean, that's all we hear about

61:52

data centers in this country. And it's

61:53

a great top line for the

61:55

chancellor and the prime minister

61:56

because they can say tens of billions of

61:58

pounds worth of investment. Okay. But in

62:00

terms of long-term jobs, how many? And

62:02

also, by the way, for that rural

62:03

community in God knows where, you know,

62:05

the northeast of England or whatever.

62:07

Yeah. You're not telling them that

62:08

actually they can't use their hose pipes

62:09

for 3 months a year because all the

62:11

water is going to that local data

62:12

center. Exactly. And

62:15

it's quite extraordinary. And

62:16

the most scary thing about all of it is

62:18

in the UK, at least, the politicians don't

62:20

know any of that. I sincerely don't

62:22

think the chancellor knows any of that.

62:24

Uh, and there's no real... I mean, even if

62:26

you use the prism of colonialism,

62:28

imperialism with regards to exploitative

62:30

economic relations between the United

62:31

States and other parts of the world,

62:33

they think you're a Trotskyist, right?

62:35

That's the crazy thing. They

62:37

can't even look after their own people

62:38

because if looking after your own people

62:40

boils down to being too left-wing... Well, I

62:42

think part of it is also that they don't

62:43

really realize that it's literally

62:44

happening in the UK. So to

62:48

connect it to the UK, data center

62:50

development along the M4 corridor has

62:53

literally already led to a ban on

62:56

construction of new housing in certain

62:58

communities that desperately need more

63:00

affordable housing. And it's because you

63:03

cannot build new housing when you cannot

63:05

guarantee deliveries of fresh water or

63:07

electricity to that housing. And it was

63:10

the massive electricity

63:12

consumption of the data centers being

63:14

built in that corridor that led to that

63:16

ban. That's nuts. I mean, that's the

63:18

most valuable real estate for housing in

63:20

the country, the M4. Yeah. And do you

63:23

think UK politicians are aware of

63:25

that contradiction or is that just

63:26

I mean, you know, I don't know if

63:29

they are aware. Maybe they

63:33

don't have awareness or maybe they are

63:35

aware and they're also thinking of other

63:37

tradeoffs. I mean, now in the UK and

63:39

in the EU at large there's just this

63:42

huge conversation around data

63:44

sovereignty and of course technology

63:46

sovereignty there's this whole concept

63:48

of developing the EU stack and why is it

63:52

that we don't have any of our tech

63:54

giants why don't we have any of this

63:55

infrastructure. Um, and here, Starmer

63:58

just said this week, during London Tech

63:59

Week: we want to be AI creators, not AI

64:02

consumers. So I think in their minds

64:06

maybe this is a viable trade-off: we

64:09

skimp a little bit on housing for the

64:12

ability to have more indigenous

64:15

innovation. But I think the thing that

64:17

is often left out of that conversation

64:19

is this is a false trade-off. People

64:22

think that you need colossal data

64:24

centers to build AI systems. You

64:28

actually do not. This is specifically

64:31

the approach that OpenAI decided to

64:34

take. But actually, before OpenAI

64:36

started building large language models

64:38

and generative AI systems at these

64:40

colossal scales, the trend within the AI

64:43

research community was going the

64:45

opposite direction towards tiny AI

64:47

systems. And there was all this really

64:49

interesting research looking into how

64:52

small your data sets could be to create

64:54

powerful AI models and how little

64:56

computational resources you needed to

64:58

create powerful AI models. So there

65:00

were interesting papers that I

65:01

wrote about where you could have a

65:04

couple hundred images to create highly

65:07

performant AI systems or uh you could

65:09

have AI systems trained on your mobile

65:12

device. That's a single

65:15

computer chip running on your

65:17

mobile device. And OpenAI took an

65:19

approach that is now using hundreds of

65:21

thousands of computer chips to train a

65:23

single system. And those hundreds of

65:26

thousands of computer chips now are

65:27

consuming, you know, city loads of

65:30

energy. And so if we divorced the

65:33

concept of AI progress from this scaling

65:37

paradigm, you would realize then you can

65:40

have housing and you can have AI

65:43

innovation.
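
To make the "tiny AI" point concrete: here is a minimal sketch, assuming Python with scikit-learn installed, of training a model on just a couple hundred labeled images on a single CPU. The digits dataset and logistic regression are illustrative stand-ins chosen for brevity, not anything from the book.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 8x8 grayscale digit images; keep only 200 training examples to mimic
# the small-data regime the research described above was exploring.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=200, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy with 200 training images: {clf.score(X_test, y_test):.2f}")

This trains in well under a second, with no data center involved; the contrast with systems trained on hundreds of thousands of chips is the point being made here.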

65:44

But once again, there's not a lot of

65:47

independent experts that are actually

65:50

saying these things. Most AI experts

65:53

today are employed by these companies.

65:58

And this is basically the equivalent of

66:00

if most climate scientists were being

66:02

bankrolled by oil and gas companies.

66:04

Like they would tell you things that are

66:07

not in any sense of the word

66:08

scientifically grounded, but just good

66:10

for the company. I interviewed a great

66:12

guy um twice actually now, a guy called

66:14

Angus Hanton, who's really just on it

66:16

with regards to

66:18

the increasingly exploitative nature of

66:21

the United States' economic

66:24

relations with the UK. Just fascinating.

66:26

Fascinating book, man, and I just

66:31

don't think it's cut through to our

66:32

politicians here how bad it's getting

66:33

and you're saying about AI consumers or

66:35

or creators. I mean, ultimately you're

66:39

talking about Meta, you're talking about

66:41

Alphabet, you're talking about xAI, you're

66:42

talking about OpenAI. We are consumers,

66:44

we are dependent. It's a colonial

66:46

exploitative relationship with regards

66:47

to big tech, and has been for a really long

66:50

time. Our smartest people, which the

66:52

taxpayer trains here, go to the US. I

66:55

think one of the top people at Slack is

66:56

a UK national. Demis, you know, DeepMind,

67:01

now working under the, you know, the

67:03

sort of umbrella of Alphabet. And

67:08

yeah, it just

67:09

doesn't make sense for me with regards

67:11

to that formulation. They simply

67:14

don't get it, you know. I came here

67:17

using my Mastercard

67:19

millions of Brits use Apple Pay and

67:21

Google Pay and Mastercard and Visa and

67:23

every time we do, 0.1%, 0.2%, 2% crosses the

67:25

Atlantic, and it just goes over

67:28

the heads of um our political class

67:30

which is very unnerving. In regards to

67:32

the efficiency of these um smaller

67:35

systems, where does DeepSeek

67:37

fit in all of this? Because of course

67:39

the scaling laws at the heart of OpenAI,

67:42

which is you get to AGI by more compute,

67:45

more parameters, more data is kind of

67:48

untethered a bit by the arrival of

67:50

DeepSeek. Yes. DeepSeek is such an

67:53

interesting and complicated case because

67:57

basically, it's a

68:00

Chinese AI model that was created by

68:02

this company High-Flyer, and they were

68:05

able to create a model that essentially

68:07

matched and even exceeded some

68:09

performance metrics of American models

68:11

being developed by OpenAI and Anthropic

68:14

with orders of magnitude less

68:17

computational resources, less money.

68:20

That said, it's not necessarily

68:24

perfect. I don't think the

68:27

world should suddenly start using Deep

68:28

Seek and saying DeepSeek solves all these

68:30

problems because it's still engaged in a

68:31

lot of data privacy problems, copyright

68:34

exploitation, things like that. Um, and

68:37

some people argue that ultimately they

68:40

were distilling from the models

68:44

that were first developed through the

68:46

scaling paradigm. So you first develop

68:48

some of these colossal scaling models

68:51

and then you end up making them smaller

68:53

and more efficient. So some people argue

68:55

that you actually have to first do that

68:56

scaling before you get the efficiency.
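
For readers unfamiliar with distillation, here is a minimal sketch, assuming PyTorch; the tiny teacher and student networks are placeholders, not DeepSeek's actual recipe. The core of the technique is training a small student to match a large teacher's softened output distribution.

import torch
import torch.nn.functional as F

# Placeholder networks: a larger "teacher" and a much smaller "student".
teacher = torch.nn.Sequential(
    torch.nn.Linear(32, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
).eval()
student = torch.nn.Sequential(
    torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 10)
)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens both output distributions

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened teacher and student outputs;
    # scaling by T*T keeps gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()

The design point, under the argument above: the expensive teacher must already exist before the cheap student can be distilled from it.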

68:59

But anyway, what it did show is you can

69:03

get these capabilities with

69:04

significantly less compute. And it also

69:06

showed a complete unwillingness of

69:08

American companies now that they know

69:10

that they can use these techniques to

69:12

make their models more efficient.

69:14

They're still not really doing it. Why

69:16

do they? Do they like giving their money

69:17

to Nvidia? What's the

69:20

Because

69:23

if you continue to pursue a scaling

69:25

approach and you're the only one with

69:29

all of the AI experts in the world, you

69:33

persuade people into believing this is

69:34

the only path and therefore you continue

69:37

to monopolize this technology because it

69:40

locks out anyone else from playing that

69:42

game. And also because of path dependence,

69:46

like these companies are actually not

69:48

that nimble. The way that

69:52

they organize themselves, it's

69:55

not so easy for them to just like

69:56

immediately swap to a different

69:58

approach. They end up putting in motion

70:00

all the resources, all of the training

70:02

runs, so on and so forth, over the

70:05

course of months, and then they just

70:07

have to run with it. So DeepSeek

70:09

actually wasn't the first time that this

70:10

happened. The first time that this

70:12

happened was with image generators and

70:14

stable diffusion. And stable diffusion

70:18

was specifically developed by an

70:20

academic in Europe who was really pissed

70:23

that the AI companies like OpenAI were

70:26

taking a scaling approach to image

70:28

generation. He was like, "This is

70:29

literally wholly unnecessary." And

70:31

they're spending thousands of chips, all

70:34

of this energy to produce DALL-E. And

70:38

ultimately, he ended up producing stable

70:40

diffusion with a couple hundred chips

70:42

using a new technique called latent

70:44

diffusion, hence the name stable

70:47

diffusion. And you know, arguably it was

70:49

actually an even better model than DALL-E

70:51

because users were saying that stable

70:53

diffusion had even better image quality,

70:54

better image generation, better ability

70:57

to actually control the images than

70:58

DALL-E. But even knowing that latent

71:02

diffusion existed, OpenAI continued to

71:04

develop DALL-E with these massive scaling

71:07

approaches. And it wasn't until later

71:09

that they then adopted the cheaper

71:11

version. But it was just

71:12

significantly delayed. And I was

71:14

asking OpenAI researchers, like, why? That

71:16

doesn't make any sense. Why did you do

71:17

that? And they were like, well, once you

71:18

set off on a path, it's kind of hard to

71:20

pivot. Also, Jensen Huang, the CEO

71:23

of Nvidia is really charismatic, right?

71:25

I mean, it's quite funny, cuz I'm

71:26

a Marxist. I'm just going to make

71:28

that confession. You have these big sort

71:29

of structural um understandings of how

71:32

history happens, and then you sort

71:34

of realize actually this guy's really

71:35

charismatic and this person's really

71:36

manipulative and all of a sudden the

71:38

world's hyperpower is, you know, making

71:40

these technological decisions. Okay. Uh

71:43

quite strange. Um we talked about data

71:45

centers. We talked about earth um water

71:48

energy. I want to talk also about some

71:50

of the more exploitative practices with

71:52

regards to workers in the global south.

71:54

You use one really grueling example

71:56

actually in Kenya. Can you talk about

71:58

some of the research around that? Some

72:00

of the people you met. Yeah. So I ended

72:02

up interviewing workers in Kenya who

72:04

were contracted by OpenAI to build a

72:06

content moderation filter for the

72:08

company. And at that point in the

72:10

company's history, it was starting to

72:13

think about commercialization after

72:14

coming from its nonprofit fundamental AI

72:16

research roots. And they realized if

72:18

we're going to put a text generation

72:20

model in the hands of millions of users,

72:21

it is going to be a PR crisis if it

72:23

starts spewing racist, toxic, hateful

72:26

speech. In fact, in 2016, Microsoft

72:29

infamously did exactly this. They

72:31

developed a chatbot named Tay. They put

72:34

it online without any content moderation

72:36

and then within hours it started saying

72:38

awful things and then they had to take

72:40

it offline and to this day as evidenced

72:43

by me bringing it up it's still brought

72:45

up as a horrible case study in corporate

72:48

mismanagement. And so OpenAI thought, we

72:51

don't want to do that. We're going to

72:53

create a filter that wraps around our

72:56

models so that even if the models start

72:59

generating this stuff it never reaches

73:01

the user because the filter then blocks

73:03

it. In order to build that filter, what

73:06

the Kenyan workers had to do was wade

73:09

through reams of the worst text on the

73:12

internet as well as AI generated text on

73:14

the internet where OpenAI was prompting

73:16

its models to imagine the worst text on

73:20

the internet. And the workers then had

73:23

to go through all of this and put into a

73:26

detailed taxonomy, is this hate speech,

73:28

is this harassment, is this violent

73:30

content, is this sexual content? And the

73:32

degree of hate speech of violence of

73:34

sexual content. So they were

73:36

asking workers to say does it involve

73:38

sexual abuse? Does it involve sexual

73:39

abuse of children? So on and so forth.

73:42

And to this day I believe if you look at

73:44

OpenAI's content moderation filter

73:46

documentation, it actually lists all of

73:48

those categories. And this is one of the

73:51

things that it offers to clients of

73:53

their models, business clients of their

73:55

models that you can toggle on and off

73:57

each of these filters. So that's why

73:58

they had to put this into that taxonomy.
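
To picture what such a toggleable taxonomy might look like in code, here is a rough, illustrative sketch in Python; the category names follow the ones just described, but the structure and function are hypothetical, not OpenAI's actual implementation.

from dataclasses import dataclass, field

# Categories of the kind the annotation taxonomy above covers, including a
# severity sub-category; an illustrative list, not an official one.
CATEGORIES = ["hate", "harassment", "violence", "sexual", "sexual/minors"]

@dataclass
class ModerationConfig:
    # Business clients toggle individual filters; default: all enabled.
    enabled: dict = field(default_factory=lambda: {c: True for c in CATEGORIES})

def should_block(scores: dict, config: ModerationConfig,
                 threshold: float = 0.5) -> bool:
    """Block model output if any enabled category's score crosses threshold.
    `scores` would come from a classifier trained on the annotated data."""
    return any(config.enabled.get(cat, False) and score >= threshold
               for cat, score in scores.items())

config = ModerationConfig()
print(should_block({"hate": 0.1, "sexual": 0.7}, config))  # True -> filtered

The labeled examples the Kenyan workers produced are, in this framing, the training data for the classifier that supplies those scores.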

74:02

The workers ended up suffering very many

74:05

of the same symptoms of content

74:07

moderators of the social media era.

74:09

Absolutely traumatized by the work,

74:12

completely changed their personalities,

74:14

left them with PTSD. And I highlight the

74:17

story of this man Mophat, who is one of the

74:20

workers that I interviewed who showed to

74:23

me that it's not just individuals that

74:25

break down. It's their families and

74:27

communities because there are people who

74:29

rely on these individuals. And so Mophat

74:32

was on the sexual content team. His

74:34

personality totally changed as he was

74:36

reading child sexual abuse content every day.

74:38

And when he came home, he stopped

74:41

playing with his stepdaughter. He

74:42

stopped being intimate with his wife.

74:44

And he also couldn't explain to them why

74:47

he was changing because he didn't know

74:49

how to say to them, "I read sex content

74:51

all day. That doesn't sound like a real

74:53

job. That sounds like a very shameful

74:55

job." ChatGPT hadn't come out yet. So

74:57

there was no conception of what does

74:59

that even mean? And so one day his wife

75:02

asks him for fish for dinner. He goes

75:04

out, buys three fish, one for him, one

75:06

for her, one for the stepdaughter. And

75:08

by the time he comes home, all of their

75:10

bags are packed and they're completely

75:12

gone. And she texts him, "I don't know

75:13

the man you've become anymore, and I'm

75:16

never coming back." You say that's the

75:17

case with regards to text. Are people

75:18

also having to engage with images as

75:20

well? I mean, that was more of a social

75:22

media thing. Is that here too? Yeah,

75:24

there were workers. So after this,

75:26

they contracted these

75:28

Kenyan workers. That contract actually

75:31

was cancelled because there was a bunch

75:33

of scrutiny on that company,

75:36

the third-party company that they

75:37

were contracting the workers through, and a

75:40

huge scandal. Um, this is Sama, right? Sama,

75:43

yeah. And there was a huge scandal around

75:45

Sama, and then OpenAI ended up shifting to

75:49

other contractors who were then involved

75:51

in moderating images

75:53

And were they remunerated for the kind

75:54

of work they were doing? Quite well, or...

75:56

For the Kenyan workers, they were paid a

75:58

few dollars an hour, right? Yeah. And

76:01

then on the other side of the

76:04

Atlantic, you talk about people in South

76:06

America um doing effectively, you know,

76:08

Mechanical Turk piecework for these

76:11

companies as well. Can you talk about

76:12

that a little bit? Yeah, so generative

76:14

AI is not the only thing that leads to

76:19

data annotation. This has actually been

76:20

part of the AI industry for a very long

76:21

time. And so I ended up years ago

76:25

interviewing this woman in Colombia who

76:27

was a Venezuelan refugee about the

76:30

specific thing that happened to her

76:32

country

76:34

in the global AI supply chain. So when

76:39

in 2016

76:41

when the AI industry first started

76:44

actually looking into the development of

76:46

self-driving cars, there was a surge in

76:49

demand for highly educated workers to do

76:53

data annotation labeling for helping

76:54

self-driving cars navigate the road. You

76:56

have to show self-driving cars, this is

76:58

a car, this is a tree, this is a bike,

77:00

this is a pedestrian. This is how you

77:01

avoid all of them. These are the lane

77:03

markings. This is what the lane markings

77:04

mean. And there are humans that do that.

77:06

And it just so happened in 2016 when

77:10

this demand was rising that Venezuela as

77:13

a country was dealing with the worst

77:17

peacetime economic crisis in 50 years.

77:21

So the economy bottomed out. A huge

77:25

population of highly educated workers

77:28

with great access to internet suddenly

77:31

were desperate to work at any price. And

77:34

these became the three conditions that I

77:36

call the crisis playbook in my book that

77:40

companies started using to then scout

77:43

out more workers that were extremely

77:45

cheap to hire for the AI industry.

77:48

And so the woman that I met in Colombia,

77:50

she was working under a

77:55

level of exploitation that was not based

77:56

on the content that she was looking at.

77:59

She was labeling self-driving cars and

78:01

labeling, you know, retail platforms and

78:03

things like that. The exploitation was

78:05

structural to her job in that she was

78:07

logging into a platform every day and

78:10

looking at a queue that automatically

78:12

populated with tasks that were being

78:13

sent to her from Global North companies

78:16

and most of the time the tasks didn't

78:19

appear and when they did she had to

78:21

compete with other workers to claim the

78:24

task first in order to do it at all. And

78:28

because there were so many Venezuelans

78:30

in crisis and so many of them were

78:32

finding out about data annotation

78:33

platforms, in the end there were more and

78:37

more and more workers competing for

78:38

smaller and smaller volumes of tasks.

78:41

And so these tasks would come online and

78:44

then disappear within seconds. And so

78:46

one day she was out on a walk when a

78:49

task appeared in her queue and she

78:51

sprinted to her apartment to try and

78:53

claim the task before it went away. But

78:55

by the time she got back it was too

78:56

late. And after that she was like I

78:58

never went on a walk during the weekday

79:00

again. And on the weekends which she

79:02

discovered it is less likely for

79:05

companies to post tasks. She would only

79:08

allow herself a 30 minute walk break

79:11

because she was too afraid of that

79:13

happening again. And did she

79:15

detail about how that gave her sort of

79:16

anxiety or insomnia or mental health

79:19

kind of overheads? That

79:22

sounds like an insane way to

79:24

live. um

79:26

it completely controlled her life. She

79:28

didn't tell me about whether or not it

79:30

gave her insomnia, but it completely

79:31

controlled the rhythms of her life in

79:33

that she had this plugin that she

79:36

downloaded that would sound an alarm

79:38

every time a task appeared so that she

79:40

could, you know, cook or clean or

79:41

whatever without literally just looking

79:43

at the laptop the whole day. And she

79:45

would turn it on to max volume in the

79:47

middle of the night because sometimes

79:48

tasks would arrive in the middle of the

79:49

night and if the alarm rang she would

79:51

wake up, sprint to her computer, claim

79:53

the task and then start tasking at like

79:56

3:00 a.m. in the morning. Um, and she

79:59

had chronic illness. Um, one of the

80:02

reasons why she was tethered to her

80:05

apartment doing this online work in the

80:06

first place was not just because she was

80:08

a refugee, but also because she had

80:11

severe diabetes. And it got to the point

80:15

where she ended up in the hospital and

80:17

was completely blind for a period of

80:20

time. And the doctor said that if you

80:21

had not come to the hospital when you

80:24

did, you would have died. And so she was

80:28

tethered to her home because she had to

80:30

inject herself with insulin like five

80:32

times a day. And it was this really

80:34

complicated regime that didn't allow her

80:35

to commute to a regular office, have a

80:37

regular job. So she was doing all this

80:40

extremely disruptive, dysregulating

80:43

work on top of just trying to manage

80:46

extremely severe diabetes. I mean, it's

80:49

extraordinary you've managed to

80:51

unveil those stories. I mean,

80:53

that's why the book is so interesting,

80:54

fascinating for me. That's why it's got

80:55

the plaudits it's got: that you're, you

80:57

know, you're speaking to people who are

80:58

on first-name terms with Sam Altman, then

80:59

you're talking to Venezuelan refugees in

81:02

Colombia. Um, and it's really important

81:04

to say that this work is being done for

81:07

multi-trillion dollar companies. Yes.

81:09

That's the other side of it, right?

81:10

You're seeing Elon Musk worth 300

81:12

billion plus dollars and then there are

81:14

people... That's where the value is being

81:16

generated. Yeah. Exactly. And that's

81:18

the reason why I really wanted to

81:20

highlight those stories is because

81:21

that's where you really see the logic of

81:23

Empire. There is no moral justification

81:27

for why those workers whose contribution

81:30

is critical to the functioning of these

81:32

technologies and critical to the

81:34

popularity of products like ChatGPT are

81:38

paid pennies when the people working

81:41

within the companies can easily get

81:44

million-dollar compensation packages. The

81:47

only justification is an ideological one

81:50

which is that there are some people born

81:52

into this world superior and others who

81:55

are inferior and the superior people

81:57

have a right to subjugate the inferior

81:59

ones. My last question: what does the US

82:03

public do about big tech if it wants to

82:05

take on some of these issues: income

82:07

inequality, regional inequality, global

82:10

imperial overreach, etc.? A few proposals

82:14

which, you know, somebody can execute

82:15

on. What would you suggest? Yeah, I

82:17

wouldn't even say it's just the US

82:19

public. I mean, anyone in the world can

82:21

do something about it. And one of the

82:22

remarkable things for me in reporting

82:25

stories is people who felt like they had

82:27

the least amount of agency in the world

82:28

were actually the ones that put up the

82:31

most aggressive fights and actually

82:32

started gaining ground on these

82:35

companies, in taking resources from

82:37

them. So, I talk about Chilean water

82:39

activists who pushed back against a

82:41

Google data center project for so long

82:43

that they've stalled that project, now

82:45

for 5 years and they forced Google to

82:49

come to the table and the Chilean

82:51

government to come to the table and now

82:52

these residents are invited to

82:55

comment every time there's a data center

82:58

development proposal which they then

83:00

said is not the end of the

83:02

fight; like, they still have to be

83:03

vigilant and at any moment if they blink

83:06

something could happen. But anyone in

83:09

the world I think has an active role to

83:11

play in shaping the AI development

83:13

trajectory. And the way that I think

83:15

about it is as the full supply chain of AI

83:17

development. You have a bunch of

83:19

resources that these companies need to

83:21

develop their technologies, data, land,

83:24

energy, water, and then you have a bunch

83:26

of spaces that these companies need

83:28

access to, to deploy their technologies.

83:29

Schools, hospitals, offices, government

83:31

agencies. All these resources in all

83:34

these spaces are actually places of

83:37

democratic contestation. They're

83:38

collectively owned. They're publicly

83:41

owned. So, we're already seeing artists

83:43

and writers that are suing these

83:45

companies saying, "No, you cannot take

83:47

our intellectual property." And that is

83:49

them reclaiming ownership over a

83:51

critical resource that these companies

83:53

need. We're seeing people start

83:56

exercising their data privacy rights. I

83:58

mean, one of my favorite things about

84:00

visiting the UK and EU as an American

84:02

that has no federal data privacy law to

84:04

protect me is to reject those cookies

84:07

on every single web page that I encounter.

84:10

That is me reclaiming ownership over my

84:12

data and not allowing those companies to

84:14

then feed that into their models. We're

84:17

seeing just like the Chilean water

84:18

activists, hundreds of communities now

84:20

rising up and pushing back against data

84:22

center development. We're seeing

84:24

teachers and students escalate a

84:27

public debate around do we actually want

84:30

AI in our schools and if so under what

84:33

terms. And many schools are now setting

84:35

up governance committees to

84:38

determine what their AI policy is so

84:40

that ultimately AI can facilitate more

84:43

curiosity and more critical thinking

84:45

instead of just eroding it all away. The

84:47

same thing I'm sure wherever your

84:50

audience is sitting right now. If they

84:52

work for a company, that company is for

84:54

sure discussing their AI policy. Put

84:57

yourself on that committee for drafting

84:59

that policy. Make sure that all the

85:01

stakeholders in that office are at that

85:04

table actively discussing when and under

85:07

what conditions you would accept AI and

85:10

from which vendors as well because again

85:13

not all AI models are created equal. So

85:16

do your research on which AI

85:17

technologies you want to use and which

85:20

companies are providing them. And I

85:22

think if everyone can actually

85:27

actively play a role in every single

85:30

part of the supply chain that they

85:31

interface with, which is quite a lot.

85:33

Most people interface with the data

85:35

part. Many people will now have data

85:37

centers popping up in a

85:39

community near them. Everyone goes to

85:42

school at some point. Everyone works in

85:44

some kind of office or community at some

85:46

point. If we do all of this pushback

85:49

a hundred-thousand-fold, and democratically

85:52

contest every stage of this AI

85:55

development and deployment pipeline, I

85:58

am very optimistic that we will reverse

86:01

the imperial conquest of these companies

86:04

and move towards a much more broadly

86:07

beneficial trajectory for AI

86:09

development. Yeah, we've had

86:12

big tech social media for the last 15, 20

86:15

years and I suppose the question is is

86:17

the same set of patterns going to apply

86:20

to this stuff. And I think when you

86:22

speak to someone like Jonathan Haidt, when

86:23

he talks about um young people and their

86:25

consumption now of social media and

86:28

mobile telephones etc. his real worry is

86:30

AI. Yeah. And if there is this laissez-faire

86:33

attitude from policy makers and also

86:34

let's be honest, from civil service and civil

86:36

society that there was over the last 15

86:38

20 years I mean he's terrified about the

86:41

implications so it's interesting to see

86:43

that there's congruence between what

86:45

you're saying and what Jonathan Haidt is saying.

86:47

Can I ask you one more question: have you

86:49

ever read Dune by Frank Herbert? I've

86:52

watched the movie and it's sitting on my

86:54

bedside table to actually read the

86:56

original and I'm so glad that you asked

86:59

me this because this is an analogy that

87:01

I use all the time now to describe the

87:04

AI world. Yeah. The Butlerian Jihad.

87:08

So yeah. So one of the things that was

87:10

so shocking to me because we already

87:12

talked about this, like, quasi-religious

87:14

fervor within the AI community and I was

87:17

interviewing people, and one of the

87:19

people, their voice was quivering

87:22

when they were telling me about the

87:25

profound cataclysmic changes on the

87:27

horizon. Like these are very visceral

87:29

reactions. These are true believers. And

87:33

Dune strikes me as a really good analogy

87:35

for understanding this ecosystem

87:39

because Paul Atreides' mom in the story,

87:43

she creates this myth to help position

87:46

Paul as a supreme leader and to

87:50

ultimately control the population. And

87:52

the people who encounter this myth, they

87:55

don't know that it's a creation. So,

87:56

they're just true believers. And at some

87:59

point, Paul gets so wrapped up in his

88:02

own mythology that he starts to forget

88:04

that it was originally a creation. And

88:08

this is essentially what I felt like I

88:11

was seeing with my interviews of

88:15

people in the AI world. Because I

88:18

had the opportunity to start

88:20

interviewing people starting all the way

88:22

back in 2019. You know, I interviewed

88:24

some people both back then and for

88:27

the book, to just map out their

88:30

trajectory

88:33

and there were non-believers back then

88:35

that are true believers now. Like if

88:38

they were able to stay long enough at

88:40

that company, they all in the end become

88:43

true believers in this AGI religion. And

88:46

so there's this vortex, it's like a

88:51

black hole, an ideological black hole. I

88:53

don't know how to explain it, but people

88:56

when they swim too long in the water, it

88:59

just becomes them. So what you're saying

89:01

is, Sam Altman is the Lisan al-Gaib.

89:05

That's the character. And Paul Graham

89:08

maybe was the, you know... It would

89:09

seem like that would be

89:12

the most appropriate character to assign

89:13

to him. Yeah. Wow, this has been

89:16

fabulous. And I have to say honestly,

89:18

the book is really, really exceptional.

89:19

Empire of AI. I read it so much that the

89:21

dust jacket, I think my daughter actually

89:22

ripped it off. But anyway, uh it is a

89:25

sensational book. Sensational

89:26

journalism, fantastic journalist. We

89:28

don't have enough of those in the world.

89:30

Thank you. Um real pleasure to meet you,

89:32

Karen. Thanks so much for joining us. It

89:33

was great to meet you.

89:39

[Music]

Summary

The speaker discusses the book "Empire of AI" by Karen Hao, which offers an inside look at OpenAI and the broader AI industry. The conversation highlights the origins of AI as a term, its current poorly defined nature, and the distinction between AI, machine learning, and deep learning. A significant portion of the discussion focuses on the environmental and societal costs of AI development, including massive energy and water consumption for data centers, the potential for AI to exacerbate climate change and public health crises, and the ethical implications of using public resources for private tech expansion. The book also delves into the motivations behind the AI race, exploring the ideological fervor and potential lack of clear business cases driving investment, as well as the aggressive tactics used by companies like OpenAI and its CEO, Sam Altman, to secure resources and talent. The conversation touches upon the concentration of power in a small group of individuals, the erosion of democratic processes, and the exploitative labor practices in the global AI supply chain. Finally, it explores the origins of OpenAI as a non-profit, its eventual shift to a for-profit model, and the cult-like atmosphere that can develop within the industry, drawing parallels to religious movements and the book "Dune".
