Silicon Valley Insider EXPOSES Cult-Like AI Companies | Aaron Bastani Meets Karen Hao
I went to school with a lot of the
people that now build these
technologies. I went to school with some
of the executives at OpenAI. I don't
find these figures to be towering or
magical. Like I remember when we were
walking around dorm rooms together in
our pajamas, and it instilled in me
this understanding that technology is
always a product of human choices. And
different humans will have different
blind spots. And if you give a small
group of those people too much power to
develop technologies that will affect
billions of people's lives, inevitably
that is structurally unsound.
[Music]
Artificial intelligence is the backbone
of some of the biggest companies in the
world right now. Multi-trillion dollar
companies can talk about nothing else
but AI. And of course, whenever it's
discussed in the media by politicians
and civil society, it's compared
invariably to the steam engine. It is
going to be the backbone for a new
machine age. Some people are really
optimistic about the possibilities it
will bring. They are the boosters, the
techno-optimists, the techno-utopians.
Others are doomers. They're down on AI.
AGI, artificial general intelligence,
it's never going to happen. And if it
does, well, it's going to look like The
Matrix or maybe even the Terminator and
Skynet. We don't want that, do we?
Today's guest, however, is not
speculating about the future. Instead,
they're very much immersed in the
present and indeed the recent past of
the artificial intelligence industry.
Karen Hao went to MIT. She studied
mechanical engineering. She knows the
STEM game inside out. But she made a
choice to go into journalism and media
to talk about these issues with a
fluency and a knowledge that very few
people have. Rather than speculate, what
Karen has done with this book is talk to
people in the field. 300 interviews with
260 people in the industry, 150
interviews with 90 employees of OpenAI,
both past and present. She has access to
emails, company Slack channels, the
works. This is the inside account of
OpenAI and the perils of artificial
intelligence, big tech, and big money
coming after, well, pretty much
everything. It's an amazing story told
incredibly well. I hope you enjoy this
interview. Karen Hao, welcome to
Downstream. Thank you so much for having
me, Aaron. It's a real pleasure to have
you on. Right. We say that to
everybody, every guest. Uh, but I have
to say, and this has had rave reviews,
even though I've lost the uh the dust
jacket, Empire of AI, this huge tome
you've written, 421 pages, I think, not
including the acknowledgements.
Really, really interesting book. It's
about AI, this burgeoning industry in
the United States around artificial
intelligence. That word has been in
circulation since the 1950s, I believe.
Yeah. Before we drill down into your
book, what is AI and what do people mean
by AI when they talk about it in 2025 in
Silicon Valley? You would think
this is the easiest question, but this
is always the hardest question that I
get, because artificial intelligence is
quite poorly defined.
We'll go back first to 1956 because I
feel like it helps understand a little
bit about why it's so poorly defined
today. But the term was originally
coined in 1956 by this Dartmouth
professor, assistant professor John
McCarthy. And he coined it to draw more
attention and more money to research
that he was originally doing under a
different name. And that was something
he has explicitly said a few decades
later. He said, "I invented the term
artificial intelligence to get money for
a summer study." And that kind of that
that marketing route to the phrase is
part of why it's really difficult to pin
down a specific definition today. The
other reason is because generally people
say that AI refers to the concept of
recreating human intelligence in
computers. But we also don't have a
scientific consensus around what human
intelligence is. So quite literally when
people say AI, they're referring to an
umbrella of all these different types
of technologies that appear to simulate
different human behaviors or human
tasks. Um, but it really ranges from
something like Siri on your iPhone all
the way to ChatGPT, which behind the
scenes are actually really, really
different ways of operating. They're
totally different scales in terms of the
consumption of the technologies. Um, and
of course they they often have different
use cases as well. So right now, when
OpenAI and Meta use those words, AI,
in regards to their products
specifically, what are they talking
about? Most often they are now talking
about what are called deep learning
systems. So these are systems that train
on loads of data and you have software
that can statistically compute the
patterns in that data and then that
model is used to then make decisions or
generate text or make predictions. So
most modern-day AI systems built by
companies like Meta, by OpenAI, by
Google are now these deep learning
systems. So is deep learning the same
as machine learning, the same
as neural networks? Are
these synonyms? Deep learning is a
subcategory of machine learning. Machine
learning refers to a specific branch of
AI where you build software that
calculates patterns in data. Deep
learning is when you're specifically
using neural networks to calculate those
patterns. So one of the founding
fathers of AI used
to call AI a suitcase word,
because you can put whatever you want in
the suitcase and suddenly AI means
something different. So we have this
suitcase word of AI, and then under that,
any data-driven AI techniques are called
machine learning, and then any neural
network data-driven techniques are called
deep learning. So it's the smallest
circle within this broader suitcase
word. So deep learning and neural
networks are kind of interchangeable.
Not exactly in the sense that neural
networks are referring to a piece of
software and deep learning is referring
to the process that the software is
doing, right? Yeah. Do you get upset
when politicians, so in this country we
have a prime minister called Keir Starmer,
say, we think the NHS can save 20% by
using AI applications? Do you sort of
think, my God, these
people have no idea what they're talking
about, because that is such an expansive
term? Its political
convenience is precisely that it doesn't
mean anything. It does frustrate me a
little bit. So I often use the analogy
that AI is like the word transportation.
I mean if transportation can refer to
bicycles or rockets or self-driving cars
or gas guzzling trucks, you know, like
they're all different modes of
transportation, serve different
purposes, different cost-benefit
analyses. And you would never have a
politician say we need more
transportation to mitigate climate
change. You would be like, but what kind
of transportation? What are you talking
about? Well, yeah.
We need more transportation to stimulate
the economy. I mean, maybe in that case.
It's just, yeah, there is a vagueness
around the AI discussion that is really
unproductive and I think a lot of that
leads to confusion where people think AI
equals one thing and AI equals progress
and so we should just have all of it but
actually if we were to use the
transportation analogy you know like
having more bicycles having more public
transit sounds great but if someone were
actually referring to just like using
rockets to commute from, you know, um,
Dublin to London and we were like,
everyone should get a rocket now, like
that's going to bring us more progress,
you'd be like, what are you talking
about? And that's effectively what these
companies are doing with general
intelligence. When you're giving people
tools for free with regards to
generative AI to just generate stupid
images of nonsense, that's kind of what
we're doing, right? I presume you
would take that analogy to that level.
It's like saying, "Let's use a rocket to
get from Dublin to London to Paris."
Yeah, exactly. Like, it's not fit for
the task. Um, and the extraordinary
environmental cost of flying
that rocket when you could have flown a
much more efficient plane to do the same
thing is like what are you doing, you
know? Um, and that's one of the
things that people don't really realize
about generative AI
is that the resource consumption
required to develop these models and
also use these models is quite
extraordinary and often times people are
using them for tasks that could be
achieved with different, highly efficient
AI techniques. But because we
use the sweeping term AI to mean
anything, people just think, "Oh,
yeah, right, right. I'm just going to
use ChatGPT as my one-stop-shop
solution for anything AI related." So,
right now, data centers globally, I
think, are about 3 to 3.5% of CO2
emissions. I think the data centers
for AI are a tiny fraction of that, but
obviously they're growing at an
extraordinary pace. Yeah.
Are there any numbers out there with
regards to projected CO2 emissions of
data centers globally 5, 10, 15 years
from now, or is it so recent
that we can't really speculate about the
numbers involved? There are numbers
around the energy consumption which you
could then use to try and
project carbon emissions. So
there was a McKinsey report
recently projected that based on the
current pace of data center and
supercomputer expansion for the
development and deployment of AI
technologies, we would need to add
around half to 1.2 times the amount of
energy consumed in the UK annually to
the global grid in the next 5 years.
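To put that quoted range in context, here is a rough back-of-envelope sketch of the projection. The UK's annual electricity consumption figure (roughly 300 TWh per year) is my assumption for illustration, not a number from the interview:

```python
# Back-of-envelope check of the projection quoted above: adding "half to
# 1.2 times the amount of energy consumed in the UK annually" to the
# global grid over ~5 years.
# Assumption (not from the interview): the UK consumes roughly 300 TWh
# of electricity per year.
UK_ANNUAL_TWH = 300

low = 0.5 * UK_ANNUAL_TWH    # lower bound of the quoted range
high = 1.2 * UK_ANNUAL_TWH   # upper bound of the quoted range

print(f"Projected added grid demand: {low:.0f} to {high:.0f} TWh per year")
```

On that assumption, the projection implies roughly 150 to 360 TWh of new annual demand, which is why the figure lands with such force in the exchange that follows.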
Wow. Yeah. And most of that will be
serviced by fossil fuels. This is
something that Sam Altman actually
said in front of the Senate a
couple of weeks ago. He said it will most
probably be natural gas. So he actually
picked the nicest fossil fuel. But we're
already seeing reports of coal plants
having their lives extended. They were
meant to be retired, but they're no
longer being retired explicitly to power
data center development. We're seeing
reports of Elon Musk's xAI, the giant
supercomputer that he built called
Colossus in Memphis, Tennessee. It is
being powered with around 35 unlicensed
methane gas turbines that are pumping
thousands of toxic air pollutants into
the air in that community. So this
data center acceleration is not just
accelerating the climate crisis. It also
is accelerating the public health crisis
of people's ability to access clean air
as well as clean water. So one of the
aspects that's really under-discussed
with this kind of AI development,
OpenAI's version of AI development, is
that these data centers need fresh
water for cooling, because if they used any
other kind of water, it would
corrode the equipment. It would lead to
bacterial growth. And so most often
these data centers actually use public
drinking water because when they enter
into a community that is the
infrastructure that's already laid to
deliver the fresh water to companies, to
businesses, to residents. And so one of
the things that I highlight in my book
is there are many, many communities that
already do not have sufficient
drinking water even for people. And I
went to Montevideo, Uruguay, to speak with
people about a historic level of drought
that they were experiencing where the
Montevideo government literally did not
have enough water to put into the public
drinking water supply. So they were
mixing toxic wastewater in just so
people could have something come out of
their taps when they opened them. And
for people that were too poor to buy
bottled water, that is what they were
drinking. And women were having higher
rates of miscarriages. The elderly were
having an exacerbation or inflammation
of their chronic diseases. And in the
middle of that, Google proposed to build
a data center that would use more
drinking water. This is called potable
water, right? This is potable water.
Yeah. Exactly. You can't use seawater
because of the saline aspect that you
Exactly. Exactly. And Bloomberg recently
had a story that said two-thirds of the data
centers now being built for AI
development are in fact going into water
scarce areas.
You said a moment ago about xAI's
unlicensed energy generation using
methane gas. When you say unlicensed,
what do you mean? As in the company just
decided to completely ignore existing
environmental regulations when they
installed those methane gas turbines.
And this is actually really
one of the things that I
concluded by the end of my reporting was
not only are these companies really
corporate empires, but also that if we
allow them to be unfettered in their
access to resources and unfettered in
their expansion, they will ultimately
erode democracy. Like that is the
greatest threat of their behaviors. And
what xAI is doing is a perfect example,
at the smallest level:
these companies are
entering into communities and completely
hijacking existing laws, existing
regulations, existing democratic
processes to build the infrastructure
for their expansion. And we're seeing
this hijacking of the democratic process
at every level, the smallest local
levels all the way to the international
level.
It's kind of that orthodoxy of seeking
permission after you do something.
This is business as usual for those companies;
that's part of their expansion strategy,
which we'll talk about, and we're going
to talk about the sort of global
colonial aspect as well with regards to
resource consumption and resource use. Just to
bring it back to the US again, because at
the top of this conversation I want to
offer a bit of a primer to people out
there who maybe know what AI is,
who maybe have used ChatGPT:
what are the major companies we're now
talking about in this space, particularly
in the United States of America, over the
last five years? Who are the people in
this race to AGI? Mhm. Allegedly,
um, artificial general intelligence,
something which either might be
sentient, probably not, or capable of
augmenting its own intelligence, more
plausible. Who are the major players in
that field right now? One caveat on AGI
is that it's as ill-defined as the term
AI, so I like to think of it as just a
rebranding. The entire
history of AI has just been about
rebranding; the term deep learning
was also a rebranding. So anyway, the
players: First, OpenAI, of course; they
were the ones that fired the first shot
with ChatGPT. Anthropic, a major competitor.
Google, Meta, Microsoft, the older
internet giants that are now also
racing to deploy these technologies. Safe
Superintelligence, which spun out of, uh,
an OpenAI splinter, there are many
OpenAI splinters, was
founded very recently by the former
chief scientist of OpenAI. And Thinking
Machines Lab, founded very recently by a
former chief technology officer of
OpenAI. And Amazon is now trying to get
into the game as well, and
Apple is also trying to get in the game.
So basically all the older generation
tech giants as well as a new crop of AI
players are all jostling in this space
and that's just the US, right? And
that's just the US, right? So the the
Chinese ecosystem is interesting because
they don't really use the term AGI
like that. This is a very
unique thing about the US ecosystem:
there's a quasi-religious fervor
that underpins the construction
of AI products and services, whereas in
China it's much more like, these are
businesses, we're building products that
users are going to use. So, if you're
just looking at companies that are
building chatbots that are sort of akin
to ChatGPT, then we're talking about
ByteDance, owner of TikTok. Um,
Alibaba, the equivalent of Amazon; Baidu,
the equivalent of Google; Huawei, the
equivalent of Apple; and, uh, Tencent,
the, um, what is the equivalent of
Tencent? I guess Meta is the
equivalent of Tencent. So, they're also
building on these things. And there's
similarly a crop of startups that are
moving into the generative AI space. And
in Europe, we've got the little tiddlers
like Mistral in France, you know, really
not at the races cuz we're Europe.
Um what's the business case for all
this? Because obviously you've got
massive companies
often driven by maximizing shareholder
value, multi-trillion dollar valuations.
You do these things, you invest money to
make money as a capitalist society. So
what is the business case made by
say Microsoft when they have their
shareholder meetings and they say we're
going to allocate 40, 50 billion dollars
towards building data centers and so on.
So it's interesting that you
mentioned Microsoft, because Microsoft
has recently been pulling back their
investments in data centers. They
went all in and now they're really
rapidly starting to abandon data center
projects. So to answer your question, it
is really unclear what the business case
is and Microsoft has been one of the
first companies to start acknowledging
that, and Satya Nadella has come onto
some podcasts recently where he actually
stunned some people in the industry by
being quite skeptical of whether or not
this race to AGI was productive. Um, but
one of the things that I really felt,
after reporting on what is driving the
fervor, is you can't actually fully
understand it as just a story about
money. It has to also be understood as a
story of ideology, because in the
absence of a business case, you then ask,
why are people still doing this? And the
answer is there are people who genuinely,
fervently believe, and they talk about it
as a belief, in this idea that we can
fundamentally recreate human
intelligence, and that if we can do that,
there is no other more important thing
in the world, because how
else should you be dedicating your
time other than to bring about this
civilizationally transformative
technology? And so that's part of
what drives OpenAI, what drives
Anthropic, what drives Safe
Superintelligence, these other smaller
startups. And then the bigger giants
which are more business focused and more
classic companies that actually care
about their bottom lines, they end up
getting pressured because shareholders
are seeing the enormous amounts of
investment by these startups and they're
seeing users start shifting from Google
search to using ChatGPT as search.
ChatGPT should not be used as search, but
consumers think that it is. And then
shareholders ask in Google's shareholder
meetings, what are you doing with AI?
What is your AI strategy? Why aren't you
investing in this technology? And so
then all of the other giants end up
racing in the same direction.
What does Warren Buffett make of it?
That's what I want to know. Is he sort
of like, you guys
are wasting your money? If so, he's
probably right. I have no idea. Has
he invested in AI? No, I don't
think so. He just sticks to Coke and
these sorts of things, doesn't he? I
mean, there are two rationales. So I
think one is, like you say, a
quasi-religious fervor has inflected the
investment decisions of some of the
world's most um valuable companies which
is just an extraordinary thing to even
think about. I suppose the other one is
that a lot of people in this space, as
we'll talk about in a moment, are
heavily influenced by people like Peter
Thiel. And Peter Thiel's orthodoxy is that
competition is for idiots, right? If
you're going to start a business, it has
to be a monopoly. And I can only presume
that companies like Microsoft, etc.,
although maybe that's not the best
example now given recent events, but
xAI, OpenAI, Meta, the only reason you
would invest ultimately hundreds of
billions, trillions of dollars into this
is because first mover advantage gives
you a monopoly on the most
transformational technology since the
steam engine. Yeah. I mean, that's the
only way I can make sense of it, right?
Has anybody in that space
kind of said that we want the
monopoly on AGI? We want to be the
Facebook of AGI. Well, what OpenAI
often says to investors is if you make
this seemingly fantastical bid into our
technology, you could get the biggest
returns you've ever seen in your life
because we will then be able to use your
funding to get to AGI first. So, it's
still riding on this concept
that there might be an AGI, which is
not rooted in scientific
evidence. Um, and even if we fail, we
will successfully be able to automate a
lot of human tasks to the point where we
can convince a lot of executives to hire
our software instead of a labor force.
So that in and of itself could
potentially end up generating enough
returns for you more than you've ever
seen before. So that's usually the pitch
that they make. But, you know, it is a
hugely risky bargain that these investors
are actually buying into. And, you
know, a lot of investors have a
bandwagon mentality; they aren't
necessarily doing their own analysis to
say let me do this investment. They're
just seeing everyone glom onto this
thing and they're like well I don't want
to miss out. Why don't we glom on as
well? But you know, there are some
investors that have actually recently
reached out to me to be like, one of the
most under-reported stories right
now is the amount of risk that is not
just being taken on by these VCs but is
actually being taken on by the entire
economy, because the money that these
investors are investing comes from, like,
university endowments and things like
that. So if the bubble pops, it doesn't
just pop for Silicon Valley; it
will have ripple effects across the
global economy. I mean, when you look at
the sort of e-commerce bubble in
the late 90s, okay, it was a bubble, you
know, pets.com or whatever had
these crazy valuations, but, you
know, buying and selling goods and
services offline and then taking that
online. I mean, that makes sense. That's
a plausible sort of commercial
model, but like you say, nobody's really
done that with artificial intelligence.
It does kind of feel like, you know, you
read these stories about Tulip Mania in
17th-century Holland, and it does kind
of feel very similar. Um, you mentioned
OpenAI, and we've talked about it many
times, and of course OpenAI is the
central organization in this book.
What's the big idea behind OpenAI, and
when does it start? Let's say
the end of 2015. 2015. So, it's 10
years old. What are the animating values
that give birth to OpenAI? So OpenAI
started as a nonprofit, which many people
don't realize based on the fact that
it's one of the most capitalistic if not
the most capitalistic organization in
Silicon Valley today.
But it was co-founded by Elon Musk and
Sam Altman as a bid to try and create a
fundamental AI research lab that could
develop this transformative technology
without any kind of commercial
pressures. So they positioned themselves
as the anti-Silicon Valley, the
anti-Google, because Google at the time was
the main driver of AI development. They
had developed a monopoly on some top AI
research scientists and Musk in
particular had this really great fear of
not just Google but
Google's acquisition of DeepMind,
where he was very worried that this
consolidation of some of the brightest
minds would lead to the development of
AI that would go very badly wrong. And
what he meant by very badly wrong was it
could one day develop sentience,
consciousness, go rogue, and kill all
humans on the planet. And because of
that fear, Altman and Musk then thought,
we need to do a nonprofit, not have
these profit- driven incentives. We're
going to focus on being completely open,
transparent, and also collaborative to
the point of self-sacrificing if
necessary. If another lab starts making
faster progress than us on AI and on the
quest to AGI, we will actually
just join up with them. We will
dissolve our own organization and join
up with them. And, uh, that didn't hold for
very long.
So what's their theory behind that?
Because, you know, at that point, around
2015, Google is maybe the
world's most valuable company. I don't
know. It's certainly up there. And this is
a nonprofit. Yeah. So how are they
going to achieve AGI before Google?
So initially the bottleneck that they
saw was talent, right? Google has
this monopoly on talent. We need to chip
away at that monopoly and get some of
those Google researchers to come to us
and also start acquiring PhD students
that are just coming out of uni. And
because of that I have come to speculate
this is not based on any documents that
I read or anything. I've come to
speculate that part of the reason why
they started as a nonprofit in the first
place is because it was a great
recruitment tool for getting at that
bottleneck.
They could not compete on salaries with
Google, but they could compete on a
sense of mission. And in fact, when
Altman was recruiting the chief
scientist, Ilya Sutskever, who was the
critical first acquisition of talent
that then led to many other scientists
being really interested in working for
OpenAI, he appealed to Sutskever's sense
of purpose, like, do you
want a big salary and just to work for a
for-profit company, or do you want to
take a pay cut and do something big with
your life? And it was actually for that
reason that Sutskever said, you know what,
you're right. I do want to work for a
nonprofit. And so that's how they
initially conceived of competing with
Google: we're starting a little
bit late to the game. How do we first
get a bunch of really, really smart
people to join us? Let's create this
really big sense of mission. And
I open the book with two quotes in the
epigraph, and one of them is from Sam
Altman, writing a blog post in 2013, where he
quotes someone else that says successful
people build companies, more successful
people build countries, the most
successful people build religions. And
then he reflects on this and says, it
seems to me that the most successful
founders in the world don't actually set
off to build a company. They set off to
build a religion. And it turns out
building a company is the easiest way to
do so. And so, you know, that's
2013, and then in 2015 he creates OpenAI as
a nonprofit.
It's important to say as well, Sam
Altman is not some sort of idealistic,
um, pauper, you know; he's working at Y
Combinator. He is very much ensconced
within the Silicon Valley elite. Um,
I suppose also there's tax as well,
right? If you're a nonprofit, you've got
the mission, you've also got a bunch of
tax breaks which you don't have as a for
profit. So maybe there's a very cynical
genesis there. Um, but I suppose just
reading your book and becoming more
familiar with the arguments over time,
you know, clearly the amount of compute
you have was always going to be
critical. If you believe
in the, um, neural network model, the
deep learning model, the amount of
compute you have is always going to be
critical. And it just seems implausible
that a nonprofit could ever have been
able to compete with Google, for
instance. Ever. Like, it seems
implausible, because you have to spend, as
we now see, tens of billions, hundreds of
billions of dollars on compute.
Did nobody say that? Did nobody say,
"Hey, you know, like the bottleneck
isn't just talent, actually. It's being
able to spend hundreds of billions of
dollars on these Nvidia GPUs." It's so
interesting because at the time the idea
that you needed a lot of compute was
actually
neither very popular nor one that was
seen as that scientifically rigorous.
Right? So there were many
different ideas of how to advance AI.
One was we already actually have all the
techniques that we need and we just need
to scale them. But that was considered a
very extreme opinion. And then on the
other extreme it was we don't even have
the techniques yet. And interestingly,
recently there's a New York Times story
that says why we likely won't get to AGI
anytime soon, by Cade Metz. And he cites
this stat that 75%
of the longest-standing, most respected
AI researchers actually still think, to
this day, we don't actually have the
techniques to get to AGI, if we ever
will. So we're kind of
coming full circle now and it is
starting to become unpopular again, this
idea that you can just scale your way to
so-called intelligence. But that was the
research vibe when OpenAI started: we
can actually maybe just innovate on
techniques, right? And then very quickly
because Ilya Sutskever in particular was
a scientist who anomalously did think
that scaling was possible, and because
Altman loved the idea of adding zeros to
things from his career in Silicon Valley
and because Greg Brockman the chief
technology officer also very Silicon
Valley entrepreneur liked that idea as
well then they identified why don't we
go for scale because that is going to be
the fastest way to see whether we can
beat Google.
And once they made that decision, about
less than a year in, roughly, is when they
started actually talking about it;
that's when they decided we actually
need to convert into a for-profit, because
the bottleneck has now shifted from
acquiring talent to acquiring capital.
And that is also why Elon Musk and Sam
Altman ended up having a falling out,
because when they started discussing a
for-profit conversion, Elon Musk
and Sam Altman each wanted to be the CEO
of that for-profit. And so they couldn't
agree. And originally Ilya Sutskever and Greg
Brockman chose Musk. They thought that
Musk would be the better leader of
OpenAI. But then Altman essentially, and
this is something that is a very classic
pattern in his career,
became very persuasive to Brockman, who
he had had a long-term relationship with,
about why it could actually be dangerous
to go with Musk, and, like, I would
definitely be the more responsible
leader, so on and so forth. And then
Brockman convinces Sutskever, and the two,
chief scientist and chief technology
officer, pivot their decision and they go
with Altman, and then Musk leaves in a
huff and says, I don't want to be part of
this anymore. Which has become rather
typical of the man subsequently, hasn't it?
But that is incredible,
really. So by 2016 there's a recognition
that in terms of capital investment
they're going to have to go toe-to-toe
with maybe at that point the world's
biggest company and they're a nonprofit.
Yeah. I just find it weird. But
lots of people bought the propaganda
that OpenAI was in some way open. Yeah.
What did the open stand for by the way?
The open originally stood for open
source, and in the first year of
OpenAI they really did open-source things.
They did research and then they would
put all their code online. So it
really was like they did what
they said. And then the moment they
realized, we've got to go for scale,
everything shifted. It's such an amazing
story and so emblematic of the 2010s
that you have this organization which
presents itself as effectively an
extension of activism. Yeah. You know,
ends up becoming, today, an organization
some people value at $300 billion. Yeah. Um, and
it's doing all these terrible things,
which we're going to talk about. Sam
Altman specifically,
who is he? What's his background? How
does this guy who nobody's heard of
become the CEO of a company which today
is, you know, almost more
valuable than any company in Europe, for
instance. Yeah. Altman has spent his
entire career in Silicon Valley. He was
first a founder, a startup founder
himself, and he was part of the first
batch of companies that joined Y
Combinator, today one of the
most prestigious startup accelerators in
Silicon Valley. But at the time,
he was in the very first class, and no one
really knew what YC was. He did that for
seven years. He was running a company
called Looped, which was a mobile-based
social media platform, effectively a
Foursquare competitor, but which
actually started earlier than
Foursquare. It didn't do very well. It
was sold off for parts. But what he did do very well during that time was ingratiate himself with very powerful networks in Silicon Valley. So one of the first and longest-standing mentors that he ended up having throughout his career is Paul Graham, the founder of Y Combinator, who then plucked Sam Altman to be his successor. And Sam Altman then, at a very young age, became president of YC. And then he ended up doing that for
around 5 years. And during his tenure at
YC, he dramatically expanded YC's
portfolio of companies. He started
investing not just in software companies but also pushing into quantum, into self-driving cars, into fusion, really going for those hard-tech engineering challenges. And if you look at how he
ended up then as a CEO of OpenAI,
I think that he basically was trying to
figure out what is going to be the next
big technology wave. Let me test out all of these different things, position myself as involved in all of these different things. Um, so in addition to all his investments, he started cultivating this idea of, AI also seems like maybe it'll be big, let me start working on an idea for a fundamental AI research lab. That becomes OpenAI. And once OpenAI started being the fastest one taking off, then Altman hops over and becomes CEO.
He hops over. So how does that happen? Where does he come from? Cuz, like you say, originally it's got people... who's there first, him or Ilya Sutskever?
Technically Altman recruited Sutskever, but Altman was only a chairman. He didn't take an executive role at OpenAI even though he founded the company, right? And similarly with Musk: Musk didn't have an executive role, he was just a co-chairman. So it was just the two of them that were chairmen of the board, and Ilya Sutskever and Greg Brockman were the main executives that were actually running the company day-to-day in the beginning.
I mean, I have to say, reading the book, Sam Altman comes across as a masterful manipulator and
understander of human psychology. There's this great quote, let me get it up, which you have, I think it's from Paul Graham: Sam Altman, "you could parachute him into an island full of cannibals and come back in 5 years and he'd be the king." If you're Sam Altman, you don't have to be profitable to convey to investors that you will succeed with or without them. I mean, he just sounds... He's also described, by the way, as a once-in-a-generation fundraising talent. I think that's by you.
Yeah. Um,
how is he able to basically come out of nowhere and compete with people like Elon Musk and Zuckerberg as this kind of intellectual heavyweight in Silicon Valley, in regards to one of the major growth technologies of our decade?
So, from the public's perspective, he came out of nowhere. But within the tech industry, everyone knew Sam Altman. You know, I, as someone who worked in tech, knew Sam Altman ages ago, because Y Combinator was just so important. But as the CEO of a company that valuable, was that always something people thought he might be?
No, I don't think people ever thought
that he would jump to become the CEO of
a company because he has such an
investor mindset and his approach has
always been to be involved in many many
companies. I mean he invested in
hundreds of startups as both the
president of YC and running some uh
personal investment funds as well. But he was well respected within the valley. He was seen as a critical lynchpin of the entire startup ecosystem, and not just by people within the industry but by policy makers, which is key. He started cultivating
relationships with politicians very very
early on in his tenure as the president
of YC. And for example, I talk in my
book about how Ash Carter, the head of
the Department of Defense under the
Obama administration, came to Altman
asking, "How can we get more young tech
entrepreneurs to partner with the US
government?" So, he was seen as a
gateway into the valley. And
obviously the valley isn't just made of
of of startups. There's also the tech
giants. But back then like starting a
startup was way cooler than working at a
tech giant because Google, Microsoft,
they were considered the older safer
options if you really wanted job
security. But if you wanted to be an
innovator, if you wanted to do
breathtaking things, you would build a
startup. And then your number one goal as a startup founder was to get into YC. So Altman was emblematic of the pinnacle of success in the valley. And
he even if his net worth wasn't the same
as other people in terms of his social
capital, his networking, he understood
early on that's where the real value
lies. Exactly.
So interesting. I mean, some notes that I wrote down, cuz there are points where I'm thinking, why on earth is this gentleman the CEO of such a valuable company? He seems kind of useless. And the notes I had down were: people pleaser, yes, liar, conflict averse. How do you become the CEO of such a successful company? Maybe you think that, or don't think that, I don't know. I mean, at points it comes across as almost psychotic, the capacity to lie.
Here's an interesting question for me
and I don't know I don't know how
comfortable you are with answering it.
In writing this book, there's an alternative timeline where you basically write a hagiography of Sam and you leave all of that out, right? There are other writers out there, I won't name them, they sell a ton of books, and they write very positive, affirming biographies of these visionary leaders, whether it's Elon Musk or Steve Jobs, etc. Why didn't you just write that book about Sam Altman? You know, you would have made a ton more money.
Right.
And I'm reading this stuff and I'm thinking, my God, it's so deft and nuanced, your portrait of Sam Altman. I just think, I mean, this is going to really hurt him when he reads this stuff, I imagine. Why didn't you do that? Take the easy route.
I don't know that that would have been the easy route.
I mean, I just wrote the facts and the
facts come out that way, you know, like
I interviewed over 260 people across 300
different interviews and over 150 of
those interviews were with people who
either worked at the company or were
close to Sam Altman. And what they presented was all of the details that I ended up putting in. And one of the things that just came through again and again and again, well, two things that came through again and again: no matter how long someone worked with him or how closely they worked with him, they would always say to me, at the end of the day, I don't know what Sam believes.
So that's interesting. Mm.
And
then the other thing that came through was, I would ask them, well, what did he say to you he believed in this meeting, at this point in time, for why the company needed to do this XYZ thing? And the answer was, he always said he believed what that person believed.
Except, because I interviewed so many people who have very divergent beliefs, I was like, wait a minute, he's saying that he believes what this person believes and then what that person believes, and they're literally diametrically opposite. So yeah, so I
just I just ended up documenting all of
those different details to illustrate
how people feel about him. I mean, he's a polarizing figure, extreme in both the positive and negative directions. Some
people feel he is the greatest tech leader of our generation, but they don't say that he is honest when they say that. They just say that he's
one of the most phenomenal assets for
achieving a vision of the future that
they really agree with. And then there
are other people who hate his guts and
say that he is the greatest threat ever.
And it really also comes down to whether
or not they agree with his vision and
they don't. And so then his persuasive
powers suddenly become manipulative
tactics. Mm.
I mean, if you compare him to somebody like Elon Musk as a CEO, who is obviously far from perfect, but Elon Musk makes big bets. He has gut instincts. He's very happy to alienate people if he thinks he's right about something. And, you know, obviously I don't agree with him on many, many things, but there's an archetype with regards to a business leader that looks like that. And then you've got somebody like Sam Altman. He's doing all of these things, like I say, the people-pleasing, the conflict aversion, and yet he's managed
to lead this company to essentially a
third of a trillion valuation. He must
obviously be doing something right as
well. So what are his sort of
comparative advantages as a business
leader cuz on paper I read all that
stuff and I think the guy wouldn't be
able to get up in the morning and make
breakfast and yet he's accomplished some
extraordinary things.
Yeah, I think it really comes down to: he does understand human psychology very well, which is helpful in getting people to join in on his quest. So he's great at acquiring talent, and he's said himself, like, I'm a visionary leader, I'm not an operational leader, and my best skill is to acquire the best people that then operationalize the thing. So he's good at persuading people into joining his quest. He's good at persuading whoever has access to whatever resource he needs to then give him that resource, whether it's capital, land, energy, water, laws, you know. Um, and then, people have said, he instills a very
powerful sense of belief in his vision
and in their ability to then do it.
He's good at what in English football we would call man management. He can inspire people.
He inspires people to do things that they didn't think they would be able to do. Yeah. Um, but yeah, I mean,
this is why there's so much controversy. He is such a polarizing figure because everyone who encounters him has a very personalized relationship with him, because he often does his best work in one-on-one meetings, when he can say whatever he needs to say to get you to do, believe, or achieve whatever it is that he needs you to do. And that's also part of the reason why there are so many diverging views, people that are like, "Oh, I think he believes this. I think he believes that," and they're totally diverging. It's because he's having these very personalized conversations with people. Um, and so
some people end up coming out of those
personalized meetings feeling totally
transformed in the positive direction,
being like, I feel super human. I can
now do all these things, and it's in the direction that I want to go. I'm building the future that he sees and I see, and we're aligned. And then other people end up coming out of these meetings feeling like, was I played? You know, was he just telling me all these things to try and get me to do something that's actually fundamentally against my values?
You said you spoke to 150 people who were connected with OpenAI, over 150 interviews.
Yeah. Sorry: 150 interviews with OpenAI people, 300 interviews altogether, 260 people altogether.
The numbers were...
No, but it's absolutely incredible. I
should have said this right at the start
really. What's your personal sort of bio on all this stuff? Because, of course, when people out of journalism and media cover technology, and the intersection of that with politics, we go, well, they don't really know what they're talking about, they're generalists, because they come out of journalism. What's your background? Because it's quite particular.
I studied mechanical
engineering at MIT for undergrad and I
went and worked in Silicon Valley
because that's what I thought I wanted
to do. I lasted a year before I realized
it was absolutely not what I wanted to
do. And then I went into journalism. And
the reason why I had such a visceral
reaction against Silicon Valley is
because I was quite interested in
sustainability and how to mitigate
climate change. And the reason why I went to study engineering in the first place was I thought that technology could be a great tool for social change and shaping consumer behaviors, to prevent us from
planetary disaster. And I realized that Silicon Valley's incentive structures for producing technology
were not actually leading us to develop
technologies in the public interest. And
in fact, most often it was leading to
technologies that were eroding the
public interest. And the problems like
mitigation of climate change that I was
interested in were not profitable
problems. But that is ultimately what
Silicon Valley builds. They want to
build profitable technologies. And so it
just seemed to me that it didn't really
make sense to try and continue doing
what I wanted to do within a structure
that didn't reward that. Yeah. And then
I thought, well, I've always liked
writing. Maybe I can use writing as a
tool for social change. So I switched to
journalism.
You went to MIT Technology Review, right?
I went to a few publications and then eventually MIT Technology Review, to cover AI, and then the Wall Street Journal.
The Wall Street Journal. I mean, these are big
just just so people know there's real
there's real credibility behind this.
All these interviews, this CV. Um,
and it's interesting as well, you say, I wouldn't write a hagiography, I just wrote what was there. I mean, maybe that's partly an extension of your sort of STEM background, right? You know, rather than writing propaganda or a puff piece, which, let's be honest, is most coverage of the sector. But it's true, right?
Well, you know, people
often ask me, how much did my engineering degree help me in reporting on this? And I think it helps
me in ways that are not what people
would typically assume. I went to school
with a lot of the people that now build
these technologies. I went to school
with some of the executives at OpenAI,
you know, and so for me, I do not find
there to be magic. I don't find these
figures to be towering or magical. Like
I remember when we were walking around
dorm rooms together in our pajamas and
it it instilled in me this understanding
that technology is always a product of
human choices. And different humans will
have different blind spots. And if you
give a small group of those people too
much power to develop technologies that
will affect billions of people's lives,
inevitably that is structurally unsound.
Like, we should not be allowing small groups of individuals to concentrate such profound influence on society, because you cannot expect any individual to have such great visibility into everything that's happening in the world and perfectly understand how to craft a one-size-fits-all technology that ends up being profoundly beneficial for everyone. It just doesn't make sense at all. Um, and I think the other
thing that it really helps me with is
this: Silicon Valley is an extremely elitist place, and it allows me to have an honest conversation with people faster, because if they start stonewalling me, or trying to pretend that these technologies are capable of certain things that they're not actually capable of, I will just slap my MIT degree down and be like, "Cut the bull crap, tell me what's actually happening." And it is a shortcut to getting them to speak more honestly to me. But it's not actually because of what I studied; it's more that it signals to them that they need to speed up their throat clearing.
really interesting though. Yeah, because
I do feel like lots of coverage of this sector. I mean, again, I can only speak in regards to the UK, and we're a tiddler compared to you guys, but at the intersection of politics and technology particularly, the coverage by political journalists at Westminster: you know, Keir Starmer and Rachel Reeves say, we're going to build more data centers, isn't that fantastic? Actually, not necessarily. They're not going to create that many jobs once they're built, they can use a ton of energy, a ton of water. What's the upside for the UK taxpayer? There is very little interrogation of the press releases. Yeah. Um, and it's really interesting to me that you've come out of MIT and then you've taken this trajectory. This stuff you just talked about, knowing these people, this tiny group of people whose decisions now affect billions already: on the present trajectory, is it an existential challenge to democracy? And "challenge" is speculative. Is it going to end democracy?
I think it is greatly threatening and increasing the likelihood of democracy's demise. But I never make predictions that this outcome will happen, because it makes it sound inevitable. And one of the reasons why I wrote the book is because I very much believe that we can change that, and people can act now to shape the future so that we don't lose democracy.
But on this trajectory, right, if the next 20 years are like the last 20 years?
On this trajectory, for sure, I think it will end democracy. Yeah.
How quickly?
We've really screwed up in the last 20 years, right? I wonder, you know... Gosh. Yeah. I'll give it maybe 20 years.
20 years.
Yeah. Yeah. We used to have this thing
called privacy, high streets, childhood,
all gone. Um, you've said that what OpenAI did in the last few years is they started blowing up the amount of data and the size of the computers that need to do this training, in regards to deep learning. Give me a sense of the scale. We've talked a little bit about the data centers, but how much energy, land, and water is being used to power OpenAI, just specifically as one company?
Yeah. To power OpenAI? That's really hard, um, because they don't actually tell us this. So we only have figures for the
industry at large and the amount of data
centers. So it's not in their annual
reports for instance. No. Well, they
don't have annual reports because
they're not a public company. Of course.
Yeah. Huh. So that's, you know, one of the ways. And actually, it doesn't matter if they're a public company, because Google and Microsoft,
they do have annual reports where they
say how much capital they've spent on
data center construction. They do not
break down how much of those data
centers are being used for AI. They also
have sustainability reports where they
talk about the water and carbon and
things like that, but they do not break
down how much of that is coming from AI
either. And they also massage that data
a lot to make it seem better than it
actually is. But even with the massaging, there was that story from last year, 2024, where both Google and Microsoft reported, I think it was a 30% and 50% jump in their carbon emissions. Yeah. Largely driven by this data center development.
Yeah. And also the context here is, one of the good news stories of the last sort of 10 to 15 years is that CO2 emissions per capita in the US had kind of plateaued, right? Across the West they had kind of plateaued, and actually in the UK energy consumption dropped. I mean, we stopped making things, but still, you know, everything's made in East Asia now. But it was kind of a good story, and I kind of bought it, right? I thought that, you know, we'd kind of plateaued; obviously the global south would consume more energy. But we are as well. Um,
should we look at these companies as
kind of analogous to the East India
Company of the 19th century? That is the
analogy that I have increasingly started
using, especially with the Trump
administration in power because the
British East India Company very much was
a corporate empire and started off not
very imperial. They just started off as
a company, very small company based in
London. And of course through economic
trade agreements with India gained
significant economic power, political
power and eventually became the apex
predator in that ecosystem and that's
when they started being very imperial in
nature, and they were the entire time abetted by the British Empire, the nation-state empire. So you have a corporate
empire, you have a nation state empire
and I literally see that dynamic playing
out now where the US government is also
in its empire era. Trump has quite literally used words to suggest that he wants to expand and fortify the American empire, and he sees these corporate empires like OpenAI as his empire-building assets. And so I
think he is probably seeing it in the
same way that the British crown saw the British East India Company: let's just let this company acquire all these resources, do all these things, and then eventually we'll nationalize the company, and then India formally becomes a colony
of the British Empire. So for Trump, whatever the modern-day equivalent would be of nationalizing these companies is his endgame. Like, he is
helping them strike all these deals and
installing all this American hardware
and software all around the world with
the hope that then those become national
assets. And then, you know, there was actually just a recent op-ed in the Financial Times from Marietje Schaake, one of the former EU parliamentarians, who pointed out, like, isn't it so convenient for the US to get all of this American infrastructure installed everywhere around the world, so that the US government could literally turn it off at any time? I mean, if you want to talk
about empire building, there's that. But
at the same time, these corporate
empires are also trying to use the
American empire as an asset to their
empire building ambitions. So there's a
very tenuous alliance between Silicon
Valley and Washington right now in that
each one is trying to use the other and
ultimately trying to dominate the other.
And there's a growing popularity in
Silicon Valley of this idea of a
politics of exit. This idea that
democracy doesn't work anymore. We need
to find other ways of organizing
ourselves in society. And maybe the best way of organizing ourselves is actually a series of networked companies with CEOs at the top. So I don't ultimately know who's going to win, the nation-state empire or the corporate empire. But either version is bad, because all of the people in power now, both the business executives and the politicians, do not
actually care at all about preserving
democracy. I mean the analogy of India
is really interesting. So I think I
might have my dates wrong. Um East India
Company is running things until 1857.
You have the Indian mutiny, basically an
uprising against the East India Company
and then of course that commercial
endeavor has to be underpinned by the
organized violence of the British
imperial state. Um and it does feel it
does feel like that could be the next
step of what happens with regards to US
interests overseas. I suppose one retort would be, well, hold on, it sounds kind of good. I'm a socialist. I kind of like the idea of SpaceX being nationalized. I kind of like the idea of, you know, the federal government having a 51% stake in OpenAI and Tesla
and Meta. What would you say to that?
I don't necessarily know if my critique is of the nationalization of the companies so much as why they are nationalizing these companies. You know, this endgame mentality of, let's just let these companies run rampant around the world so that ultimately whatever their assets are become our assets, is leading the Trump administration to have a completely hands-off approach to AI regulation. They quite literally
proposed the big beautiful bill which
passed the House and is now going up to
the Senate with a clause that would, if
implemented, put a 10-year moratorium on
AI regulation at the state level, and the state level is usually where sensible regulation happens in the US. So they're doing all
of these actions now with wide-ranging
repercussions that will be very
difficult to unwind in the name of this
idea that maybe if they just allow these
companies to act with total impunity
that it will ultimately benefit the
nation state. How do people like Sam Altman look at the rest of the world outside the US, these kinds of tech leaders? How do they look at little Britain and Italy, how do they look at us? What do they think about us? You know, you've been inside their minds.
Yeah, I mean, they see them as
resources. They see different
territories as different types of
resources, which, I mean, is what older empires did. You know, they would look
at a map and just draw out the resources
that they could acquire in each
geography.
We we're going to go here and acquire
the labor. We're going to go here and
acquire the lands. We're going to go
here and acquire the minerals. I mean,
that's literally how they talk. Like
when I was talking with some OpenAI
researchers about their data center
expansion, you know, there was this one
OpenAI employee who said, "We're running out of land and water," and he was just saying, "Yeah, we're just trying to look at the whole
world and see where else we can place
these things. Where what other
geographies can we find all the
conditions that we need to build more
data centers? Land without earthquakes,
without floods, without tornadoes,
hurricanes, all these natural disasters
and can deliver massive amounts of
energy to a single point and can cool
the systems." And they're looking at that level of abstraction: what are the different pieces of territory and resources that we need to acquire?
And that includes other parts of the West. Yeah. That's not just the global south.
No, it includes other parts of the West as well. Yeah. So
there has been rapid data center expansion in rural communities in both the US and the UK, and it always ends up in economically vulnerable
communities because those are the
communities that often actually opt in
to the data center development initially
because they are not informed about what
it will ultimately cost them and for how
long. And so I spoke with this one Arizona legislator who said, I didn't know it had to use fresh water. And for the UK audience, Arizona is a desert territory; there's a very, very stringent budget on freshwater. And after that legislator
found out, she was like, I would have
never voted for having this data center
in. But the problem is that there are so
few independent experts for these
legislators, city council members to
consult that the only people that they
rely on for the information about what
the impact of this is going to be are
the companies. And all the companies
ever say is we're going to invest
millions of dollars.
We're going to create a bunch of
construction jobs up front and it's
going to be great for your economy.
Yeah. I mean, that's all we hear about data centers in this country. And it's a great top line for the
chancellor and the prime minister
because they can say tens of billions of
pounds worth of investment. Okay. But in
terms of long-term jobs, how many? And
also, by the way, for that rural
community in God knows where, you know,
the northeast of England or whatever.
Yeah. You're not telling them that
actually they can't use their hose pipes
for 3 months a year because all the
water is going to that local data
center. Exactly.
And it's quite extraordinary. And the most scary thing about all of it is, in the UK at least, the politicians don't know any of that. I sincerely don't think the chancellor knows any of that. And there's no real... I mean, even if you use the prism of colonialism, imperialism, with regards to exploitative economic relations between the United States and other parts of the world, they think you're a Trotskyist, right? That's the crazy thing. They can't even look after their own people, because looking after your own people boils down to being too left-wing.
Well, I think part of it is also that they don't really realize that it's literally happening in the UK. So to connect it to the UK: data center development along the M4 corridor has literally already led to a ban on construction
communities that desperately need more
affordable housing. And it's because you
cannot build new housing when you cannot
guarantee deliveries of fresh water or
electricity to that housing. And it was
due to the massive electricity
consumption of the data centers being
built in that corridor that led to that
ban. That's nuts. I mean, that's the most valuable real estate for housing in the country, the M4. Yeah. And do you think UK politicians are aware of that contradiction?
I mean, you know, I don't know if they are aware. Maybe they don't have awareness, or maybe they are aware and they're also thinking of other tradeoffs. I mean, now in the UK, and
in the EU at large, there's just this huge conversation around data sovereignty and, of course, technology sovereignty. There's this whole concept of developing the EU stack: why is it that we don't have any of our own tech giants, why don't we have any of this infrastructure? Um, and here, Starmer just said this week during London Tech Week, we want to be AI creators, not AI consumers. So I think in their minds maybe this is a viable trade-off: we skimp a little bit on housing for the ability to have more indigenous innovation. But I think the thing that
is often left out of that conversation
is this is a false trade-off. People
think that you need colossal data
centers to build AI systems. You
actually do not. This is specifically
the approach that OpenAI decided to
take. But actually, before OpenAI started building large language models and generative AI systems at these colossal scales, the trend within the AI
research community was going the
opposite direction towards tiny AI
systems. And there was all this really
interesting research looking into how
small your data sets could be to create
powerful AI models and how little
computational resources you needed to
create powerful AI models. So there were interesting papers that I wrote about where you could have a couple hundred images to create highly performant AI systems, or you could have AI systems trained on your mobile device: that's a single computer chip, not even a whole one, running on your mobile device. And OpenAI took an approach that is now using hundreds of thousands of computer chips to train a single system. And those hundreds of thousands of computer chips are now consuming, you know, whole cities' worth of energy. And so if we divorced the concept of AI progress from this scaling paradigm, you would realize then that you can have housing and you can have AI innovation.
But once again, there's not a lot of
independent experts that are actually
saying these things. Most AI experts
today are employed by these companies.
And this is basically the equivalent of
if most climate scientists were being
bankrolled by oil and gas companies.
Like they would tell you things that are
not in any sense of the word
scientifically grounded, but just good
for the company. I interviewed a great
guy um twice actually now, a guy called
Angus Hansen who's really just on it
with regards to the exploitive nature of
the increasingly exploitive nature of um
uh of the United States um economic
relations with the UK. Just fascinating.
fascinating uh a book and man and I just
don't think it's cut through to our
politicians here how bad it's getting
And you're saying, about AI, consumers or creators: I mean, ultimately you're talking about Meta, you're talking about Alphabet, you're talking about xAI, you're talking about OpenAI. We are consumers, we are dependent. It's a colonial, exploitative relationship with regards to big tech, and it has been for a really long time. Our smartest people, which the taxpayer trains here, go to the US. I think one of the top people at Slack is a UK national. Demis, you know, DeepMind, now working under the umbrella of Alphabet. And yeah, it just doesn't make sense to me with regards to that formulation. They simply don't get it. You know, I came here using my Mastercard.
Millions of Brits use Apple Pay and Google Pay and Mastercard and Visa, and every time we do, 0.1%, 0.2%, 2% crosses the Atlantic, and it just goes over the heads of our political class, which is very unnerving. In regards to the efficiency of these smaller systems, where does DeepSeek fit in all of this? Because of course the scaling laws at the heart of OpenAI, which is you get to AGI by more compute, more parameters, more data, are kind of untethered a bit by the arrival of DeepSeek. Yes. DeepSeek is such an
interesting and complicated case because it's basically a Chinese AI model that was created by this company High-Flyer, and they were able to create a model that essentially matched and even exceeded some performance metrics of American models being developed by OpenAI and Anthropic with orders of magnitude less computational resources, less money.
That said, it's not necessarily perfect. I don't think the world should suddenly start using DeepSeek and say DeepSeek solves all these problems, because it's still engaged in a lot of data privacy problems, copyright exploitation, things like that. And
some people argue that ultimately they were distilling from the models that were first developed through the scaling paradigm. So you first develop
some of these colossal scaling models
and then you end up making them smaller
and more efficient. So some people argue
that you actually have to first do that
scaling before you get the efficiency.
But anyway, what it did show is you can
get these capabilities with
significantly less compute. And it also showed a complete unwillingness of American companies: now that they know that they can use these techniques to make their models more efficient, they're still not really doing it. Why? Do they like giving their money to Nvidia? What's the...
Because if you continue to pursue a scaling
approach and you're the only one with
all of the AI experts in the world, you
persuade people into believing this is
the only path and therefore you continue
to monopolize this technology because it
locks out anyone else from playing that
game. And also because of path dependence: these companies are actually not that nimble. The way that they organize themselves, it's not so easy for them to just immediately swap to a different approach. They end up putting in motion all the resources, all of the training runs, and so on and so forth over the course of months, and then they just have to run with it. So DeepSeek
actually wasn't the first time that this
happened. The first time that this
happened was with image generators and Stable Diffusion. And Stable Diffusion was specifically developed by an academic in Europe who was really pissed that AI companies like OpenAI were taking a scaling approach to image generation. He was like, "This is literally wholly unnecessary." And they were spending thousands of chips, all of this energy, to produce DALL-E. And ultimately, he ended up producing Stable Diffusion with a couple hundred chips, using a new technique called latent diffusion, hence the name Stable Diffusion. And you know, arguably it was actually an even better model than DALL-E, because users were saying that Stable Diffusion had even better image quality, better image generation, better ability to actually control the images than DALL-E. But even knowing that latent diffusion existed, OpenAI continued to develop DALL-E with these massive scaling approaches. And it wasn't until later that they then adopted the cheaper version. But it was just significantly delayed. And I was asking OpenAI researchers, like, why, that doesn't make any sense, why did you do that? And they were like, well, once you set off on a path, it's kind of hard to pivot. Also, Jensen Huang, the CEO
of Nvidia is really charismatic, right?
I mean, it's quite funny, because I'm a Marxist, I'm going to make that confession. You have these big sort of structural understandings of how history happens, and then you sort of realize actually this guy's really charismatic and this person's really manipulative, and all of a sudden the world's hyperpower is, you know, making these technological decisions. Okay, quite strange. We talked about data centers. We talked about earth, water, energy. I want to talk also about some
of the more exploitative practices with
regards to workers in the global south.
You use one really grueling example
actually in Kenya. Can you talk about
some of the research around that? Some
of the people you met. Yeah. So I ended
up interviewing workers in Kenya who
were contracted by OpenAI to build a
content moderation filter for the
company. And at that point in the
company's history, it was starting to
think about commercialization after
coming from its nonprofit fundamental AI
research roots. And they realized if
we're going to put a text generation
model in the hands of millions of users,
it is going to be a PR crisis if it
starts spewing racist, toxic, hateful
speech. In fact, in 2016, Microsoft infamously did exactly this. They developed a chatbot named Tay. They put it online without any content moderation, and then within hours it started saying awful things, and then they had to take it offline, and to this day, as evidenced by me bringing it up, it's still brought up as a horrible case study in corporate mismanagement. And so OpenAI thought, we don't want to do that; we're going to create a filter that wraps around our models, so that even if the models start generating this stuff, it never reaches the user, because the filter then blocks it. In order to build that filter, what the Kenyan workers had to do was wade
through reams of the worst text on the
internet as well as AI generated text on
the internet where OpenAI was prompting
its models to imagine the worst text on
the internet. And the workers then had
to go through all of this and put into a
detailed taxonomy, is this hate speech,
is this harassment, is this violent
content, is this sexual content? and the
degree of hate speech of violence of
sexual content. So it was they were
asking workers to say does it involve
sexual abuse? Does it involve sexual
abuse of children? So on and so forth.
And to this day I believe if you look at
OpenAI's content moderation filter
documentation, it actually lists all of
those categories. And this is one of the
things that it offers to clients of
their models, business clients of their
models that you can toggle on and off
each of these filters. So that's why
they had to put this into that taxonomy.
The workers ended up suffering very many of the same symptoms as the content moderators of the social media era: absolutely traumatized by the work, which completely changed their personalities and left them with PTSD. And I highlight the story of this man, Mophat, who is one of the workers that I interviewed, who showed me that it's not just individuals that break down; it's their families and communities, because there are people who rely on these individuals. And so Mophat was on the sexual content team. His personality totally changed as he was reading child sexual abuse content every day.
And when he came home, he stopped
playing with his stepdaughter. He
stopped being intimate with his wife.
And he also couldn't explain to them why he was changing, because he didn't know how to say to them, "I read sex content all day." That doesn't sound like a real job. That sounds like a very shameful job. ChatGPT hadn't come out yet, so there was no conception of what that even means. And so one day his wife asks him for fish for dinner. He goes out, buys three fish: one for him, one for her, one for the stepdaughter. And by the time he comes home, all of their bags are packed and they're completely gone. And she texts him, "I don't know the man you've become anymore, and I'm never coming back." You say that's the
case with regards to text. Are people
also having to engage with images as
well? I mean, that was more of a social
media thing. Is that here too? Yeah, so after this, the contract with these Kenyan workers was actually cancelled, because there was a bunch of scrutiny on the third-party company that they were contracting the workers through, and a huge scandal. This is Sama, right? Sama, yeah. And there was a huge scandal around Sama, and then OpenAI ended up shifting to other contractors, who were then involved in moderating images.
And were they remunerated quite well for the kind of work they were doing? For the Kenyan workers, they were paid a few dollars an hour, right? Yeah. And then on the other side of the Atlantic, you talk about people in South America doing, effectively, you know, Mechanical Turk piecework for these companies as well. Can you talk about
that a little bit? Yeah, so generative
AI is not the only thing that leads to
data annotation. This has actually been
part of the AI industry for a very long
time. And so I ended up, years ago, interviewing this woman in Colombia, who was a Venezuelan refugee, about the specific thing that happened to her country in the global AI supply chain. So in 2016, when the AI industry first started
actually looking into the development of
self-driving cars, there was a surge in
demand for highly educated workers to do
data annotation labeling for helping
self-driving cars navigate the road. You
have to show self-driving cars, this is
a car, this is a tree, this is a bike,
this is a pedestrian. This is how you
avoid all of them. These are the lane
markings. This is what the lane markings
mean. And they're humans that do that.
And it just so happened in 2016 when
this demand was rising that Venezuela as
a country was dealing with the worst
peacetime economic crisis in 50 years.
So the economy bottomed out. A huge
population of highly educated workers
with great access to internet suddenly
were desperate to work at any price. And
these became the three conditions that I
call the crisis playbook in my book that
companies started using to then scout
out more workers that were extremely
cheap for working for the AI industry.
And so the woman that I met in Colombia, she was working in a level of exploitation that was not based on the content that she was looking at. She was labeling for self-driving cars and labeling, you know, retail platforms and things like that. The exploitation was
structural to her job in that she was
logging into a platform every day and
looking at a queue that automatically
populated with tasks that were being
sent to her from Global North companies
and most of the time the tasks didn't
appear and when they did she had to
compete with other workers to claim the
task first in order to do it at all. And
because there were so many Venezuelans
in crisis and so many of them were
finding out about data annotation
platforms in the end there were more and
more and more workers competing for
smaller and smaller volumes of tasks.
And so these tasks would come online and
then disappear within seconds. And so
one day she was out on a walk when a
task appeared in her queue and she
sprinted to her apartment to try and
claim the task before it went away. But
by the time she got back it was too
late. And after that she was like I
never went on a walk during the weekday
again. And on the weekends, which she discovered were less likely times for companies to post tasks, she would only allow herself a 30-minute walk break, because she was too afraid of that happening again. And did she detail how that gave her sort of anxiety or insomnia or mental health kind of overheads? It sounds like an insane way to live. It completely controlled her life. She
didn't tell me about whether or not it
gave her insomnia, but it completely
controlled the rhythms of her life in
that she had this plugin that she
downloaded that would sound an alarm
every time a task appeared so that she
could, you know, cook or clean or
whatever without literally just looking
at the laptop the whole day. And she would turn it on to max volume in the middle of the night, because sometimes tasks would arrive in the middle of the night, and if the alarm rang she would wake up, sprint to her computer, claim the task, and then start tasking at, like, 3:00 a.m. And she
had chronic illness. Um, one of the
reasons why she was tethered to her
apartment doing this online work in the
first place was not just because she was
a refugee, but also because she had
severe diabetes. And it got to the point
where she ended up in the hospital and
was completely blind for a period of
time. And the doctor said that if you
had not come to the hospital when you
did, you would have died. And so she was
tethered to her home because she had to
inject herself with insulin like five
times a day. And it was this really complicated regime that didn't allow her to commute to a regular office, have a regular job. So she was doing all this extremely disruptive, dysregulating work on top of just trying to manage severe diabetes. I mean, it's extraordinary you've managed to unveil those stories. I think that's why the book is so interesting, so fascinating for me. That's why it's got the plaudits it's got: you're, you know, speaking to people who are on first-name terms with Sam Altman, then you're talking to Venezuelan refugees in
Colombia. Um, and it's really important
to say that this work is being done for
multi-trillion dollar companies. Yes.
That's the other side of it, right?
You're seeing Elon Musk worth 300 billion plus dollars, and then there are these people, and that's where the value is being
generated. Yeah. Exactly. And that's the reason why I really wanted to highlight those stories: because that's where you really see the logic of empire. There is no moral justification for why those workers, whose contribution is critical to the functioning of these technologies and critical to the popularity of products like ChatGPT, are paid pennies when the people working within the companies can easily get million-dollar compensation packages. The
only justification is an ideological one
which is that there are some people born
into this world superior and others who
are inferior and the superior people
have a right to subjugate the inferior
ones. My last question: what does the US public do about big tech if it wants to take on some of these issues, income inequality, regional inequality, global imperial overreach, etc.? A few proposals which, you know, somebody can execute on. What would you suggest? Yeah, I
wouldn't even say it's just the US
public. I mean, anyone in the world can
do something about it. And one of the
remarkable things for me in reporting
stories is people who felt like they had
the least amount of agency in the world
were actually the ones that put up the
most aggressive fights and actually
started gaining ground on these companies, taking resources back from them. So I talk about Chilean water activists who pushed back against a Google data center project for so long that they've now stalled that project for five years, and they forced Google to come to the table, and the Chilean government to come to the table, and now these residents are invited to comment every time there's a data center development proposal, which they then said is not the end of the fight; they still have to be vigilant, and at any moment, if they blink, something could happen. But anyone in the world, I think, has an active role to play in shaping the AI development trajectory, and the way that I think about it is as the full supply chain of AI
development. You have a bunch of
resources that these companies need to
develop their technologies: data, land,
energy, water, and then you have a bunch
of spaces that these companies need
access to to deploy their technologies.
Schools, hospitals, offices, government
agencies. All these resources in all
these spaces are actually places of
democratic contestation. They're
collectively owned. They're publicly
owned. So, we're already seeing artists
and writers that are suing these
companies saying, "No, you cannot take
our intellectual property." And that is
them reclaiming ownership over a
critical resource that these companies
need. We're seeing people start
exercising their data privacy rights. I
mean, one of my favorite things about
visiting the UK and EU as an American
that has no federal data privacy law to
protect me is to reject those cookies on every single web page that I encounter.
That is me reclaiming ownership over my
data and not allowing those companies to
then feed that into their models. We're
seeing just like the Chilean water
activists, hundreds of communities now
rising up and pushing back against data
center development. We're seeing teachers and students escalate a public debate around: do we actually want AI in our schools, and if so, under what terms? And many schools are now setting up governance committees to determine what their AI policy is, so that ultimately AI can facilitate more curiosity and more critical thinking instead of just eroding it all away. The
same thing I'm sure wherever your
audience is sitting right now. If they
work for a company, that company is for
sure discussing their AI policy. Put
yourself on that committee for drafting
that policy. Make sure that all the
stakeholders in that office are at that
table actively discussing when and under
what conditions you would accept AI and
from which vendors as well because again
not all AI models are created equal. So
do your research on which AI
technologies you want to use and which
companies are providing them. And I think everyone can actually actively play a role in every single part of the supply chain that they interface with, which is quite a lot. Most people interface with the data part. Many people will now have data centers popping up in a community near them. Everyone goes to school at some point. Everyone works in some kind of office or community at some point. If we do all of this pushback a hundred-thousand-fold and democratically contest every stage of this AI development and deployment pipeline, I am very optimistic that we will reverse the imperial conquest of these companies and move towards a much more broadly beneficial trajectory for AI development. Yeah, we've had
big tech social media for the last 15, 20 years, and I suppose the question is: is the same set of patterns going to apply to this stuff? And I think when you speak to someone like Jonathan Haidt, when he talks about young people and their consumption now of social media and mobile telephones, etc., his real worry is AI. Yeah. And if there is this laissez-faire attitude from policymakers, and also, let's be honest, from civil society, that there was over the last 15, 20 years, I mean, he's terrified about the implications. So it's interesting to see that there's congruence between what you're saying and what Jonathan Haidt is saying. Can I ask you one more question: have you ever read Dune by Frank Herbert? I've watched the movie, and it's sitting on my bedside table to actually read the original, and I'm so glad that you asked me this, because this is an analogy that I use all the time now to describe the AI world. Yeah. The Butlerian Jihad.
So yeah. So one of the things that was
so shocking to me because we already
talked about this like quasi religious
fervor within the AI community, and I was interviewing people, one of whom, their voice was quivering when they were telling me about the profound cataclysmic changes on the horizon. Like, these are very visceral reactions. These are true believers. And Dune strikes me as a really good analogy for understanding this ecosystem, because Paul Atreides' mom in the story, she creates this myth to help position Paul as a supreme leader and to ultimately control the population. And the people who encounter this myth, they don't know that it's a creation. So they're just true believers. And at some point, Paul gets so wrapped up in his own mythology that he starts to forget that it was originally a creation. And
this is essentially what I felt like I was seeing with my interviews of people in the AI world, because I had the opportunity to start interviewing people all the way back in 2019. You know, I interviewed some people both back then and for the book, to just map out their trajectory, and there were non-believers back then that are true believers now. Like, if they were able to stay long enough at that company, they all in the end become true believers in this AGI religion. And so there's this vortex; it's like a black hole, an ideological black hole. I don't know how to explain it, but when people swim too long in the water, it just becomes them. So what you're saying is Sam Altman is the Lisan al-Gaib.
That's the character. And Paul Graham maybe was the, you know... It would seem like that would be the most appropriate character to assign to him. Yeah. Wow, this has been fabulous. And I have to say, honestly, the book is really, really exceptional. Empire of AI. I read it so much that the dust jacket, I think my daughter actually ripped it off. But anyway, it is a
sensational book. Sensational
journalism, fantastic journalist. We
don't have enough of those in the world.
Thank you. Um real pleasure to meet you,
Karen. Thanks so much for joining us. It
was great to meet you.
[Music]