Will AI Destroy the Economy? (According to Economists: No.) | AI Reality Check | Cal Newport
There have been some pretty dark articles published recently about all the ways in which AI is about to destroy the worldwide economy. These include tales of mass unemployment, collapsing industries, and white collar workers trying to retrain for skilled crafts jobs like woodworking and plumbing. One of these pieces, a World War Z-style dispatch from the year 2028 put out by a small financial services firm named Catrini Research, spread so widely and scared so many people that it was blamed for a temporary dip in the S&P 500. All that's
missing from these tales are the garbage
can fires. So, how seriously should we
take these economics doomsday articles?
Well, if you've been following AI news
recently, this is probably a question
that you've been asking. And today, I
want to try to find some measured
answers. I'm Cal Newport, and this is
the AI reality check.
All right, here's the thing. Coverage of
AI topics moves in waves. A certain sort of take or idea becomes popular, everyone is writing and talking about it, and then, seemingly all at once, all the attention moves on to a new topic as if the old one never existed. Like back in
2023, for example, I spent a lot of time trying to explain to people that a static feed-forward large language model could not be considered conscious. I had fierce debates about this, and then at some point the whole conversation just moved on with no resolution. Late last
year, to give another example, all the discussion was around superintelligence, and I found myself having to argue that you cannot infer intention, in an anthropomorphized way, from the autoregressively produced outputs of a chatbot. But then we've moved on from
that recently as well. The topic du jour in AI coverage is the idea that we might not be ready for the mass economic displacement that AI is now poised to wreak. I want to quickly go over a few examples, among many, of recent articles making this point.
The first article was published online in February, and it's part of the March print issue of The Atlantic. It was titled "America Isn't Ready for What AI Will Do to Jobs." All right. So if
you read this piece, it opens with a somewhat long history of the Bureau of Labor Statistics, which is actually quite interesting. So you're thinking, okay, maybe this is going to be a thought-provoking exploration of job cycles and technological disruption. But nope, it gets a little darker. Let me read from the piece here. But like
all statistical bodies, the BLS has its
limits. It's excellent at revealing what
has happened and only moderately useful
at telling us what's about to. The data
can't foresee recessions or pandemics or
the arrival of a technology that might
do to the workforce what an asteroid did
to the dinosaurs. I'm referring, of
course, to artificial intelligence.
Yikes. Remember, the asteroid that killed the dinosaurs killed off most of life on Earth. So, we've kind of
raised the stakes pretty high for what's
about to happen with AI. All right. So, the article goes on. The author says, "Tasks that once required skill, judgment, and years of training are now being executed relentlessly and indifferently by software that learns as it goes." I don't know what it means for a language model to be relentless or indifferent, but I guess
they are. Quick fact check: the language models driving most of the tools we're talking about here don't learn as they go. They're static, trained in discrete batches. I guess you could make a case that a terminal agent like Claude Code updates a markdown file that it uses as part of its prompting, but that's not a great way to understand how this AI works; it treats the model more like a human brain than it actually is. All right. Anyways,
let's keep going here. "But anyone subcontracting tasks to AI is clever enough to imagine what might come next: a day when augmentation crosses into automation and cognitive obsolescence compels them to seek work at a food truck, pet spa, or massage table, at least until the humanoid robots arrive."
Man, the word "might" does a lot of work in this essay. Before, AI might be like the asteroid that destroyed most life on Earth; here, AI might make us all work at pet spas until the robots come. But there's supposedly evidence for this. So what's the main argument for why we should be concerned? Let me read from the article again. In May 2025, Dario Amodei, the CEO of the AI company
Anthropic, said that AI could drive
unemployment up to 10 to 20% in the next
1 to 5 years and "wipe out half of all entry-level white collar jobs." Jim Farley, the CEO of Ford,
estimated that it would eliminate
literally half of all white collar
workers in a decade. Sam Altman, the CEO of OpenAI, revealed that his "little group chat with my tech CEO friends" has a bet about the inevitable date when a billion-dollar company is staffed by just one person. Stepping outside the quote: the
Atlantic piece then goes on to mention
layoffs that recently happened at many
companies, including Meta, Amazon,
United Health, etc. All right, back to the quote: "Taken together, these statements are extraordinary. The owners of capital warning workers that the ice beneath them is about to crack while continuing to stomp on it." All right, we
have to hold on for a second here. I want to break apart the evidence for these claims. Well, there are two claims: either all life on Earth is going to be wiped out like the dinosaurs, or knowledge workers are all going to have to become massage therapists. It's worth taking a closer look at exactly what this evidence is stating.
I want to start with the layoffs, because we covered this in last week's episode of the AI Reality Check, and I've covered it on my newsletter at calnewport.com as well.
For the most part, these layoffs have nothing to do with AI automating jobs or increasing efficiency to the point that you don't need as many workers. Now, I haven't covered every company mentioned in this article, but I did cover the first two, Amazon and Meta. I've talked on background to multiple people within both of those companies, and they're both very clear: recent layoffs have nothing to do with AI making those workers unnecessary.
They have everything to do with overhiring during the pandemic that's now being corrected. The bulk of the recent layoffs at Meta were in Reality Labs, where Zuckerberg had put a massive amount of money over the last five years trying to build the metaverse, the one where we were all going to put on virtual reality helmets and float around space stations playing cards. Remember that? Yeah, it was a bad idea. So they're firing a lot of those people because they want to put that money elsewhere.
So, right off the bat, this is vibe reporting 101. You have a scenario that's scary, you take a fact that directionally seems aligned with that scenario but in reality is not, and you list it next to the scenario to ground the hypothetical in something happening now, which vastly increases its power to cause anxiety or fear. All right, but what about the other piece of this argument? The idea
that AI CEOs are making dire
predictions. If the owners of capital
are warning us, then for sure we have to
listen. But wait a second, we could flip
this on its head.
Of course the CEOs of AI companies are making dire predictions about how powerful their tools are going to be. Like the Wizard of Oz saying "pay no attention to that man behind the curtain," they're terrified that people will spend more time asking about their financials: about the fact that, in order to keep up with their debt and not face implosion over the next one to two years, the major AI companies need to be the fastest-growing companies in the history of companies. We're talking about hundreds and hundreds of billions of dollars of revenue that needs to be generated at some point in the next year or two, and it's unclear how they're going to do that beyond putting ads on ChatGPT and selling Claude Code subscriptions, which they're currently losing money on.
So yes, of course they would rather be
talking about dire predictions of some
future because guess what? That makes
their technology the most important
technology in the world and justifies
investors continuing to put money into
their company. So, I'm not saying that's definitely what's happening, but I don't have to stretch to find an alternative explanation for why Dario Amodei or Sam Altman love to spout these sorts of big predictions.
It completely serves their purpose. And I want to say, look, this is a good writer, and the rest of the article after this is good: it's well researched, he talks to a lot of people, you learn a lot about labor statistics, you hear from a lot of experts. But I want to point out that the beginning of the article has this combination of vibe reporting and appeal to biased authority that, as we're going to see, is a theme in these economic doomsday articles. All right, let me talk about
another one. Our second example, from last week I think, is a New York Times op-ed with a happy, feel-good title: "Mass Hysteria, Thousands of Jobs Lost. Just How Bad Is It Going to Get?" Oh jeez. Now, you don't choose the title when you write an op-ed, so let's put that aside and look at what the piece actually argues. It opens with the story of a college graduate having a hard time finding a job. Let me read this here. Just a few years ago, an
entry-level role with a bank or an asset
management firm might have been Mr. Griefenburger's for the asking. But the white collar job market has cooled sharply. While the unemployment rate remains relatively low, 4.3 percent, office jobs are suddenly a lot harder to come by for recent college graduates and experienced professionals alike. Now, this is an important, real story. Unemployment is pretty good, but there is a cooling, especially in entry-level hiring for knowledge-work jobs, that has been persistent for multiple years now and isn't yet improving.
All right. So, why is this happening?
Well, you can ask economists, and there are three reasons they'll give you, in descending order of importance. By far the number one reason explaining this trend is that white collar industries hired aggressively in 2020 to 2022, when pandemic-era digital growth was super strong. There were also Great Resignation fears, which led companies to overcompensate and offer very attractive packages: get people in the door, because we're worried about losing our workforce.
Now that the pandemic period is over, the economy is trying to correct for this, and a lot of employers are not firing people but going into what's called a no-hire, no-fire phase. They're saying: we need to slow down here; we have too many people, but most of us don't want to do mass layoffs, because those people might be useful in the future, so let's do no hire, no fire. That's how you get this unusual situation where unemployment is actually pretty good but new job growth is low. All right. The
second cause mentioned by economists is higher interest rates. They started going up in 2022 to offset the inflation caused by COVID-era stimulus, and that slows down business expansion. That's economics 101. The third cause is
global uncertainty: especially in the American context, with tariffs, upheaval in the educational world, and now global wars. It's an uncertain time, so a lot of businesses are saying, let's just wait and see. We're not sounding alarm bells or cutting back hard like we would in a strong recession, but let's be careful about hiring right now. All right. So, let's
return now to that Times op-ed. Surely it says that this is what explains the cooling, it is what it is, and hopefully things will get better. Let's read what they actually say instead: "Many companies went on hiring sprees during the pandemic and the slowdown is perhaps just the inevitable adjustment." All right, so far so good. Are we going to leave it there? Nope. Here's what comes next. But it is
happening against the backdrop of the
generative AI revolution and fears that
vast numbers of knowledge workers will
soon be evicted from their cubicles
replaced by machines. This is kind of a remarkable statement, because it's vibe reporting that transparently acknowledges it's vibe reporting. They're saying: there are good explanations for this, but this other thing happening now makes us afraid, so let's just pretend they're connected. Even though we have other explanations, it's directionally aligned with this other fear we have, so why don't we just put them together? What
is the main evidence cited in this op-ed for these fears? I'll quote here: "That the people selling the artificial intelligence are among those sounding the most ominous warnings about its potential fallout is notable. Some of them are prone to bombastic claims, but it's hard to see how spooking the public serves their interest. It might be wise to take their predictions at face value and assume that AI is indeed going to devour a lot of white collar jobs." Again, this is the
appeal to biased authority. It is not
hard to see why the CEOs of the
companies selling this technology like stories that make it the most powerful, most important technology of the last 200 years. Of course they want that story out there, because without it, the question again becomes: how are you going to generate $300 billion in revenue in the next two years? They don't want that question, so they've been spouting these things for the last five years. I don't know where this idea comes from, that we need to take at face value what the owners of the technologies say about what their technology is going to do. We shouldn't take them at face value at all. We should be highly suspicious of them. All right. So
anyways, this article also goes on to look at a lot of things. It's not a bad article, but again we have this sort of vibe reporting: mention stuff that's happening that's directionally aligned with the fear, then mention the fear, and then justify the fear by saying, look, the CEOs of these companies are the ones sounding the alarm; why would they sound the alarm if it wasn't real? All right,
let me get to the third article, which
is the one that spooked the stock
market. And this will be the final example I point out before getting to some stronger responses. This article was called "The 2028 Global Intelligence Crisis: A Thought Exercise in Financial History from the Future." It was published on Substack by a small financial services firm called Catrini Research. Now, right off the bat, if you read this Substack piece, the authors are clear that it is a thought experiment and not a prediction.
And you'll hear that the authors have been interviewed a lot in the aftermath of this article going viral and spooking people, and they're really leaning into this: it was just a thought experiment, I was writing fan fiction, why are people taking this so seriously? But in that same introduction, they go on to say they hope reading this leaves you "more prepared for potential left tail risk as AI makes the economy increasingly weird." So clearly they're saying this is a possibility: not something that will definitely happen, but something on the table that we need to worry about. I don't think they get off the hook by saying, "Hey, we said this is not a prediction," when they also said to pay attention so you're prepared for what might come.
I'm not a linguist, but that kind of
sounds like the definition of a
prediction. All right. So, what does this article actually say? Well, it's written in the style of World War Z. That is, it's written like a dispatch, a financial report like these firms write, but from the year 2028, reflecting on the dire circumstances of that moment and how the economy got there. It's told in this fake future-retrospective style, which is a very powerful style. Let me read a quote from early in this fake dispatch from the future.
Two years, that's all it took to get
from contained and sector specific to an
economy that no longer resembles the one
any of us grew up in. This quarter's
macro memo is our attempt to reconstruct
the sequence, a post-mortem on the
precrisis economy. It then lays out a scenario that starts right about now: there are layoffs, but we're happy about productivity booms, and the stock market keeps going up until about the fall of 2026. Then, as automation continues, cyclically reinforcing negative feedback loops emerge, and the economy crashes the next year, in November 2027. And, you know, we're back to garbage can fires and knowledge workers having to eat their dogs. All
right, this was a very effective article. It spread really far for two reasons. One, the World War Z style of storytelling, telling a story as if it already happened and you're looking back on it, is very emotionally engaging; it presses fear buttons much harder than straightforward analysis or prognostication. And two, there's the vibe reporting trick we've seen in the other two examples: they peg their fake scenario to something real that's happening right now. The story begins with layoffs in the tech sector in 2026, and there are layoffs happening right now. Of course, as I've covered in this episode, in the last episode, and ad nauseam elsewhere, the layoffs in the tech industry started a few years ago in response to pandemic overhiring. But whatever. When you peg a story that ends somewhere fantastical and terrible to something happening right now, the mind puts it on a reality trajectory, and that makes it much more believable. So that went viral, and people said it had to do with, not a collapse, but a minor dip in the S&P 500. Other
commentators have said there are a lot of factors behind that temporary dip in the S&P 500, but it got a lot of attention, especially in the financial world. All right. So, how
seriously should we take these scenarios? I've talked about some of the bad reporting techniques in these articles, but that doesn't a priori mean they're wrong. So how seriously should we take this economic doom? Well, I have to say, they're very anxiety-provoking.
I don't like dystopian fiction. I read World War Z and really didn't like it. I don't like zombie movies, and dystopian collapse-of-society tales press a lot of buttons for me. So even though I'm someone who knows a lot about AI and am a critic of hype, these were distressing for me. So, I can
only imagine how much distress these
type of articles are causing for the
millions of people that are reading
these in major publications. So, how seriously should we take them? Let me tell you what made me feel better, and hopefully it'll make you feel better too. In the wake of the Catrini article, which spread through the financial world and might have had an actual impact on the stock market,
professional economists and global macro strategists, people whose goal is not engagement or driving the conversation but making money off accurate understandings of what's likely to happen in the economy, came out of the woodwork and said: enough. These are ghost stories, and we have no reason to believe they're true. Hearing from these economists, I have to say, made me feel a little better. I'm going to give you some quotes, and hopefully they'll make you feel a little better as well. The
New York Times, to their credit, published an article called "Bleak Research Report Stokes AI Debate on Wall Street," written by a financial reporter, and it quotes some serious economists who were not that impressed by the Catrini article. Let me read you two quotes. Here's one.
The argument leans heavily on narrative
and emotion rather than hard evidence.
Jim Reid, a strategist at Deutsche Bank, said of the report. "That doesn't mean it will ultimately be wrong," he added, "but the vibes-to-substance ratio is undeniably high." All right, here's
another quote. On Tuesday, Christopher Waller, a governor on the Fed board, noted that he had not read the Catrini report "deeply," but pushed back on the broader idea that AI will lead to a rapid rise in unemployment as technology displaces white collar workers. "I don't think that is going to happen," Mr. Waller said, adding that he is not a doom and gloomer like that report was. I think my favorite response, however,
came from Citadel Securities. A global macro strategy analyst there named Frank Fle put out a report in the aftermath of the Catrini article with a sarcastic title: "The 2026 Global Intelligence Crisis." Remember, the Catrini report was "The 2028 Global Intelligence Crisis," as in, look how everything has gone wrong by 2028. The intelligence crisis he's referring to in 2026 is people believing these types of stories. He opens with a faux description of our current situation, and that faux opening sticks in the dagger with the following.
Despite the macroeconomic community
struggling to forecast two-month forward
payroll growth with any reliable
accuracy, the forward path of labor
destruction can apparently be inferred
with significant certainty from a
hypothetical scenario posted on
Substack. He's making fun of people in his community who took that Substack post seriously. He then proceeds to explain, in a semi-accessible way, the kinds of things global macro financial analysts look at, especially when it comes to technological disruption, and why they don't see signs of a major calamity coming and aren't particularly worried about a collapse of the economy. I'm going to read a few of these quotes to give you a sense of the types of things covered in this report.
Number one, we would posit that if AI
represents imminent displacement risk,
the real-time population data would show
an inflection upwards in the daily use
of AI for work. The data seems
unexpectedly stable and presents little
evidence of any imminent displacement.
Right? So there's lots of discussion about this, but they're looking at real data, from the St. Louis Fed, and they see no rapid uptake in AI use of the kind the news media would have you believe. Second quote: the current debate around
artificial intelligence conflates the recursive potential of the technology with expectations of recursive economic deployment. Technological diffusion has historically followed an S-curve. Early adoption is slow and expensive. Growth accelerates as costs fall and complementary infrastructure develops. Eventually, saturation sets in and the marginal adopter is less productive or less profitable, which causes growth to decelerate. I'm seeing this argument
from a lot of professional analysts of technological disruption. They say we always make the exact same mistake.
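To make that extrapolation mistake concrete, here's a minimal sketch with toy numbers I made up for illustration (nothing here comes from the Citadel report): fit pure exponential growth to the early, fast-looking phase of a logistic S-curve, then compare the two paths further out.

```python
import math

def logistic(t, K=100.0, r=1.0, t0=6.0):
    """S-curve adoption: slow start, rapid middle, saturation near K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Early observations (t = 0..4): the slow-then-accelerating phase.
early = [(t, logistic(t)) for t in range(5)]

# The naive move: fit pure exponential growth y = a * exp(b*t) to those
# early points (log-linear least squares) and extrapolate it forward.
n = len(early)
sx = sum(t for t, _ in early)
sy = sum(math.log(y) for _, y in early)
sxx = sum(t * t for t, _ in early)
sxy = sum(t * math.log(y) for t, y in early)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = math.exp((sy - b * sx) / n)

for t in (8, 12, 16):
    s_curve = logistic(t)               # levels off near the ceiling K
    extrapolated = a * math.exp(b * t)  # keeps compounding without limit
    print(t, round(s_curve, 1), round(extrapolated, 1))
```

The two curves are nearly indistinguishable in the early data, but the exponential fit diverges by orders of magnitude once the S-curve saturates, which is exactly the gap between "extrapolate the speed-up forever" and how diffusion has historically behaved.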
Adoption starts slow, then you get a period of speed-up, and we assume the speed-up will go on forever, so let's keep extrapolating that curve out. And if we keep extrapolating, we get collapse, or singularity, or whatever the endpoint is supposed to be. But this is never what happens. It S-curves: growth rises, and then other factors constrain it. It goes slower than you think, and there's time to adjust. They see no reason to believe this time would be different. All right, let me read another quote here. Displacing
white collar work would require orders
of magnitude more compute intensity than
the current level of utilization. If
automation expands rapidly, demand for
compute definitionally rises, pushing up
its marginal cost. If the marginal cost
of compute rises above the marginal cost
of human labor for certain tasks,
substitution will not occur, creating a
natural economic boundary. We don't have nearly enough compute for these scenarios, and as they're saying, as you try to build out compute for more and more uses, the mismatch between demand and actual supply drives up the cost; as the cost goes up, it drives demand back down. We're already seeing this in the one sector where, after five years of work, we're finally seeing tools that are really catching an industry's interest, the best-case scenario for AI: computer programming. All of the
evidence I can find right now seems to imply that these companies are selling the compute for these programming agents at a significant loss, because they're fighting for market share. But remember, they have huge debt. When they actually have to make a profit off of this, and prices get adjusted to the reality of the expense the AI companies are incurring, you'll probably see a real moderation in how much we use AI for programming. Is it really worth $2,000 a month for an individual? $5,000? It's going to be interesting, and that's just this one first use case, which will be worth watching. They
also say, quote, "Moreover, there's
little evidence of AI disruption in
labor market data as of today. In fact,
the forward-looking components of our
labor market tracking have improved
recently, though huge mismatch between
what the financial analysts are seeing
and what the oped writers are
hypothesizing. The evidence of the
financial analyst is their decades of
experience of trying to understand the
labor market and technological
disruption. the evidence of the article
in oped writers.
Amazon laid off people and Dario Amade
says his technology is the most powerful
thing ever. All right, let me read the
conclusion from this Citadel Securities
piece. For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, experience near-total labor
substitution, no fiscal response,
negligible investment absorption, and
unconstrained scaling of compute. It is
also worth recalling that over the past
century, successive waves of
technological change have not produced
runaway exponential growth, nor have
they rendered labor obsolete. Instead,
they have been just sufficient to keep
long-term trend growth in advanced
economies near 2%. Today's secular
forces of aging population, climate
change, and deglobalization exert
downward pressure on potential growth
and productivity. Perhaps AI is just
enough to offset these headwinds. So they're saying, and I think this is actually pretty optimistic, that historically the reality of major disruptive technological change has been just enough to offset all sorts of negative trends and keep at least some growth happening in the economy. And that's what they're predicting from AI: we have lots of negative growth forces to contend with over the next couple of decades that will pull down the economy, and hopefully we'll get enough out of AI to stave those off and still get at least some economic growth. That is a very different vision. "AI is the latest technological innovation to stave off degrowth" is a completely different argument than "this is the one technology in history where the S-curve doesn't happen; it's going to go exponential and crash the economy." So they end on a positive note. All right, let's step back. First of all, I want to say: the economists make me feel better.
It doesn't necessarily mean, of course, that they're right; maybe all these factors will come together to destroy the economy. But I do like the fact that the economists aren't that worried about it. And I think we see this reflected in the stock market. If serious investors really believed that massive decline was starting in October of 2026 and the economy was going to crash in the fall of 2027, the reaction would make the COVID dip of 2020 look like a minor correction. Instead, the reactions are small. Investors are actually pessimistic on the frontier AI companies because they think they're spending too much money. So the market doesn't buy the AI tech CEO stories that their technology is going to automate all work, which would make them the most valuable companies in the history of companies. We see more moderate
bets against specific sectors where they expect practical disruption, like the SaaS sector, and even those are modest. And we're seeing much bigger reactions to things like the cost of oil going up toward $100 a barrel; that caused a far bigger impact on the stock market than these last two months of economy-collapse scenarios. So to me, that's reassuring. But it doesn't mean there won't be an impact; the economists could be wrong, or maybe the impact will just be smaller.
But let's put that on the table now. Let's say, okay, maybe the economy isn't going to collapse, so I don't have to learn how to light a garbage can fire and become a pet masseuse. But maybe it's going to be a hard run: there's going to be economic disruption, more so than with almost any other technology in the past. Let's say that's the case. It could be true; I hope not, but it could be. Even then, AI doomsday reporting isn't helping.
What I'm seeing is that these AI doomsday articles, where writers try to one-up each other on how prescient they are about how bad things are going to get, prevent us from responding in effective ways. If we instead treat AI like a normal technology, and respond with our normal tools when we see it doing things we would normally identify as problems to correct, I think we can make much better progress in containing, shaping, and directing the AI revolution than by falling back on these massive dystopian World War Z tales.
This fallback on doomsday writing is
letting the AI companies off the hook.
Look at what I covered last week.
Jack Dorsey
uh, negligently goes off and makes these
huge acquisitions, sort of in an
impulsive fashion, throughout the
pandemic, of these crypto and blockchain
companies. They don't go well. So he
then impulsively fires half of his
workforce, because he can't do anything
in measured increments. Everything he
does is drastic, right?
But he comes out and says, "This is just
the first sign of the AI economic
apocalypse. I, for one, am learning how
to make trash can fires, because I'm not
only going to be a pet masseuse, I may
have to eat the dogs, because there'll
be no money left in the world." Because
he leaned into the doomsday narrative,
what was the coverage of the Block
layoffs? Reporters treated the layoffs
as evidence of the economic doomsday
narrative. That's what they focused on.
In fact, in one of the articles I talked
about, the Block layoffs are cited as
evidence of what's coming. The right way
to treat that was: yeah, sure, and I'm
sure you have a perpetual motion machine
and you can fly. Back to the point: what
happened to those crypto investments?
Why did you have to lay off that many
people? Who did you lay off? Wait a
second, most of these jobs have nothing
to do with AI-automatable roles. We
would hold his feet to the fire: you're
being negligent and impulsive. But
instead, we're like, "Oh, yeah. Thank
you, Cassandra, for helping us
understand what's coming."
The same thing has happened with these
AI CEOs. They find that the more
dramatic and fearful the thing they say,
the more attention turns away from
what's actually happening. Journalists
used to severely distrust billionaire
tech CEOs, but not when it comes to this
issue. We look to them as if they are
guiding us to understand what's
happening with this technology. These
CEOs have been saying crazy stuff for
the last four years, and they keep
changing what it is, en masse.
They were all talking about
superintelligence and the machines
getting out of control, like an alien
mind. They were all talking about that,
and at some point they all shifted to
something else, and now they've shifted
to the economy collapsing. They just say
stuff, and it's entirely in their favor,
because, again, if your technology
automates all jobs, well, where am I
going to put my money? The only place
left to put my money is in the three
companies that are going to run all the
jobs. So, I think doomsday reporting
prevents us from actually responding. It
prevents us from saying, when Dario says
50% of white collar jobs are going to be
gone: uh-huh, uh-huh, you need to make
$300 billion somehow in the next four
years to get anywhere near
profitability. How are you doing that?
Right? That's the question we could be
asking.
So I think that we don't need to ignore
AI or its impact on jobs. But we need to
cover it like a normal technology, so we
can deploy the type of normal things we
would do when we see disruption or
changes, or when we see AI used as cover
for malfeasance or impulsiveness or
whatever is going on. And I hope we move
past this. By the time this comes out,
we'll probably have moved on to
something else. I don't know, AI birds
are going to spy on us, whatever it is.
And I hope so, because I think this AI
doomsday reporting is not only stressing
people like me out, it's preventing us
from actually responding to the real
impacts of this technology in a way that
could really matter. All right, enough
of my sermon.
Uh, hopefully some of this makes you
feel a little bit better this week.
Uh, we'll be back probably next
week. I'm doing this on Thursdays, maybe
not every Thursday, but if there's
something to talk about, I'll be back
next Thursday. Remember, take AI
seriously, but not everything that's
written about it. See you next time.
Hey, if you like this video, I think
you'll really like this one as well.