Don't Fall For the Stock Market Hype. The $7,000 Raise AI Is Giving You (That Nobody Mentions)
A fictional recession just crashed the
stock market. And the real story is what
nobody's going to write about tomorrow.
So here's what happened. A Substack post
written as speculative fiction from 2028
wiped out over a hundred billion dollars
in market cap on Monday. IBM cratered 13%, its worst day in 25 years, just because Anthropic published a blog post about COBOL. But that wasn't the Substack
post. The Substack post was by
investment research firm Citrini, which
wrote a highly regarded piece about how
bad things could get if labor
replacement driven by AI really takes
hold due to massive AI capability gains
over the next couple of years. Look,
you've seen me cover the individual
sell-offs. This video is different
because the doom narrative that is
driving all of them, there's seven I'm
counting now, it's not been about the
technology, it's been about the
economics. And so I'm going to take a
little bit of time in this video to lay
out the economics of the bear case and
the bull case, the doomer case and the
boomer case for AI. And then I'm going
to tell you the thing that neither of
them is talking about. So we'll get to
that in a second. So first the doom meme
and why it hits so hard. So let me steelman this properly, because it does
deserve it. Citrini Research and Olab Shaw wrote a piece called "The 2028 Global Intelligence Crisis," and they
framed it as a fictional macro memo from
2 years in the future. The scenario is
simple. AI capabilities keep
compounding. Companies rationally cut
white collar headcount to protect
margins. Displaced workers spend less.
The consumption hit cascades through mortgages and credit, ultimately contaminating the entire financial system. And so in their fictional
scenario, the S&P drops 38% from its
2026 highs. Unemployment hits 10.2% and
things are very, very bad. I'll cut it
short here. The mechanism they describe
is consistent. It's well constructed and
it's pretty easy to follow even if you
don't have a degree in economics. White
collar workers make up about half of US
employment and drive three-quarters of discretionary consumer spending, the stuff you spend money on because you want to.
The top 20% of earners account for about
65% of consumer spending. These are the
people who buy second homes, who buy
cars, who buy vacations, who buy private
school tuition. If AI structurally
impairs their earning power, the
consumption math gets really ugly for
the whole economy quickly. A 2% decline
in white collar employment could easily
translate into double that, like a 4%
hit on discretionary spending. And so,
Citrini describes what they call an
intelligence displacement spiral. I
would call it a negative feedback loop.
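The consumption arithmetic above, a 2% white-collar employment decline turning into roughly a 4% hit on discretionary spending, can be sketched in a few lines. The parameters are illustrative guesses consistent with the figures in the piece, not a calibrated model:

```python
# Sketch of the pass-through from white-collar job losses to
# discretionary spending. Two channels: displaced workers zero out
# their discretionary spend, and still-employed workers trim theirs
# out of job insecurity. All parameters are illustrative assumptions.

WC_SHARE_OF_DISCRETIONARY = 0.75  # white collar drives ~3/4 of discretionary spend

def discretionary_hit(employment_decline: float,
                      precautionary_cut: float = 0.035) -> float:
    # Channel 1: displaced workers stop discretionary spending entirely.
    displaced = employment_decline * WC_SHARE_OF_DISCRETIONARY
    # Channel 2: workers who keep their jobs cut back a few percent.
    precautionary = ((1 - employment_decline)
                     * WC_SHARE_OF_DISCRETIONARY * precautionary_cut)
    return displaced + precautionary

print(f"{discretionary_hit(0.02):.1%}")  # -> 4.1%
```

With the fear channel set to zero, the hit is only 1.5%; the doubling in the scenario comes from everyone else tightening their belts too, which is exactly the contagion intuition the memo leans on.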
Basically, what they see is AI gets
better, companies cut payroll, savings
go into more AI, AI gets better, and so
on. There is no natural brake on the
spiral. And the financial contagion
chain is plausible, too. It's certainly
familiar to anyone who lived through
2008, as I did. Essentially, once you
start to get into the financial
institutions that own AI-vulnerable
companies, there is a risk of contagion
because of what they're linked to in the
global credit system. And so in this
case, the mechanism is private credit
because private credit grew from a
trillion dollars in 2015 to over $2.5
trillion by 2026, as private credit over that decade picked up and rolled up a bunch of SaaS companies at valuations that assumed perpetual revenue growth. I was part of some of
those exits. I've seen them at work.
Those assumptions are dying in real time
and that's been part of the sell-off
story. I think the most haunting line in
the piece is this one. In 2008, the
loans were bad on day one. In 2028, the
loans were good on day one. The world
just changed after the loans were
written. I get why this went viral. I
get why it was shared everywhere. I get
why the markets convulsed. The scenario
is vivid. It's simple. It's well-argued.
It's emotionally resonant. It's
plausible. It activates the same dread
that made the big short a cultural
touchstone: the feeling that the system is fragile, that nobody in charge sees what's coming, and that the smart money is already headed for the exits. But
the thing about doom narratives is that
they are dramatically more viral, not
due to their nuanced analysis, but due
to one of the most robust findings in
human psychology. I'm referring to
negativity bias. We humans, all of us,
are evolutionarily wired to pay
disproportionate attention to threats. A
headline that says "AI could crash the economy" generates way more engagement, like 10 to 50 times more, than a headline that says "AI-driven deflation could boost real purchasing power for the median household." You're asleep already. Both
headlines describe potential futures.
One of them is going to get millions of
views and the other one won't. And
that's what I want to talk about because
the asymmetry right now is distorting
the information environment that we are all using to make career and investment decisions. The
doom narrative is not wrong because it
went viral. The fact that it went viral
while the countervailing evidence barely
registers should make you suspicious
about whether you're getting the full
picture because you're not. I'm going to
give you two different bull-case
arguments. And yes, they both use
economics, but we're going to simplify
it so it actually works. Alex Imas is an economist at the University of Chicago Booth School of Business. He read the same intuitive arguments about AI-driven
demand collapse that Citrini formalized
as fiction and he actually went out and
built a model to figure out what would
happen. I'm going to drop a lot of the
modeling stuff and cut straight to the
chase. When you model the actual conditions that Citrini describes, where labor share in the economy declines dramatically and quickly, where consumption doesn't come back after prices decline, where wealthy capital owners who own data centers don't end up spending more, where interest rates hit the floor and can't drop further, and where there's no policy response from the government, yeah, you kind of get what Citrini came up with. But what Alex argues is that if
you have all of those in a row, the idea
that you have no policy response is kind
of laughable. And as someone who lived
through 2008 with a divided government
where everyone was fighting tooth and
nail, when things get bad enough, as bad as the Citrini memo argues, yes, government does end up
responding. And the reason why is
entirely selfish. They want votes. They
realize they're in trouble if they don't
get votes. And so they figure out a way
to get it done. But there's a lot of
other reasons to suspect some of the
other assumptions that the Citrini memo
just kind of handwaves aside. And I
think Alex is making a good point. I'll
give you a couple of examples. One of
the things that the Catrini memo doesn't
take fully aboard is the idea that we
might consume more
if we have lower prices in the economy.
That's actually pretty reasonable.
Everywhere you look, you see evidence of
Jieven's paradox. So that's the policy
piece. The other things that the Citrini memo talks about, like labor replacement,
like prices falling and people not
buying more stuff, at least not at
scale. Those are things that might be
individually plausible, but it's sort of
difficult to add them all together and
assume that all of them are correct at
once to make sort of a perfect doom
scenario. One example is the consumption
side. Let's imagine for a moment that
prices are going down in the economy
because AI is making things cheaper. If
that's the case, then people are going
to probably buy more stuff. Now, they may not buy 10 TVs because the price of TVs goes down, but net-net in the economy overall, if people end up having more purchasing power, they're going to end up buying more stuff. And this is not just about TVs and shoes and the other hard goods that we produce
as a society. This is about services
too. And the services case is actually
worth calling out because I think it is more pointed than a lot of the bears want to acknowledge. And I have to give credit to Michael Bloke, who wrote a direct response to the Citrini piece when he saw it and made this argument really coherently. I think it deserves a
lot more attention than it's getting,
but again, it's not a doomer narrative,
so it doesn't tend to get the attention.
What he argued is that most of consumer
spending is in services. Think mortgage servicing and the process of buying a house. Think tax preparation, think insurance brokerage, think travel booking. You get
the idea. These are all tasks that AI
agents plausibly make dramatically
easier today because they're
fundamentally a function of complexity.
And so if you're sitting there and
you're like, where can AI agents impact
the economy? It's really plausible that
AI agents will impact the economy first
by making a bunch of those services
cheaper. I would argue that's more
plausible than, say, replacing all the COBOL in the ATMs, because that's something that the stock market was worried about this week. But services are really easy to replace. They're not legacy code. They don't touch core-of-the-financial-system kind of stuff. It's like, yeah, is an
agent going to be good at travel
booking? Maybe it will be, and if it is,
you'll use it. If that's true, AI agents
could plausibly compress costs by 40 to
70%. And Michael did these numbers. I'm not just making them up. And plausibly return $4,000 to $7,000 in annual gain per median household, tax-free. No legislation. Basically, we
all get more money in our pockets in the
US because AI agents are compressing the
margins of all these services. And the
point is simple. Is that money just
going to evaporate? No. People are going
to spend it. Let's say it goes into home
mortgages. Let's say you pay less for
buying a house in commission because the
services cost comes down. Well, now
you're going to put that money into
furniture, into renovations, into moving
costs. It doesn't disappear. It goes
back into the economy. There's one more
piece in Michael's scenario that I think
is worth calling out. He identifies the
ongoing high rate of business formation
in the US as significant. The Census
Bureau reported 532,000
new business applications in January of
2026 alone, up over 7% from December.
That continues a long-term trend that's
been accelerating since 2021. And
Michael reasonably supposes it's going
to continue. And what he's suggesting is
that essentially oneperson businesses
have more leverage in the economy than
they've ever had before because now they
have the skills, they have the tools,
they have radically lower overhead, and
they have more reach all thanks to AI.
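Stepping back to Michael's services arithmetic for a moment, the 40 to 70% compression yielding $4,000 to $7,000 per household can be sketched as back-of-the-envelope math. The category spends below are hypothetical placeholders that sum to $10,000, not figures from his piece:

```python
# Hypothetical annual household spend on agent-addressable services (USD).
# These category amounts are illustrative assumptions, not cited data.
annual_service_spend = {
    "tax preparation": 400,
    "insurance brokerage fees": 1600,
    "travel booking and fees": 1000,
    "real estate commission (amortized)": 4500,
    "misc. professional services": 2500,
}

def annual_savings(compression: float) -> float:
    """Yearly savings if AI agents compress each service's cost by
    the given fraction."""
    return sum(cost * compression for cost in annual_service_spend.values())

low, high = annual_savings(0.40), annual_savings(0.70)
print(f"${low:,.0f} to ${high:,.0f} per household per year")
# -> $4,000 to $7,000 per household per year
```

The mechanism doesn't depend on the exact categories: whatever slice of service spend agents can address, compressing its cost returns that money to households, which is the "raise nobody mentions."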
And this is not just theoretical. I personally know people in my life who have gone from not coding at all to "I'm setting up a business and I'm making real money from it," and they feel so motivated they're starting a formal business out of it. This is not just one story that I'm cherry-picking. I know more than I can count on two hands.
There's a lot of folks out there who are
finding that the conditions the AI
revolution is bringing are ideal for
people who want to strike out on their
own. Of course, it's easy to hear the
bears responding. This time it's
different because if AI is a general
intelligence, it's going to replace
everything at once. So, where will these
entrepreneurs go? Sure, that's a real
argument, and I take it seriously, but
this brings me to the part of the video
that nobody else is talking about.
Whether or not AI displaces labor the way the bears describe depends on the speed of labor displacement outrunning
the speed of technical adaptation. And I
think that is an incredibly
underrepresented part of this
conversation. And if that made no sense
to you, don't worry. We're going to get
into it. Fundamentally, both doomer and
boomer narratives assume that AI
capabilities translate incredibly
rapidly into economic impact. The doom
narrative assumes that everyone's
getting fired. The boom narrative
assumes really rapid technical
adaptation across society. Both assume
the conversion rate from AI can
technically do this to the economy has
reorganized around AI is incredibly
fast. It's not. And the reason it isn't
is the most underrepresented part of
this conversation because capabilities
are not the same as deployment.
Deployment is not the same as adoption.
Adoption is not the same as deep
integration. Deep integration on its own
is still not the same as economic
impact. Social inertia is a massive
force in the economy and it is
dramatically underrepresented in every
AI analysis I've read, bear or bull.
This is what I mean concretely. I'm
going to name kinds of inertia because I
don't want to just toss off a line and say it's all about inertia. No, we're
going to get specific. Regulatory
inertia. Financial services firms that
want to use AI for compliance work need
approval from regulators who haven't
finished writing the rules. Health care
organizations need to navigate HIPAA and
FDA clearance and institutional review
boards. Government agencies run
procurement cycles measured in years,
not quarters. The COBOL systems that Anthropic is talking about modernizing run an estimated 95% of ATM transactions in the US. Hundreds of billions of lines of COBOL run in production today,
powering critical systems across finance
and airlines and the government. Nobody
is migrating those to a new codebase
just because a startup published a blog
post, even if that startup is Anthropic. IBM's own CEO, Arvind Krishna, said last year that their mainframe AI coding assistant has gotten wide adoption because it understands existing COBOL code bases and decides what to modernize across those code bases. It's not
replacing them, it's understanding them.
The distinction matters. IBM stock
dropping 13% doesn't change the fact
that their client switching costs are
measured in years of institutional pain.
Not in API calls. But we're not done.
What about organizational inertia? The
Citrini scenario assumes companies cut
headcount rationally and rapidly as AI
capabilities improve. Companies are not
rational actors. In practice, large
organizations don't work that way.
Headcount decisions are filtered through
HR policies, through employment law,
through union agreements, through
severance obligations, through
institutional knowledge preservation,
through management politics, and the
simple fact that most executives have
never managed an AI transition and do
not know what they do not know. The gap between "Claude can technically do the parts of this job that matter" and "we've reorganized our workflows, retrained our remaining staff, built QA processes for AI output, and confidently reduced headcount." That's an enormous gap. I've seen firsthand how
long it can take to go from AI strategy
to pilot program. It takes so long. I
have seen multiple cases where big
company pilot programs are abandoned
because the very piece of AI capability
that they worked on is no longer
relevant because AI has moved so fast
past it. You know what a good example of
that is? RAG. Everyone was excited about RAG in early 2025. You hear a lot less about it now because agentic search has gotten better and because context windows have gotten larger. And all of the people that spent an inordinate amount of time fine-tuning RAG systems for their wikis are pretty much regretting it. Companies
move slowly. Here's another one.
Cultural inertia. Yes, that's different
from organizational inertia. Most people
still don't use AI in their daily work.
I know lots of those people. They are my
friends. Yes, I have friends who don't
use AI. The adoption curves are real,
but they're way, way slower for most
people than the capability curve on AI.
When Tobi Lütke, one of the most technically fluent CEOs on the planet, running a company whose entire business is tech, has to issue a company-wide mandate in April 2025 saying reflexive AI usage is now the baseline at Shopify, when he has to build it into performance reviews in order to get it adopted, that tells you something important about how slowly even high-performing organizations change
their cultural behaviors. Toby was
really explicit about this on the
Acquired podcast. He said using AI well
is a skill that needs to be carefully
learned by using it a lot. He's right.
He talked about using what he calls a
Toby Eval. Like he applies this first to
himself where he has a personal folder
of prompts he runs against every single
new model release, systematically probing capabilities as if he were a QA engineer running unit tests. And he says that the
skill of learning to prompt AI well, of
learning to give AI all the context it
needs to write a really coherent answer
without additional search has made him
better at everything else in his job. I
actually will agree with that as someone
who's worked a lot on prompting. I feel
like I'm a much clearer communicator
because I am a prompter. But regardless,
step back for a minute with me. Toby is Toby. He is a rare, deeply AI-fluent CEO. Do
you think all of the personal work that
Toby put in, all of the cultural work
that Toby put in is something that a
mid-market manufacturing firm in the US
is going to replicate? Is that CEO going
to do what Toby did? No. Now multiply
that mid-market manufacturing firm times
a million. Look at all the other
businesses that are led by leaders who
are not as AI fluent as Toby. Cultural
inertia is real. The last inertia force
I'll call out is trust inertia.
Enterprises do not and should not trust
AI output by default. And the cost of
figuring out how you formally scale
verification systems is really high.
Unless you're willing to put in the
capital to invest in verification as a
competency, you're not going to get to
the point where you trust AI enough to
let it do the kind of high-leverage work that Citrini needs you to do for their
memo to come true. And most
organizations don't have the capital for
that kind of investment. And most of
them frankly don't have the stomach
because moving your workforce from "I have to do this work" to "I have a new skill, and it's verifying the AI at scale" is really, really hard. Figuring out how
to do that in a way that helps you go
faster is even harder. And all along the
way you have to build institutional
trust to deploy that AI at scale. You
have to show that you have the
appropriate guardrails, the appropriate
audit trails, the appropriate human
oversight. That takes time that no
amount of benchmark improvements can
compress. Look, these four forces don't
mean that AI is never going to transform
the economy. All they mean is that it
won't transform the economy on the
timeline the stock market is pricing,
frankly, in either direction. The
doomers require a speed of labor
displacement that social inertia simply
won't permit. And the boomers require a
speed of adoption and integration that
organizational reality won't permit.
What actually happens is slower than
both, messier than both, and far more
unevenly distributed than either
narrative allows. Here's how I think
about it. Imagine two curves on the same
chart. The first curve is really
familiar to you if you listen to me.
It's AI capability. It goes up really
fast. Model intelligence, reasoning
depth, agentic endurance, you name it. I
can tell you any number of numbers and
they all go up really fast. Gemini
doubled its reasoning in just three
months. There's an example. The second
curve is societal dissipation. And we
never talk about it and we should. The
rate at which those AI capabilities
actually permeate the economy and change
how work gets done, how money flows, how
institutions operate. This curve is way,
way flatter. It's governed by the
inertia forces I talked about. It does
compound over time, but it starts from a
really low base and it goes really
slowly. The gap between these two
curves, the really fast exponential
curve for AI and the really slow
societal dissipation curve is where we
all live today. And it's the gap that
explains almost everything that seems
confusing about this current moment. It
explains why AI capabilities are
stunning and the economic disruption is
still modest. It explains why the stock
market frankly cannot make up its mind
because it's simultaneously pricing
incredible return on investment for AI
capabilities and also pricing incredible
disaster on the other hand. It explains
why the doom narrative and the boom
narrative both sound compelling. It
explains why a blog post can crash a
stock. But there's something much more important that these two curves do than all of that narrative explanation, and that is reveal a specific and very large economic opportunity. That opportunity exists for
you and for me and for a bunch of
businesses precisely because this gap is
wide. If AI capabilities were
irrelevant, there would be no advantage
to adoption. Guys, do we see no
advantage to adoption? No, we do not.
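The two-curve picture can be made concrete with a toy compounding model. The growth rates below are illustrative assumptions, not measurements; the only point is the shape: when one curve compounds much faster than the other from a higher base, the gap between them widens year after year:

```python
# Toy model of the capability curve versus the societal dissipation
# curve. Both rates are assumed purely for illustration.
CAPABILITY_GROWTH = 0.60   # assumed: capability compounds 60%/year
DISSIPATION_GROWTH = 0.15  # assumed: integration compounds 15%/year

capability, integrated = 1.0, 0.2  # integration starts from a low base

for year in range(1, 6):
    capability *= 1 + CAPABILITY_GROWTH
    integrated *= 1 + DISSIPATION_GROWTH
    print(f"year {year}: capability {capability:6.2f}, "
          f"integrated {integrated:4.2f}, gap {capability - integrated:6.2f}")
```

Under these assumed rates, the gap grows every single year of the run. That widening region between the curves is the asymmetric-returns zone the rest of this argument is about.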
It's the gap. It's the fact that the
tools are powerful but very unevenly
distributed, understood by very few, and
integrated by even fewer. That's the gap
that creates asymmetric economic returns
for you and me and for anyone who wants
to invest seriously in their AI
capability set. And that's true for
companies, not just for people. The
people and firms operating at the
capability frontier while the rest of
the economy moves at the dissipation
rate are capturing an outsized share of
economic benefit. And because social
inertia is so strong, the advantage that
we're getting here does not erode very
quickly. It persists. It compounds. And
it may persist and compound for a whole
lot longer than a lot of the models
predict because the models are not
really accounting for how slowly
societies tend to change. This is not the same as saying "learn AI and you'll be fine," by the way. It's more specific and it's more structural than that. The
capability dissipation gap means that
the economic rewards for early
aggressive adoption are higher and more
persistent than anyone is currently
modeling. The bears assume the gap
closes really fast with rapid labor
displacement. The bulls assume the gap
closes really fast with rapid technical
adaptation. Both are wrong about the
speed. The gap stays wide, and while it stays wide, the people on the right side of it accumulate advantages that
compound with every single model
release. Now the implications of this
gap play out really differently
depending on your scale. Frankly, large
firms are positioned to win on every
dimension except one and that may be the
one that matters the most. Start with the capital advantage. Large firms have
the money to spend 20 grand a month on
an AI agent if that's what OpenAI wants
to charge. They have data advantages.
They have decades of proprietary
information. They have distribution
advantages. They have existing customer
relationships that create deployment
surface area. And they have the budget
for extensive verification and
compliance infrastructure if that's what
they need. But they carry the full weight of organizational inertia. Every new AI workflow has to survive procurement, legal review, security audit, you name it. So it can take 18 months to go from "this tool will save us $10 million a year" to actually saving the money. The only exception to this is
a highly involved founder like Toby.
Those are the wild cards in the pack. If
you have a really aggressive AI friendly
founder like Toby at a large company,
that can change. Small firms and
individuals and that difference is
blurring. We have the opposite profile.
We lack the capital. We lack the data.
We lack the distribution. But we have
the one thing the big companies don't
and that is speed. The capability
dissipation gap creates an asymmetric
advantage for speed and for anyone who
can collapse the integration timeline.
So a solo consultant who can integrate
AI into their workflow today is
operating at the capability frontier
while their competitors are still doing
quarterly meetings. The practical heuristic is really "today." One of the
things that marks people who are AI
native is they think in terms of the
next couple of hours or get it done by
the end of the day. They are not coming
back and talking to me about we'll get
it done in two weeks. They're not coming
back and saying can we do it next month,
next quarter. And in the cases where big companies can move to that way of operating, which is an enormous cultural change that runs into cultural inertia, they have tremendous advantages. But for
everybody else who's smaller, who's
missing the capital, who's missing the
resources, they do better if they can
get on that speed train, if they can
leverage the advantage of being small. I
think Toby understands this
instinctively, and I think it's worth
looking at a case study from Shopify as
a result. Toby's mandate with AI is not "use AI when it's convenient." It's "demonstrate why AI can't do this before you're allowed to ask a human to do it." And he treats model evaluation as a personal discipline, right? He's running structured evals on his own time and growing his test harness over time. Toby also requires AI exploration
in the prototype phase of every single
project. Not because the output will be
production quality, but because even if
AI fails at the task, you now have an
eval for the next model. That last point
deserves emphasis. When Toby makes a
junior employee test their project
against an AI tool, he's not expecting
the AI to succeed. That's not the goal
here. He's building organizational
muscle memory. He's ensuring that when
the next model release drops, and it
will, his company has a pre-built
evaluation framework that immediately
reveals what's newly possible. He's
investing in the rate of dissipation
within his organization. Most other
companies are trying to run the AI race
with the same tools they brought to the cloud transition, and Toby is busy shortening the
track and focusing on how he actually
can get adoption with teeth. Toby made a
really sharp observation on the podcast
this week that stuck with me. He pointed
out that the best chess game every year
for the past 20 years has been played by
machine versus machine and nobody
watches those games. But everybody in
chess knows who Magnus Carlsen is. We
don't actually care about the chess. It
turns out we care about the humans
playing the chess. Toby sees this as the
key insight people get wrong about AI.
The tools are instruments to be played.
They're not replacements for the player.
The craft still matters. The judgment
still matters. What changes is the
ceiling of what a skilled player can
achieve. Look, if you've dug in this
far, I want you to walk away with three
things. First, please recontextualize
that stock market activity. I don't know
what role you play. Maybe you're a
passive investor. Maybe you don't invest at all. The memes are still there, and the
memes will get into tech and get into
your company and affect you. The AI
scare trade is creating mispriced assets
and organizational chaos. Some of the
companies getting hammered are going to face real disruption, but on a timeline measured in years, not in the weeks that the market is pricing.
Meanwhile, the market isn't pricing in
the buy side of any of this at all. What
do companies do with the savings from a
40% reduction in software costs? What
happens to the $42 billion that gets
redirected from real estate commissions
to home buyers? No one's investing in
that. The doom narrative just doesn't
have a place for it. It doesn't drive
clicks. Second, recontextualize those
doomer narratives, too. The bear case
for AI is built on real economic forces.
Demand-side effects of income
redistribution from workers to capital
owners, potential savings gluts. But the
conditions required for a full economic
contraction, the thing that is making
everybody panic right now are really
extreme. And when you get an economist
from the University of Chicago modeling
out these scenarios and basically saying
they are too unrealistic to hold in
practice, don't read that as
dismissiveness. It is some needed rigor.
That is a great antidote to some of the
panic going around in both investor and
tech circles today. The doom narrative
is useful as a policy warning. We should
absolutely be thinking about how we can
support job and career transitions. We
should be thinking about broader capital
ownership. But the doom narrative is not
particularly useful as a career planning
framework or as an investment thesis.
It's a meme, right? It's 10 to 50
times more viral than the counter
evidence and you should calibrate
accordingly. Third, and this is by far
the biggest one, map the capability
dissipation gap as it applies to you in
your situation. The most valuable thing
you can do right now is not learn AI in
the abstract. That's 2024 advice. That's
table stakes. The valuable thing to
figure out is where you sit relative to
the exponential curve and the flat
curve. Are you operating at the
capability frontier? Are you testing new
models regularly? Are you integrating AI
into your real workflows? Are you
building evaluation frameworks for your
domain? Or are you kind of content at the dissipation rate, where you're aware that AI exists, maybe you use it occasionally, but fundamentally you're working the same way you did two years ago? The gap
between those two positions is where
economic value is concentrating in the
next 2 or 3 years. And because social
inertia is so strong, that gap actually
isn't going to close as quickly as
people think. The person who spent the
last year building genuine AI fluency in
their domain is therefore not just
learning a tool. They're building an
asset that compounds. Every model
improvement makes that asset more
valuable, not less. Because each new
capability lands on a foundation of
practical understanding that takes real
time with the model to develop. The
career move right now is to become the
person in your organization who can walk
into a room of panicking executives, and there are a lot of panicking executives right now, and say with genuine
authority, I've tested this. Here's what
AI can actually do in our actual
workflow. Here's what it cannot do. Here
is the implementation plan. Here's the
budget and here's the timeline. That
person does not exist in most
organizations. The technical people
understand the models. The business
people understand the workflows but not
the technical side. And the consultants,
they just understand the frameworks and
talk. But nobody can cross all three.
And if you can bring all three of those
together, you have an incredibly
valuable skill set in 2026. The doom
narrative is a useful warning. The boom
narrative is a useful aspiration. We
should study that, too. Neither is a
useful plan for you or me or our
careers. Your plan, that's the one that matters. It should be specific. Map which of your problems are reasoning problems and point them at the right model. Which are effort problems? Which are coordination problems? Test which models
handle which tasks in your real workflow
like Toby does. Build the evaluation
frameworks that let you immediately
exploit each new model release. You are
trying to collapse the gap between
capability and integration in your
domain because every month that gap
stays wide is a month you're leaving
returns on the table. Stop worrying
about the doomer narrative. Do not pay
too much attention to whatever the next
investor-driven stock selloff is. I am
sure there will be another one. Pay
attention to the capability gap. Pay
attention to where AI is going and how
slowly it's actually getting adopted by
society. That gap is the greatest
generational opportunity anyone in the
workforce is going to see. That is where
you should be spending your time. Best
of luck.