Going Slower Feels Safer, But Your Domain Expertise Won't Save You Anymore. Here's What Will.
AI is collapsing futures and most of us
are missing what that really means. We
think collapsing as in destroying. That's not what I mean here. I mean collapsing as in compressing, and that's what people are missing: AI is compressing multiple distinct dimensions of our work lives into a single thread pointing at the future, and we're missing the deeper implications of that.
The first collapse is horizontal.
Engineer, product manager, marketer, analyst, designer, ops lead. These used to be distinct career paths with very distinct skill sets. They're all converging now, very quickly, into a single meta-competency: orchestrating AI agents to get work done. If you cannot do that, none of the rest of your domain knowledge is going to matter in late 2026. And yes, I don't want to lose the fact that we still have folks with 10 or 15 years of experience in these individual domains, in front-end design, in being an operational lead, in deep back-end engineering. But that experience won't carry value on its own unless you can do the AI-agent orchestration piece, certainly by late 2026 or early 2027. That is how
fast this space is developing and I
don't think most of us are ready for it.
The second collapse is temporal: it hits the leverage you thought you could build over the next five years. We've been trained to think of career ladders as steady steps: wait two or three years, get the next promotion. That timeline is compressing into months. The rate of AI capability improvement nearly doubled in the last year, and it's just going to keep going faster. Both collapses point to one conclusion: now is what matters. Not your five-year plan, not your eventual intention to get up to speed on AI, because the future keeps arriving faster. Preparation means
engagement. And I'll add one more piece
here that I think is absolutely true
across everyone who engages with AI
productively.
This is an art you learn by doing. You
do not get to learn to ride a horse by
reading a book as a friend of mine
called out. You do not get to learn to swim by sitting in a deck chair watching the ocean. You just have to get in. And that's very true of AI, because it's an experiential technology. Let me go a bit deeper into the differentiation between knowledge-work roles, because when you hear that the knowledge-work roles are collapsing, that they're all going to be the same, it feels like a big claim. It feels overhyped.
Gartner's predicting that close to half
of enterprise applications will
integrate task-specific AI agents by the
end of 2026. That's up from less than 5%
in 2025. It's absolutely exploding. It's
an eight-fold increase in just over a
year. As of 2025, 57% of companies claim to have AI agents in production. Those agents vary in competence, but the direction is unmistakable: it's exploding. What this means is that specific domain expertise is going to be mediated through these universal AI skills. The differentiation is going to be whether you can apply your marketing skills, your engineering skills, your finance skills, whatever it is, in an AI-agent-shaped way. Think about what a
product manager does today versus two
years ago. The job used to require
synthesizing customer feedback, writing
specs, coordinating with engineering,
managing stakeholders, and now
increasingly the job involves prompting models to draft specs, using AI to analyze customer data, using agents to update tickets, even using agents to build directly in production. The entire job is radically different. And that pattern
repeats across every function. Legal
teams using AI to review contracts are
compressing jobs that took weeks into
hours. Finance teams can now use Claude in Excel to build projections that used to take days. Customer success teams can run AI agents that handle 80% of initial inquiries, or 90, or 95. There is going to be a fundamental turnover of skills across every one of these job families.
What used to be 50 different
specializations is going to converge
into variations on a single theme.
Humans directing AI with good knowledge
and good software-shaped intent toward
an outcome. I've talked about software
shaped intent before. I think it's one
of the biggest skills we're missing when
we direct agents. We need to think in
terms of what agents can deliver within
the technical ecosystem they occupy.
Where is the agent's tool set? Where is
the agent's memory? Where is the agent's
workflow? When I direct the agent to do something, is it going to look software-shaped? As in, is it going to be an interface that adequately reads and writes data so that I can solve the problem? Software is leverage expressed in silicon. Fundamentally, if you know
how software works, and so much of
software is just reading and writing
data and presenting it in a way that's
useful, if you start to think in those
terms, you're going to be able to apply
the specific domain knowledge you have
in design, in finance, in customer
success, and you're going to be able to
use AI agents more effectively, even if your job isn't building software. This used to be a product-only thing or an engineering-only thing. The idea that we now work with agents is becoming
universal. And the idea that we have to
think in software terms is coming out of
the technical box. It's coming out of
engineering. It's coming out of product.
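To make that concrete, here's a purely illustrative sketch in Python. Nothing in it comes from a real agent framework; AgentContext, software_shaped, read_data, and write_data are all hypothetical names I'm inventing for the example. The point is the habit of checking, before you direct an agent, whether the task reduces to reads and writes against tools the agent actually has:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of "software-shaped intent": before directing an agent,
# know where its tools, memory, and workflow live, and whether the task can be
# expressed as reads and writes through the interfaces it really has.

@dataclass
class AgentContext:
    tools: dict[str, Callable[..., str]]             # what the agent can actually do
    memory: list[str] = field(default_factory=list)  # what it carries between steps

def software_shaped(task: str, ctx: AgentContext) -> bool:
    """A task is 'software-shaped' for this agent if its implied reads and
    writes are covered by tools the agent actually has (toy keyword check)."""
    needs_read = "summarize" in task or "analyze" in task
    needs_write = "update" in task or "draft" in task
    return (not needs_read or "read_data" in ctx.tools) and \
           (not needs_write or "write_data" in ctx.tools)

ctx = AgentContext(tools={"read_data": lambda q: "...", "write_data": lambda r: "ok"})
print(software_shaped("analyze customer feedback and update tickets", ctx))  # True
```

The check itself is trivially simple; the value is the framing. A task you cannot express as reads and writes through the agent's actual toolset isn't software-shaped yet, no matter how well you prompt.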
It's coming for all of us. And I want to
be clear, your expertise doesn't
disappear here. It just becomes
foundational rather than differentiating
by itself. You need to have great domain
knowledge to direct AI effectively. It's
part of how seniors compete in a world
where everyone has access to the same AI
tools. But you have to be able to
leverage that through AI. And I think
most people still think of that in terms of their specific domain. We have this sort of single-lane focus. And what I'm calling out is that we have a giant bottleneck on skilling: all of our skills are starting to converge around this one gigantic meta-skill of driving AI agents. The second collapse I want to talk about is the temporal collapse I mentioned earlier. This is really important, and we keep missing it: career leverage is compressing into the present moment because AI is accelerating time. Consider just the SWE-bench coding benchmark. AI systems could solve about 4% of its problems in 2023, and they had essentially solved the entire benchmark two years later. I don't know exactly where it will stand when you see this video, but it's around 90 to 95%. SWE-bench is saturated, and the saturation itself is not even the most important thing. The doubling time on that number is shrinking. AI
progress is accelerating. Traditional
career planning assumed you had time. Learn a skill, apply it for years, build expertise, get promoted, and eventually take that expertise and figure out how to leverage it in leadership. That timeline gave you a
sense that you could plot out your
growth over time and get some breathing
room. You could be strategic about when
to invest your learning energy. And that
assumption, if you take it at face value
like you could in the 2000s and 2010s,
that's now catastrophically wrong
because you have to assume a career path
where AI is gaining speed ever more
rapidly. And this creates a really tough
dynamic for career planning. I don't
want to sugarcoat that. The skills that
will matter in 2027 are being defined
now by people engaging now. If you wait
until the tech settles down, you're
going to find that the early adopters
have already built the workflows,
established the norms, and captured the
opportunities that you were waiting for.
They'll have two years of compound
learning while you're still figuring out
the basics. So I cannot promise you that things will settle down. This is a chaotic period. There is no mature state to wait for. There is only a continuously steepening curve, and it's going to reward the folks who climb on early and go faster. I compare AI to
riding a bike. If you are going slow on
a bike, it's really hard to balance and
you feel like you're never going to
catch up. But experientially, when you
go faster on a bike, the steadiness increases and balancing gets easier. And kids have so much trouble
learning this. They think if they go
slower, they'll be safer. But they're
actually safer going faster. And that is
what you have to learn with AI. You're
actually safer leaning in and going
faster than you are going slower, because going slower forces you to constantly think about braking and stopping and figuring out how you can adjust and work this into your existing workflow. And I see so many of us acting
like kids on a bike for the first time.
We're just trying to figure out how to
go very slowly. I've got to say, AI is going too fast for that. You've got to get on the bike and go as quickly as you can, because that's the easiest way to balance. People ask how I keep up. It's because
I'm going pretty fast on the bike and it
feels really steady. The old career
model assumed your expertise appreciated
over time. You would learn something
valuable. It would stay valuable and, gradually, it would compound. The new
model is really different. Your
expertise atrophies. It depreciates
unless you continuously update it. And
the depreciation rate is accelerating
because AI progress is going faster. I'm
not trying to argue for panic here. It's
an argument for continuous engagement.
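To see why an accelerating depreciation rate matters, here's a toy half-life calculation. The numbers are purely illustrative assumptions, not measurements of any real skill:

```python
# Toy model: if a skill's value halves every `half_life_months`, the fraction
# of its value remaining after `t_months` is 0.5 ** (t / half_life).
# All numbers below are illustrative, not measured data.
def remaining_value(t_months: float, half_life_months: float) -> float:
    return 0.5 ** (t_months / half_life_months)

# With a leisurely 24-month half-life, about 75% of a skill's value survives 10 months:
print(round(remaining_value(10, 24), 2))  # 0.75
# If acceleration shrinks the half-life to 6 months, only about 31% survives the same 10:
print(round(remaining_value(10, 6), 2))   # 0.31
</tool_use_error>```

The same calendar time costs you more as the half-life shrinks, which is exactly the point: your update cadence has to keep pace with the depreciation rate, and that rate is rising.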
The people who are thriving now are not
the ones who just go to an AI class and
master it once and then coast. They're
the ones who develop the meta skill of
continuously learning and adapting as
the tech evolves. The half-life of any given piece of specific AI knowledge is short, and it's getting shorter. The half-life of the learning habit around AI, by contrast, is getting longer and more durable. If
you doubt the magnitude of what's
happening, follow the money. This is the
biggest capex project in human history.
Big tech's combined AI capital expenditure was close to half a trillion dollars in 2025, and it's going to be well over half a trillion in 2026. In total, the big five (Amazon, Microsoft, Google, Meta, and Oracle) plan to add at least $2 trillion in AI-related assets over the next four years. This is a
tremendous amount of operational
investment in what these companies
believe is the future. The money is
committed. AI is happening and is going
to define the next era of computing so thoroughly that we have to understand there is no other way forward. The only way out is through, and the way through is AI. And that's what I mean when I say
it's collapsing timelines and
compressing career trajectories. There
is no other way through the career path
that does not include AI. And things get
uncomfortable at that point. Many people
are resisting and you don't have time to
resist. If you tried ChatGPT in 2022, saw it hallucinate, and just left, you don't have time for that anymore. You don't have time to say, "I'll wait until it matures." That's like sitting by the bike and saying, "I'll wait till it gets steadier." It's not going to get steadier. You don't have time to say, "My job is immune." I've got news for you: it's not immune. Anytime
you are touching a computer, you are
touching AI. That's how pervasive it's
going to be in the next year. And to be
clear, for people who are saying, I want
to exit the ride. I want to stop a tech
career. I have respect for that. And I
know folks who have said, I had my
career. I think I'm done. I want to open
a bookshop. I want to go and start doing
carpentry. That's fine. That's great.
That is something that you can choose to
do intentionally. I think that's a much
more productive choice than trying to
stay in an industry that is converging on AI while resisting that convergence. That's just not going to go well, and it's going to make everybody, including you, kind of miserable. And so if you really think AI
is not for you, I think the best thing
you can do is pick that alternate career
path that takes you away from the screen
because if you're going to stay in
fields touched by AI, which is
increasingly everything to do with a
computer, you're going to have to
engage. I want to close by giving you
some encouragement. It is easy to look
at this and to be doom and gloom. It is
easy to say, I did not make the choice
for AI. I would argue none of us did.
The industry as a whole made that choice
and we are all living through this
moment together. We did not choose to
compress timelines. We did not choose to
compress career paths. It's happening
for us. And I have seen over and over
again that when people recognize that
and when they choose to say even though
I didn't get to pick this, I am going to
choose to engage with AI with curiosity.
I'm going to choose to engage AI and
learn to ride the bike. I'm going to
lean in as far as I can, even if I'm not quite sure. That is going to get you so much farther. It is going to give you an accelerated rate of learning. You're going to be less overwhelmed. Curiosity genuinely opens up your brain.
And we need openness to this AI world if
we want to be able to shape it in a way
that works for us. And I've seen
dozens and dozens of examples where people have chosen that positive path. They've chosen to lean in across widely differing fields: healthcare, tech, finance, engineering, product. I've even seen folks lean in on small-town community building with AI. And without exception, that choice to positively lean in has taken them
farther. And so, if I can leave you with
anything in the middle of a timeline
that feels like it's increasingly wild
and unpredictable, it's just an
invitation to get on the bike with AI.
You've got to go faster. And you've got to believe that if you lean in and try, if you jump in and say, "All right, I'm going to try something new. I'm going to try Claude Code," or whatever is new to you, "I'm going to try Lovable. I'm going to try a different way of working with my chatbot." Great. And then
do the next thing. And then lean in a
little farther. And then lean a little
farther. And you're going to go faster
and faster and faster and faster. And
it's going to feel steadier over time
because you're going to pick up how AI
works across all of these systems in
your unconscious brain. The patterns
will start to solidify and you're
basically learning to work with this new
piece of technology in a way that feels
very stable over time. And so if I can encourage you with anything, it's that going faster is safer and less scary with AI than going slower. Cheers.