The Job Market Split Nobody's Talking About (It's Already Started). Here's What to Do About It.
Code is about to cost nothing, and
knowing what to build is about to cost
you everything. Last July, an AI coding agent deleted SaaStr's entire production database during an explicit code freeze and then fabricated thousands of fake records to cover its tracks. Jason Lemkin, the developer involved, had given the agent all-caps instructions, because I guess that's how we prompt now, not to make any changes.
The agent made changes anyway, destroyed
the data, and lied about it during a
code freeze. Fortune covered this. The Register covered it. It made headlines
because an agent ignored a clear spec.
But that failure is the wrong thing to fixate on. It's the story of the
disobedient machine, the Terminator. The
failure mode that actually matters is
quieter and it's vastly more expensive.
Agents that execute specifications
flawlessly, they build exactly what was
asked for and then what was asked for is
wrong. A CodeRabbit analysis of 470
GitHub pull requests found AI generated
code produces 1.7 times more logic
issues than human written code. Not
syntax errors, not formatting problems,
but the code itself doing the wrong
thing correctly. Google's DORA report
tracked a 9% climb in bug rates that
correlates to the 90% increase in AI
adoption alongside a 91% increase in
code review time. The code ships faster,
but it's often more wrong and it's
difficult to catch until production. AWS noticed this and launched Kiro, a developer environment whose core innovation isn't faster code generation. It's forcing developers to write a testable specification before any code gets generated: tell me what it's going to do by telling me how you'll test it. Amazon, a company that
profits when you ship faster, decided
the most valuable thing it could do is slow you down and make you define what you want, because error rates were that concerning when developers did not write tests.
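To make the spec-before-code idea concrete, here's a minimal sketch in plain Python, not Kiro's actual format, with made-up names, of what a testable specification looks like: the checks exist before any implementation does.

```python
# Hypothetical illustration of "spec as test": behavior is pinned down as
# executable checks before any code gets generated. apply_discount is a
# made-up function name; the implementation is deliberately absent.

def apply_discount(order_total: float, coupon: str) -> float:
    """Left for the human or the agent to implement against the tests below."""
    raise NotImplementedError

def test_save10_takes_ten_percent_off():
    # "SAVE10 takes 10% off" is now a checkable claim, not a vibe.
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_coupon_changes_nothing():
    assert apply_discount(100.0, "BOGUS") == 100.0

def test_total_never_goes_negative():
    assert apply_discount(5.0, "SAVE10") >= 0.0
```

Whether a human or an agent writes the implementation, done is now unambiguous: the tests pass or they don't.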
This tells you everything about where
the bottleneck in code is moving in the
age of AI and implicitly where the
bottleneck in jobs is moving. The
marginal cost of producing software is
collapsing to zero. 90% of Claude Code was written by Claude Code itself, and that number is going to be 100% very shortly. Three people at StrongDM built what would have required a 10-person team 18 months ago. Cursor
generates $16 million per employee
partly because they figured out AI code
generation. So the capability curve is
steepening. It's not leveling off. And
if you're reasoning from what AI could do in 2024 or even 2025, you're working from an expired map. But the cost of not
knowing what to build, of specifying
badly or vaguely or not at all, is
compounding much faster than production
cost is falling, which is a huge
statement because production cost is
falling really fast. Yet every framework people reach for to understand this moment tends to ask the wrong question: whether AI replaces workers and jobs. But when
the cost of production is collapsing
like this, the more useful question is
actually what is the new bottleneck
where jobs are going to be useful? What
is the new bottleneck where humans have
to get really clear? And guess what?
It's around intent. It's around those
specifications that engineers struggle
to write. All of knowledge work is
becoming an exercise in specifying
intent. And this video is about what
happens when those engineering mental
models get out into the rest of the job
space and we all have to think about
where our value is moving when it is not
doing the work. I think one place we
need to start when we understand jobs and AI is the thinking of François Chollet, the creator of Keras and one of the sharpest thinkers in machine learning.
He made an argument that's become the
default framework for understanding AI
and jobs. He pointed to translation, a
profession where AI can perform 100% of
the core task and has been able to do so
since 2023. Translators did not
disappear. Employment has held roughly
stable since. The work has shifted in
the last couple of years from doing it
yourself to supervising AI output. Now
payment rates have dropped and
freelancers have gotten cut first. There are new hiring freezes going on. So
there's impact on jobs. And yet, despite
all of that, the Bureau of Labor
Statistics still projects modest growth
for the translation job category.
Chollet's claim is that software is going
to follow the same pattern. More
programmers will be needed in 5 years,
not fewer. The jobs will transform
rather than vanish. I think the model is
useful for thinking, but I think it's
also stuck on the wrong question yet again. Will software engineers keep their jobs is not the most interesting question when the cost of production is collapsing toward zero, because so much of our work as engineers, frankly, so much of our work as knowledge workers, has been production. If you take the cost of production to zero, "will we keep our jobs" is the wrong way to think about it. The real question is: what is our job going to turn into? And so the interesting question if
we ask about job transformation not just
for engineers but for everybody is what
is becoming scarce and therefore what is
becoming valuable when doing the work
when building is no longer the hard
part. Chollet doesn't have a framework
for that because translation's
capability plateau gave the market the
time to find a stable answer in the
translation job category. AI coding and
by extension AI knowledge work is on the
steepest part of the curve right now.
I've said before that I think benchmarks
are fairly easy to game. I'm not the
only person to say that. But the
production evidence of coding capability
gain is so unambiguous that you don't need to pay attention to a benchmark to believe it. You can just look at Cursor's ARR and how fast it's growing. Look at Lovable. Look at the ability to now have agents review the code of agents. Translation had a couple of years to
adjust because the technology
essentially solved translation and then
you had to figure out what to do with
it. Software may not get the same runway
because the depth of what's changing is
much more profound and the pace is even
faster. We need a different model to
understand how jobs in software and
knowledge work are going to change.
First, when cost goes to zero, demand
goes to infinity. Every time in economic
history that the marginal cost of
production has collapsed in a given
domain, demand has exploded. Desktop
publishing did not eliminate graphic
designers. It created a universe of
design work that could not have existed
at any price point prior. Cameras in all
of our phones created a universe of
photography that did not exist when
cameras were very expensive and only a
few people had them. Mobile didn't
replace developers. It multiplied the
number of applications the world needed
by orders of magnitude. Software is
about to go through the same expansion
except bigger. Right now, most of the
world cannot afford custom software.
Regional hospitals run on spreadsheets.
Small manufacturers track inventory
by hand. School districts use tools
designed for organizations 10 times
their size or more, and some of them use
nothing at all. The total addressable
market for software is constrained not
by demand because demand is functionally
infinite. It's constrained by the cost
to produce. We are underbuilt on software even after 40 or 50 years of software engineering. When the cost of production collapses, the constraint that keeps us underbuilt lifts for good. Every business process
currently running in email,
spreadsheets, phone calls is up for
grabs now. Every workflow that was never
worth automating at a $200 an hour
engineering rate becomes worth
automating at two bucks in API calls.
The market for software is not going to
contract. It is going to explode. And
that is the best argument for why total
software employment likely grows and not
shrinks. Chollet is right about that. The demand for people who make software happen, however they make it happen, and it won't be traditional coding, has never been higher, and the cost collapse is going to push it higher still. But I do want to be honest: just because we can wave our hands and say Jevons paradox means employment grows
does not mean your specific job is safe.
And understanding the difference
requires understanding what happens when
the constraint shifts from production to
specification. So let's talk a little
bit more about the specification
bottleneck. The majority of software
projects that fail don't fail because of
bad engineering. They fail because
nobody specified the correct thing to
build. "Make it user friendly" is not a specification. "It's like Uber for dog walkers" is not a specification either. It's just a vibes pitch. The entire
discipline of software engineering,
agile, sprint planning, etc. evolved as
a way of forcing specification out of
vague human language. We need mechanisms
for converting vague human intent into
instructions precise enough that code
can be written against them. That
vagueness problem has always been there.
What's new is that the friction of
implementation is changing. When
building something took 6 months and at
best a half a million dollars,
organizations were forced to think
really carefully about what they wanted.
The cost of building acted like a filter
on the quality of the spec. If you take
away the cost of building, as AI is
doing, that filter is going to
disappear. The incentive to specify just
evaporated in all of your orgs and the
cost of specifying really badly is going
to keep compounding faster than ever
because now you can build the wrong
thing at unprecedented speed and scale.
A vibe-coded app can take an afternoon and 20 bucks in API calls, and if the spec is wrong, you did not save 6 months. You wasted an afternoon and perhaps launched something that will harm customers because the spec was never
right. This is the inversion we need to
pay more attention to because it tells
us a lot about where jobs are headed.
The scarce resource in software is not
the ability to write code. It's the
ability to define what the code should
do. And funnily enough, that is part of
why knowledge work is starting to
collapse into a blurry job family.
Because the ability to specify is
something we all need to do, not just
engineers. The person who can take a
vague business need and translate it
into a spec is the new center of gravity
in the organization. It doesn't matter
what their title is. It's not obviously
the person who writes the code that's
disappearing. It's not the person who
reviews the pull requests because
increasingly that's going to be an
agent. It's the person with enough
precision to direct machines and enough
judgment to know whether the result
actually solves the problem for
customers. Two classes of engineer are
emerging right now and engineering is
the tip of the iceberg. This is going to
be true of the rest of knowledge work as
well. Those two classes emerging right
now tell us where jobs are headed in
software. The first class of engineer
drives high-value tokens. These people
specify precisely. They architect
systems. They manage agent fleets, plural,
not singular. They evaluate output
against intention consistently. They
hold the entire product in their heads,
what it should do, who it serves, why
the trade-offs are correct, and why they
matter. And all they do is they use AI
to execute at a scale that was
previously impossible. One of the things
I want you to think about is that if we
are underbuilt on software, all of our
mechanisms are for underbuilt software
footprints. Imagine a world where your
engineers have to hold a 10x bigger
software footprint in their head because
AI has enabled that kind of scale. You
can say yes to everything the customer
wants with AI, but are your engineers,
are your product managers ready to hold
that level of abstraction in their
heads? Because if you can specify well
enough and orchestrate agents
effectively, the number of things you
can simultaneously build and maintain is
bounded only by your judgment and
attention, not by the hours in the day.
These people are going to command
extraordinary pricing power. The revenue
per employee data is off the charts. I
mentioned Cursor at $16 million. Well,
Midjourney is at $200 million with just
11 people. Lovable is past $100 million,
past $200 million soon. These are not
just outliers. This is the equilibrium driven by extremely high-value, AI-native workers. When one person with the right
skills and the right agent
infrastructure can produce what a 20
person team produced a couple of years
ago, that person captures most of the
value that used to be distributed across
the team. The second class of knowledge worker, the second class of engineer, operates at very low leverage, and that leverage is degrading: single-agent workflows, copilot-style autocomplete, AI-assisted but not AI-directed. These
engineers, these knowledge workers are
doing the same work they've always done
faster and with better tooling and they
are being commoditized. I just need to
be honest with you, the signals are
already there in the data. Entry-level postings are down something like two-thirds. New graduates are at 7% of hires, which is a historic low. 70% of hiring managers are
saying AI can do the job of interns. The
junior pipeline isn't narrowing at the
intake. It's collapsing because the low
leverage work that juniors used to do is
the work AI handles first and best. And
I want to be really clear here. I have
personally seen that this is not just a
junior problem. Mid-level and senior engineers who are sticking with the way
they've always worked are in this exact
same boat. Now, it's time to turn our
attention to one of the most popular
responses to the jobs debate, the
solopreneur thesis: the idea that everyone becomes effectively a solo capitalist, able to unlock tremendous value as a company of one. That
sounds really great, but I think it
captures something real about the first
class of developer, knowledge worker,
and not the second class. The ceiling
for what a single talented person can
build has absolutely risen through the
roof. But I think it's a thesis that
only 10 to 20% of the knowledge
workforce is positioned to take
advantage of today. You have to have
entrepreneurial instincts. You have to
have deep domain expertise. And you have
to have the stomach for risk and the
ability to ramp on AI tools quickly. If that's you, I love that. The world is your oyster. You have never had a better
chance to build cool stuff. But for the
other 80%, the future is going to look
like smaller teams with higher
expectations and compressed unit
economics. It's not a revolution in
autonomy for them. It's not a revolution
in autonomy for you if you are building
with the same production model. Instead,
it's just more pressure on what it takes
to stay employed. And so what is the distinction between the people in that top 10 to 20%, for whom the world is their oyster and who can drive high value through a company or run their own, and the people who can't? I think it comes
down to the economic output generated
per unit of human judgment. That's the
bifurcation we're looking at in a
sentence. And the gap between those two
classes is going to widen as agent
capability increases because agents
force multiply excellent human
specification and judgment. That is a
learnable skill. By the way, I don't
believe this is written in stone. I am
talking about a percentage divide I have
observed in the real world. I am not
talking about something I believe is
inevitable. You can learn specification and judgment. That is absolutely
something that's doable. I have
exercises for it. It's something you can
accomplish. But I don't want to kid you,
your teams need to do it if you're a
leader. Individuals need to do it. The
companies that are able to get, you
know, from 10 to 20% to 30 to 40% of
their workforce in this position are
going to be much, much more competitive
because of the nonlinear value of
learning human judgment as a skill,
learning specification as a skill. In
the age of AI, software engineers are
just the canary in the coal mine here.
The entire coal mine is much bigger: knowledge work like analysis, like consulting, like project management. It all runs on the same substrate that AI is already transforming in software. It happens on computers. It produces digital outputs. It follows patterns, however loosely, that can be described, formalized, and validated. Now, I know
the standard objection is validation.
Software has very clear built-in quality
signals. Code compiles or it doesn't.
Knowledge work is much vaguer. That
doesn't hold in 2026. Two forces are
converging to break that assumption.
First, a huge fraction of knowledge work
exists because large organizations need
to manage themselves. The reports, the
slide decks, the status updates. This is the connective tissue of coordination, the nervous system that large companies need to function. When organizations get leaner, and AI is making them leaner, we are seeing it across the board at big companies, that coordination work doesn't transform with AI. It just gets deleted.
Brooks's law ends up working in reverse. Brooks's law describes how complicated it is to coordinate large numbers of people, and how that complexity scales roughly with the square of team size. Well, if you cut down the number of people and make your team leaner, it turns out you get outsized gains in the ability to coordinate efficiently. The work was not
valuable in itself. It was valuable
because the organization was too big to
function without it. And the
organization was big because it needed a
lot of production labor to sustain the
value. If you simplify the organization
and make it leaner, all of that
coordination work can be deleted.
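If you want to see why leaner teams feel like a reverse Brooks's law, the arithmetic is enough. This is just the standard pairwise communication-path count, n(n-1)/2, with hypothetical team sizes:

```python
# Communication paths between n people: every pair is a potential meeting,
# status thread, or report that exists purely to keep people coordinated.
def communication_paths(n: int) -> int:
    return n * (n - 1) // 2

for team_size in (50, 20, 10, 3):
    print(f"{team_size} people -> {communication_paths(team_size)} paths")
# 50 people -> 1225 paths; 10 people -> 45. Cut headcount 5x and the
# coordination surface shrinks roughly 27x. That is the work that deletes.
```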
Second, the knowledge work that does
remain, the analysis, the strategy, the
judgment calls can be made more
verifiable. Consider what's already
happening in financial services. A
portfolio strategy used to live in a
deck and a set of quarterly
conversations. Now, it lives in a model
with defined inputs, testable
assumptions, and measurable outputs. The
strategy has effectively become a
specification. And once it's a spec, you
can validate it against data, run
scenarios against it, and measure
whether the execution of that financial
strategy matched your intent. Legal is following the same path. Contract review
is becoming pattern matching against
structured playbooks. Compliance is
becoming continuous automated audits
against codified rules. Marketing is
becoming experimental design with a
measurable conversion funnel. The
mechanism is straightforward. You take knowledge-work outputs that used to be evaluated by vibes, you structure them as a set of testable claims or measurable specs, and suddenly they are subject to the same quality signals that make software verifiable.
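Here's a minimal sketch of what that structuring can look like, with entirely hypothetical names and numbers: a portfolio strategy expressed as a spec with checkable constraints instead of a deck.

```python
from dataclasses import dataclass

@dataclass
class StrategySpec:
    """The strategy as a spec: declared intent plus testable assumptions."""
    max_single_position: float   # e.g. no holding above 10% of the portfolio
    min_cash_buffer: float       # e.g. keep at least 5% in cash

def validate(spec: StrategySpec, positions: dict[str, float], cash: float) -> list[str]:
    """Return violations; an empty list means execution matched intent."""
    total = sum(positions.values()) + cash
    violations = []
    for name, value in positions.items():
        if value / total > spec.max_single_position:
            violations.append(f"{name} exceeds the max position size")
    if cash / total < spec.min_cash_buffer:
        violations.append("cash buffer below the minimum")
    return violations

# Check the spec against actual holdings instead of a quarterly conversation.
spec = StrategySpec(max_single_position=0.10, min_cash_buffer=0.05)
print(validate(spec, {"ACME": 18_000, "GLOBEX": 7_000}, cash=5_000))
```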
Now I'm not
saying every piece of knowledge work can
be automated in exactly this way
tomorrow. But every year that frontier
is moving forward faster and faster and
the work that resists structuring tends
to be exactly the high judgment high
context work that only the most capable
people were doing anyway. So knowledge
work is converging on software not
because consultants will all learn to
code but because the underlying
cognitive task is actually the same
thing. You're translating vague human
intent into precise enough instructions
that human or machine systems can
execute them. The person specifying a
product feature and the person
specifying a business strategy are doing
the same work just at a different level
of abstraction. As the tools of
structuring, testing, and validating
knowledge work get better, the
distinction between those two is going
to collapse very very quickly. And with
it is going to collapse the insulation
that non-engineering knowledge workers
might assume they have. Guys, we're all
in the same boat with engineering now.
It's not a different boat. We're all
working with AI agents. Now, obviously,
if knowledge work is converging, like I
say, the practical question from a jobs perspective is: what do you do about it?
Obviously, the answer is not learn to
code. That's the wrong advice. It's been
the wrong advice for a while. Engineers
have spent 50 years developing
disciplines around a problem that
knowledge workers are only now running
into. And I think that we can learn from
the engineering discipline how to be
precise enough that a system can execute
intent. One of the things that is a massive unlock for the rest of knowledge workers is just learning some of the basics that good engineers know. First, hit the right level of abstraction and learn to spec your work the way engineers spec features. So a product
manager who writes "improve the onboarding flow" is operating at the wrong level of abstraction and producing the same category of failure as a developer who writes "just make it better" or "follow this prompt correctly."
Engineers learned painfully to write
good acceptance criteria, specific
testable conditions that define done.
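As an illustration, and these metrics and thresholds are invented, here is "improve the onboarding flow" rewritten as acceptance criteria you could actually check:

```python
# Hypothetical acceptance criteria for "improve the onboarding flow".
# Each criterion is a checkable condition, which is the whole point.
criteria = {
    "minutes_to_first_useful_action": lambda m: m <= 5,     # new user does something useful fast
    "day7_activation_rate":           lambda r: r >= 0.40,  # at least 40% still active after a week
    "tickets_per_100_signups":        lambda t: t <= 3,     # onboarding confusion shows up as tickets
}

measured = {
    "minutes_to_first_useful_action": 7.2,
    "day7_activation_rate": 0.44,
    "tickets_per_100_signups": 2.1,
}

for name, passes in criteria.items():
    print(name, "PASS" if passes(measured[name]) else "FAIL")
```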
Guess what? We all need to do that as we
start working with agents. This is
becoming one of the single most
transferable skills in business. And you
should start practicing writing specs
today. And by the way, if you're a
leader listening to this, that goes for
you, too. Your strategy needs to be spec-able. You should be able to say: these are the success criteria. I have
seen a lot of very terrible strategy
board decks in my time and I think this
would generally improve them. Second
major principle, learn to work with
compute. Don't just learn about compute.
Don't just learn about AI. A high-value AI worker, a high-value engineer who knows how to use tokens well, is not
valuable because they know about Python
code or JavaScript or Rust. They're
valuable because they understand what AI
can and cannot do, how to structure a
task so an agent can get it done, and
how to evaluate whether what the agent
did was correct. Knowledge workers are
going to need that same literacy. If
you're a financial analyst, you should
be running your models through AI and
learning where they fail, which
assumptions they miss, which edge cases
they ignore. You should be testing
contract review agents against your own
judgment. The goal here is not to get to a one-to-one replacement of your judgment. It's to understand the machine well enough to direct it, guide it, guardrail it, and catch it when it makes mistakes.
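One way to build that literacy, sketched here with a toy stand-in for whatever agent you actually use, is a tiny eval harness: run the agent over cases you have already judged yourself and look hard at the disagreements.

```python
# Hypothetical harness for testing a contract-review agent against your own judgment.
def review_with_agent(clause: str) -> str:
    """Stand-in for the real agent call; returns 'risky' or 'ok'."""
    return "risky" if "unlimited liability" in clause.lower() else "ok"

# Clauses you have already judged yourself; your labels are the ground truth.
my_judgments = [
    ("Vendor accepts unlimited liability for data breaches.", "risky"),
    ("Either party may terminate with 30 days written notice.", "ok"),
    ("Auto-renews for 5 years unless cancelled 180 days prior.", "risky"),
]

agreed = sum(review_with_agent(clause) == label for clause, label in my_judgments)
print(f"Agent agreed with me on {agreed}/{len(my_judgments)} clauses")
# The disagreements are the interesting part: they tell you where to guardrail
# the agent and where not to trust it yet.
```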
Third major principle,
make your outputs verifiable. I know
some people are running the other way
here. There are knowledge workers who
are deliberately sabotaging AI on their
teams because they don't feel like
they'll have jobs. That is a fault of
leadership. Leadership needs to give
people the support to lean in here
because you will not be able to automate
very quickly if you cannot figure out
how to make the dirty details of your
day-to-day work verifiable. Engineers
write tests. A function either returns
the right value or it doesn't. Knowledge
workers need to develop the equivalent
structured outputs with built-in
validation. You should have data sources
on your market analysis. A project plan
should include measurable milestones.
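A rough sketch of the difference, with hypothetical milestones: the plan as a structured artifact that can be checked for completeness, not a page of prose.

```python
from datetime import date

# A project plan as structured, checkable data. Names and dates are made up.
milestones = [
    {"name": "pilot with 3 customers", "due": date(2026, 3, 31), "target": 3},
    {"name": "monthly churn below 2%", "due": date(2026, 6, 30), "target": 0.02},
]

def validate_plan(plan: list[dict]) -> list[str]:
    """Every milestone needs a due date and a measurable target; vibes don't pass."""
    problems = []
    for m in plan:
        if not isinstance(m.get("due"), date):
            problems.append(f"{m.get('name', '?')}: no due date")
        if m.get("target") is None:
            problems.append(f"{m.get('name', '?')}: no measurable target")
    return problems

print(validate_plan(milestones))  # an empty list means the plan is at least checkable
```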
And funny enough, we've been trying to
say this for a while as knowledge
workers. All of the eye-rolling around OKRs was a bit of an early preview of making your outputs more verifiable.
Except now we really have to do it.
Next, learn to think in systems, not
documents. The deliverable of work used
to be a document of some sort for almost
everybody who is not an engineer. Now
you need to think in terms of the larger
system that your work is driving. A deck
requires a person who produces it every
quarter. A system requires a person to
specify it once and maintain it when
conditions change. Knowledge workers who think in terms of systems, who ask what the inputs are, what the rules are, what triggers action, and how you know it's working, are going to build things that compound. Even outside of engineering, knowledge workers who keep thinking in terms of documents are just going to use AI to generate stuff faster, but it's the same old stuff. We need to learn to teach systems thinking as a core skill for every knowledge worker.
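Here is the shift in miniature, with every name below invented for illustration: the quarterly revenue deck, re-expressed as a small system with inputs, a rule, a trigger, and a built-in health check.

```python
# Document thinking: someone hand-assembles a revenue deck every quarter.
# System thinking: specify inputs, rule, and trigger once; maintain when conditions change.

def load_revenue(region: str) -> float:
    """Stand-in for the real data source (warehouse query, API, export)."""
    return {"na": 1.2e6, "emea": 0.8e6, "apac": 0.5e6}[region]

ALERT_THRESHOLD = 0.9  # the rule: flag any region below 90% of its plan

def quarterly_check(plan: dict[str, float]) -> dict[str, str]:
    """The trigger runs this on a schedule; the output is the status, not a deck."""
    return {
        region: "ALERT" if load_revenue(region) < ALERT_THRESHOLD * planned else "on track"
        for region, planned in plan.items()
    }

print(quarterly_check({"na": 1.5e6, "emea": 0.8e6, "apac": 0.4e6}))
```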
Finally, audit your
role for coordination overhead. If your
honest assessment is that most of your
work exists because your organization is
complex enough to require it, big enough
to require it, right? You have to align
stakeholders. You have to translate
between departments. You have to produce
reports that synthesize information from
lots of teams. You're really exposed in
the age of AI. It's not because you're
bad at your job. It's because the
organizational complexity that justifies
your job is the same thing that AI makes
unnecessary. The question to ask is
this. If my company were half or a
quarter of its current size, would my
role exist? If the answer is no, the
value you provide is likely linked to
coordination and coordination is the
first casualty in leaner organizations.
OpenAI is already making its internal systems so transparent to knowledge workers that they don't have to go and query Slack messages at the company. They don't have to go and look for context from a meeting. They can
just hit the internal data system with
an agent-driven search and get exactly
what they need from 50 or 60 different
stakeholders and come back. That is
where organizations are starting to
move. You don't have to have a meeting
to get coordinated. You hit agentic
search and you see the data in front of
you. And so the move in that situation is not to panic, it's to migrate toward work that creates direct value. Look for ways you can ring the cash register. How can you build customer-facing, revenue-generating products? How can you start
to think about your work in terms of
driving the direction of the business or
getting the data that drives the
direction of the business? There's lots
of ways to do this that don't
necessarily mean you're a product
manager, right? Any complex business will have a lot of operational arms that still have to exist. Finance is still going to exist,
right? These functions aren't going
anywhere. Look for how you can be more
directly value producing in those areas.
None of this requires a computer science
degree. All of it requires adopting an
engineering mindset. And knowledge work,
to be honest, has resisted that for
decades. I have lost track of the number
of conversations I've had with
marketers, with customer service folks
over the years where they have said,
"Engineering is just too hard. I
couldn't be that precise." I got bad
news. We all need to be that precise
now. We all need to be testable. We all
need to be falsifiable. We all need to
understand our tools well enough to know
when they're wrong. So, if we step back
from the details of our day-to-day jobs,
what does the larger productivity and
jobs picture look like? Where is this
conflict around jobs and AI playing out
in the real world? We are in the trough
of a J curve right now. Census Bureau
research on manufacturing has found AI
deployment initially reduces
productivity by an average of 1.3 percentage points, with some firms dropping as much as 60 points before they start to recover. I bet you didn't expect me to say that. The METR study
that I shared about earlier this week
talked about the idea that there are
dark factories where AI agents not only
produce all the code but review all the
code. That same study found that
experienced developers were 19% slower
with AI tools despite believing they
were 24% faster. They just couldn't tell. This is the J curve of
technology adoption. Productivity dips
before it surges and we are in the dip.
What's interesting is because AI is
moving so fast and because it's
influencing the economy so widely, we
know this is a J curve and not a
permanent degradation because we can
literally see the companies that have
figured this out and gotten to massive
multiples. We don't have to hypothesize about Midjourney. We don't have to create a hypothetical about Cursor. The employees
at those organizations really are that
productive and you can see it in the
numbers. So what comes after for everybody else, for all the rest of us who don't work at Midjourney and Cursor? Given the pace of AI capability scaling, agents going from bug fixes to multi-hour sustained engineering in under a year, three-person teams shipping what 10- or 20-person teams shipped last year, my bet is that this entire thing compresses. That has been the story of this cycle. The software J curve, the adoption cost that you face before you get fluent,
even for the rest of the economy, even
for non-native AI companies, is going to
compress into something like 18 to 24
months. And early adopters are going to
be past the bottom already. The
companies that figure out spec-driven
development and agent orchestration
don't just get to be more efficient,
they get to operate at speed, at
productivity ratios that make
traditional organizations look dead in
the water. A 10 to 80x revenue per
employee gap opens up. One of the things
that matters here is that the J curve
really is shaped like a J. When you get
past the bottom, you start to accelerate
really, really quickly because agent
gains start to multiply cleanly across
your business. So, if we look at the
broad arc of history, what kind of
historical analog actually makes sense
for us here? The historical parallel
that fits best is not the story of the
invention of ATMs and how that affected
bank tellers. It's not the story of
calculators and how that affected
mathematicians. It's actually the story of telephone operators in the 1920s.
Those jobs did not disappear overnight.
But the people who held those jobs,
predominantly women and working-class at the time, found themselves a decade later in lower-paying occupations or out of the workforce entirely. Overall employment grew. New categories of work
emerged. But for the individuals in the
crosshairs, that was cold comfort. It
did not matter for those women. I think
we're in a similar moment, but I think
we have more tools to support each
other. And I think it's incumbent upon
leadership to do a better job than we
did in the 1920s. The economy is going
to create more software than ever, more
systems running on computers than ever.
It will probably be two or three orders
of magnitude what we have today.
Computers will remain more central to
human society than at any point in
history. That part of the story is
genuinely structurally optimistic
because compute creates leverage and
leverage creates abundance for us. But
more jobs in the economy and your
individual jobs are very different
things. The bifurcation is already there
in the data. AI-native companies are
exploding and picking up pieces of the
economic pie that traditional companies
are deserting. That is why you see the collapse in the SaaS stock market over the past couple of weeks. The gap between engineers who can drive high-value tokens and those who can't is literally visible in the $285 billion that Claude was able to wipe off of traditional SaaS stocks by releasing a 200-line prompt for legal work. I did a whole video on that.
point here is not an individual stock
drop. Whether or not it recovers, not my
problem right now. The point is to think
about knowledge workers and understand
that we need to have a much more
intentional conversation to ensure that
the 70 or 80% of knowledge workers who
are not pushing high-value tokens right
now get the skills to do so. How can we
think about the distribution of our
teams and look at each person on that
team as someone who can level up in
their agent fluency, someone who can
level up in their ability to write specs
and understand intent because that is
the new skill that's going to matter.
And there is no reason why we have to
leave people behind on that. It
absolutely is a skill issue. It's a
learnable skill. This transition is
going to happen whether we prepare for
it, whether we support our teams or not.
The only variable is which side of the bifurcation we're going to end up on, whether we as company leaders are going to lean in and support our teams in that transition, and whether we as individuals who are trying our best to get through this AI transition are able to learn the skill of giving clear intent, goals, and constraints in our work rather than doing the work itself. And that window is closing fast because AI agent capability gains keep accelerating. The
technology is not going to wait for
organizations and individuals to catch
up. We have to lean in and help each
other. If you are on a team and you
understand what I'm saying, it is on you
to help your buddies on the team to
understand this better. If you're a
leader, it is on you to think about how
you build systems that support everyone
in your org. And if you are stuck, it is
on you to figure out how you can take at
least a single step toward understanding
what it means to give the agent a job
and watch it do the work. It might be as
simple as trying Claude in Excel and
watching Claude create something. Maybe
that's the simplest way to start. I have
some other exercises as well that I put in the Substack that are at a range of
scales. But the larger point is that you
need to believe that there is hope at
the end of the tunnel and that the
company you're operating at, the job
that you're doing is something you can
pivot. If you think about it as tackling a larger problem and specifying where your agent needs to go to create value, that's on us to do. The agent capability is going to be there. It is on us to
specify enough of what we want that we
can create tremendous value with all of
this compute capability that we have. We
need to have better strategies. We need
to think bigger. It is actually rational
to think about boiling the ocean. We
were always told as companies, as
leaders, as product managers, don't boil
the ocean in your strategy. Well, if you
have the cost of production falling to
zero on software, why not think big? Why
not think courageously? Why not think
about producing more value? I think that
is a bold goal that can actually
catalyze a lot of transformational
change in the ways I'm talking about. It
can catalyze teams to work more leanly.
It can catalyze individuals to start to think about how they can stretch and grow, directing agents to do work for them so they can do more and lean more into the direct production of value.
That is where we need to go. That is why
the future of jobs is not about
production of code or production of
work. It is about good judgment to
specify where agents are going. Best of
luck.