Why the Smartest AI Bet Right Now Has Nothing to Do With AI (It's Not What You Think)
A week or so ago in Davos, Switzerland, Elon Musk told the World Economic Forum that we're approaching "abundance for all." Ubiquitous AI, ubiquitous robotics, everything's going to be great. An explosion in the global economy "truly beyond all precedent." He recommended we not save for retirement. Meanwhile, Dario Amodei predicted half of white collar jobs would disappear. But apparently that's good because the abundance is just going to be everywhere. Look, the abundance narrative was everywhere at Davos. It echoed through every panel, every fireside chat, every après-ski
conversation. But I want to suggest to
you that the abundance economy is
probably the wrong frame for most of us
to think about the next few years. And
instead, we should think about the
bottleneck economy. It's much more
practical, much more likely to get you employed, and much more likely for
you as a builder or a company leader to
find ways to succeed in the AI economy.
So, let's talk about bottlenecks.
Cognizant released telling research on AI claiming that it could unlock (could is the keyword) four and a half trillion dollars in US labor productivity. But there was a massive caveat in that research that no one paid attention to: the value will only materialize if "businesses can implement it effectively." That is the biggest asterisk I've ever seen. And most businesses, according to Cognizant's CEO, Ravi Kumar, have not yet done the hard work. I think
that's very true. There it is. That's
the gap between the abundance narrative
that sounds so good in Switzerland and
the reality. It's not about capability
of models. It's about implementation.
It's about value capture. The AI already
exists, but the trillion dollar value
that people like to talk about doesn't
just show up and flow automatically.
This is not the fountain of youth. This
is the story everyone is missing when
they debate AGI narratives. The
interesting question is really not
whether AI creates abundance. It does.
The interesting question is where are
the bottlenecks? Because that's where
value concentrates. Of course, AI is
creating an unprecedented abundance of
intelligence. But that just means that
the bottleneck flows downstream and
that's where the leverage lives and
that's where fortunes will be made or
lost in the next decade. Abundance is
super handwavy. I'm not interested in
handwavy. Bottlenecks are specific and
specificity is where strategy happens.
It's where careers happen and it's where
companies happen. So let's talk about
bottlenecks first. A bottleneck is the
binding constraint in a system. It's not just any constraint; it is the high-leverage binding constraint, the one that determines actual throughput in the
system. If you improve anything else,
you've accomplished nothing because you
didn't improve the bottleneck. But if
you improve the bottleneck just a little
bit, everything will move. Look, this is
basic systems thinking and it's also
something that most people ignore. They
optimize for whatever is visible,
whatever is comfortable, whatever
they're already good at. They work
harder instead of differently. They add
capacity where there's already lots of
capacity in the system and they ignore
the choke point because that's been
really painful to view and consider and
address. So many organizations do this.
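To make that concrete, here's a minimal Python sketch of the principle, with made-up stage names and capacity numbers that are purely illustrative, not from the video: throughput is the minimum across stages, so improving a non-bottleneck stage changes nothing, while a small improvement to the bottleneck moves the whole system.

def throughput(stage_capacities):
    # A pipeline delivers only as fast as its slowest stage.
    return min(stage_capacities)

# Hypothetical pipeline, units per hour; "review" is the binding constraint.
stages = {"generate": 500, "review": 40, "deploy": 200}
print(throughput(stages.values()))  # 40: the system ships 40 per hour

stages["generate"] = 5000           # 10x a non-bottleneck stage
print(throughput(stages.values()))  # still 40: nothing moved

stages["review"] = 60               # improve the bottleneck a little
print(throughput(stages.values()))  # 60: the whole system moves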
The history of the corporation
ironically illustrates this perfectly.
Every dominant organizational form
emerged to dissolve a specific
bottleneck. The Dutch East India
Company solved the capital lockup
problem of multi-year oceanic voyages.
Railroads cracked the energy constraint
on overland transport. Banks emerged to
allocate capital across time. Stock
exchanges aggregated capital at scales
that exceeded any private fortune.
Walmart solved the information
bottleneck in retail supply chains just
knowing what was selling where and
getting it there before stockouts. The
pattern is consistent. Whoever solves
the binding constraints captures
disproportionate value. Everybody else
participates in the abundance that's
created. The AI era absolutely has its
own bottlenecks and they're not the ones
most people are watching. First, the
binding constraint on AI capability is
increasingly atoms, not bits. Jensen Huang told Davos that AI needs more energy, more land, more power, and more skilled trade workers. Contemporary hyperscale data centers consume 100-plus megawatts. Training a single frontier model can require sustained exaflops of compute for weeks. The electricity demands are, in some cases, approaching those of small nations. This matters
because physical infrastructure operates
on very different timelines than
software. You can ship a new model in
months if you have the compute, but
building a data center to run it at
scale? That takes moving atoms around.
That takes time. Permitting alone can
take years in some cases. Expanding grid
capacity is even harder. Google recently
shared that they are bottlenecking on
the ability to establish connections to
the grid. This is not the only
bottleneck in the system, but it's a
great example of all of the specific
upstream bottlenecks that are
constraining the ability of hyperscalers
to build right now. The result is a
structural wedge between what's
technically possible and what is
deployable today. Capability sprints ahead while infrastructure really plods.
We're seeing this also with the memory
crisis where DRAM prices are just
skyrocketing because there's not enough
memory to go around. A model can exist
in potential, but the physical substrate
to run it at scale is what's required to
deliver value. Who captures value from
this gap? Well, the joke is it's always
Jensen and it's Nvidia. And that's not
entirely wrong, but it's also more than
that. It's whoever can navigate the physical constraints faster: who can pick the better site, who can get faster permitting, who can build more efficiently, who can source energy more intelligently. This is not a
temporary bottleneck. This is
structural. The companies that
understand this are securing power
purchase agreements, advanced memory
purchase agreements, locking up
construction capacity, and building
relationships with utilities years in
advance. The companies that don't are
assuming compute will magically appear.
The chip supply chain is even more
constrained. TSMC and a handful of other
fabs control the production of advanced
semiconductors. Packaging and testing
and high bandwidth memory all have their
own separate bottlenecks. As I've called
out, Nvidia's market position isn't
really about better chips. It's about
having chips at all when everyone else
is capacity constrained. The hardware
advantage compounds because access to
compute determines who gets to train the
next generation of models, who gets a
seat at the table. And yes, the physical
layer creates an opportunity for an
entirely different kind of company. One
we normally don't think of as an AI
business. Someone has to build these
facilities. Someone has to provision the
power. Someone has to manufacture the
cooling systems, install the racks,
connect the fiber. This is what Jensen is calling high-quality jobs, because he can't get enough of them, and neither can any of the other hyperscalers. He says skilled trade jobs in these kinds of spaces have salaries that have nearly
doubled. And I'm not at all surprised.
The abundance of AI at the application
layer depends on scarcity being resolved
at the physical layer. And that
resolution means people. The geographic
distribution matters too. Data centers
need stable grids, friendly permitting environments, and access to cooling, whether through climate or water. This means
certain regions effectively become
strategic assets. It means local
politics become unexpectedly relevant to
the trajectory of AI. The infrastructure
to build AI, the AI that we have in our
pocket and assume is global, that
infrastructure lives locally. But that
is not the only bottleneck. In fact,
that might be the most well-known
bottleneck, but there are a bunch of
others that people talk about less
often. The trust deficit is the next
one. When Demis Hassabis spoke at Davos, his biggest concern wasn't technical. It was "the loss of meaning and purpose in a world where productivity is no longer the priority." He also worried that we lack "institutional reflection" about AI. Look, what he's
really saying is these are coordination
problems and coordination runs on trust
and he's worried about trust. Consider
what happens when anyone can use sophisticated AI to generate whatever content they want at the touch of a button. Text, images, video, code, all
become cheap to produce. The cost of
generation collapses, but the cost of
trust doesn't get cheaper. If anything,
trust gets harder because the difference
between synthetic and authentic is
becoming indistinguishable. Every piece
of content could be fabricated. Every credential could be gamed. Every
piece of information might be generated
to manipulate you. When you can't
distinguish the signal from the noise,
you're overwhelmed as a human. And you
look for someone to trust. Trust is the
infrastructure of coordination. When I
trust that a counterparty will honor a
commitment, I don't need to write every
contingency into legal language. When I
trust that a credential signals
competence, I don't need to administer
all of my own tests. When I trust that
published information is accurate, I
don't need to verify it independently.
Trust reduces transaction costs. It's the trust in the system that
makes coordination possible. Now,
imagine that trust degrading. You don't
have to imagine it. You see it and you
feel it. Transaction costs tend to rise
across the entire economy. Deals take
longer. Verification layers multiply.
Everything gets harder. Who captures
value here? Whoever can mediate trust.
The institutions that can verify, that
can authenticate, that can certify, the
platforms that develop reputations for
signal in a world of noise, the networks
where track records are visible and
accountability actually exists. We're
kind of looking for trust banks for the 21st century: essential infrastructure that everyone can rely on, controlling a
scarce resource that must be accumulated
over time and that can be allocated
across different uses. The parallels
between trust and capital are definitely
thought-provoking. Here's another
bottleneck that people aren't talking
about enough. The integration gap.
Cognizant's research points to something specific. The value is conditional on implementation: $4.5 trillion sitting there, chained up because organizations can't
figure out how to use AI effectively.
This is the integration bottleneck. AI
has general capability but no specific context. And we know, after a couple of years of implementation at the corporate level, that a general capability works well as a tool for individuals, but without specific work on the part of the company, it just dies at the team level. It does not go anywhere. And so, yes, a
general AI can write code, but it
doesn't know your code base. A general
AI can draft strategy, but it doesn't
know your competitive dynamics. It can
talk about board politics
generally, but it doesn't know your
board. It can talk about the product
strategy of someone in your category,
but it doesn't know you. The gap between
"AI can do this" and "AI does this usefully, right here" is $4.5 trillion.
Bridging it requires context that's
often tacit, right? It's embedded in practices and relationships, not just in documents. The person who's been at the company for 20 years knows the things that aren't written down anywhere. The
AI doesn't. This knowledge is not
promptable. The interface between
general AI capability and specific
organizational reality is where value
gets lost or captured. And some
companies are going to figure out how to
solve this integration problem and
unlock massive productivity gains by
tying AI into their workflows. Others
are going to deploy AI tools that sit off to the side unused, or worse, get actively misused. And they're going to generate
outputs that look deceptively productive
and that do not connect to anything that
matters. The difference isn't the AI or
the tool. The AI is increasingly a
commodity, guys. The difference is the
organizational capacity to integrate.
Who builds that capacity? That's not
obvious. Maybe it's a new category of
consultancy that specializes in AI-organization fit. Maybe it's internal roles that
don't exist yet. People whose job it is
to translate between what the business
needs and what AI can do. Maybe it's
software that encodes organizational
context in ways that make AI outputs
more relevant. Whatever the form, this
is a bottleneck. And bottlenecks are
where value concentrates. The
coordination problem is broader than
trust. AI doesn't magically dissolve the challenge of getting humans to agree. It doesn't make them magically align. It might make coordination
even harder. When anyone can generate
sophisticated arguments for any
position, groups have even more trouble
reaching consensus or alignment. Larry's
warning at Davos was really pointed. If
AI does to white collar workers what
globalization did to blue collar
workers, we need to confront that
reality directly. It's very comforting
for him to say that, isn't it? Sitting
in his little chalet in Davos. But he's
describing a coordination problem. How
do we actually share the gains from AI
in ways that don't trigger social
disruption? That's a question of human
alignment. And really, no one at Davos
has those answers. And everyone wanted to talk about them over cocktails. The IMF managing director certainly had a quotable line, saying that a tsunami was hitting the labor market, that 40% of jobs globally would be affected, and that "we don't know how to make it inclusive." Well, honestly, the people
who are the closest to knowing how to
put AI and jobs together aren't the ones
going to Davos. They're the ones who are
actually building workflows where AI and
people work together. They don't get
those invitations. You know what's
really interesting? I've spent the first
part of this video talking about
bottlenecks and most of them seem like
they apply to companies, but everything
above applies to individuals too. The
bottleneck principle is a fractal
principle. You are also a system with
binding constraints. Your output and
your impact and your leverage are
functions of which bottleneck you're
solving and whether you're optimizing
the right constraint. The old individual
bottlenecks are dissolving. Access to
information is abundant. Access to tools
is cheap. Skill acquisition is rapidly
getting easier. It used to take five years or more to become a proficient programmer. AI compresses or eliminates those runways. Dario Amodei noted at Davos that
his own engineers no longer program from
scratch. They supervise and edit the
work of models. And this is something
that's come out of OpenAI as well. And
we're hearing it over and over again
from other extremely experienced
engineers who are now saying we don't
really touch code. This is disorienting
if your identity was built around skills
that are commoditizing like programming.
But disorientation is not a strategy, not
for your career or mine. The question is
where are the new individual
bottlenecks? And honestly, I was not happy with what the Davos guys said. Again, I feel like they asked lots of questions and didn't have answers. Hassabis's advice to young people was to
become incredibly competent with AI
tools. That's a throwaway line. That's
not a great line. I want to think more
deliberately. Tool fluency is table
stakes. The constraint shifts to what
you do with those tools. Taste and
judgment become really critical. When
generation is really cheap because
people have all those tools, the
curation of what's good is expensive.
Knowing what to make, when to stop,
what's good enough versus what's
actually good. These are capacities that
actually still take a lot of time to
learn. The AI can generate a hundred
options, but knowing which option is
right is still human terrain. The
challenge is that taste develops slowly
while AI devalues output. So if you
spend 3 years developing good taste in
design and AI makes okay design a
commodity before you can capitalize on
your extra 10% or 20% of taste, you end
up losing a race you didn't know you
were running. I feel and hear that
frustration from a lot of early career
folks right now. The window to good
taste is getting narrower and the people
who are surviving and thriving and
developing good taste are narrowing
their focus earlier. It used to be that
when you developed good taste, you were
really broad to start with and then you
discovered how to narrow over time as
you learned. These days the folks I see
who have extraordinary taste are diving
in super deeply on something. So they
are rapidly pushing to the frontier, past the edge where AI's "good enough" is acceptable. Because we all know, yes, AI has in many ways solved front-end design, but if you want extraordinary design, people are still turning to humans who have extraordinary taste. That kind of
dynamic is going to persist in a lot of
different corners of the economy and
it's going to supply a lot of different
jobs. Here's another one. Problem
finding eclipses problem solving. AI
solves well-specified problems with increasing fluency. But specifying the right problem and framing it well remains very, very human. What should we build? What is wrong here? Have I had time to think about it? What question, if answered, would unlock everything else?
Our education system has largely
optimized for problem solving and the
market is increasingly rewarding problem
finding. If you are good at looking for
problems and good at talking about them
and framing them, you're in good shape.
The analyst who knows which questions to
ask and which problems matter vastly
outpaces the analyst who can answer any
question. The skill increasingly is not
execution, it's direction setting. It's
a management skill. Context and
institutional knowledge are becoming
moats for individuals in the way that data is becoming a moat for companies. AI is general; usefulness is specific. The person who understands why the organization really operates the way it does, and what the stakeholder actually wants beneath what they're saying, holds tacit knowledge that is very hard to replicate and increasingly valuable. And
this creates a really strange dynamic.
Juniors who would historically have
accumulated context through years of
apprenticeship now face a very
compressed path. Why spend 5 years
learning how the organization works when
AI can help you skip the grunt work? But
the grunt work was also where that
context got absorbed and the implicit
knowledge that made senior people really
valuable often came from thousands of
little exposures that never happen if AI
handles all the tasks. So, how do you
develop institutional knowledge without
that slow accumulation? Honestly, I
think it still takes slow accumulation
and people are trying to speedrun it and
they're going to learn that the hard
way. No one has a better answer yet.
There is no fast forward to 20 years of
deep experience in a domain. I'll give
you one more. Execution and follow-through are emerging as a binding
constraint for many. I know I said that
solving problems was going out of style
and finding problems was in style. Well,
there's an element of follow-through
that we still see as a bottleneck. AI
can generate a lot of plans. It can
generate a workout plan for me tomorrow,
but I have to show up to the gym.
Turning any of these plans that AI can
generate into reality requires a human
to decide and commit and to persist and
to navigate politics, to hold people
accountable, to keep going when things
get hard. Execution has always been
underrated because it's much less
legible than ideation. People love to
ask, "What about Steve's brilliant mind
when he created the iPhone?" They don't
ask, "What about Steve's relentless
execution to get it done?" And call
Gorilla Glass and make them produce the
glass he knew was right for the iPhone.
A brilliant strategy document is
visible. It might get you a promotion in
some companies, but the grinding work of
implementation. Steve calling Google and
saying, "The yellow in the O on Google
looks terrible on the iPhone. and my
engineers will be at your door to fix
it. That's grinding work of
implementation. That's not a strategy
document. Tolerance for ambiguity
separates those who thrive from those
who freeze. The environment is shifting
really fast, guys. Best practices are
shifting all the time. People are
desperate for stable ground in that
world. And the constraint that you face
is actually your ability to metabolize
change. How much uncertainty can you
hold on to in a rapidly changing world
without freezing while continuing to
execute and follow through on a
longer-term perspective? People who are
able to master that balancing act are in
huge demand. All of this adds up to a
leverage shift. The old model of talent
development was super linear. You
acquired skills, you traded your time
for money, you let it accumulate slowly.
The new model has a really different
shape. Some individuals are discovering
outsized leverage through AI augmentation, not because they work
harder but because they've identified
their bottleneck and directly dissolved
it. Maybe a developer was bottlenecked
on boilerplate. Maybe a strategist was
limited by analysis bandwidth. Whatever
it is, they found the constraint and
removed it and unlocked capacity that
was latent. Most of us are not finding
that leverage for ourselves because we
are optimizing against the old pre-AI
constraints. We're still trying to prove
we have the skills when the skills are
commoditizing. The diagnostic question
for each of us is deeply personal. What
is constraining my output right now?
It's not what I wish was constraining
me, right? It's not what was
constraining me 3 years ago. It's not
the constraint I built my identity
around solving so I can be proud of it.
It's the actual binding constraint
today. Now, for some of us, it is tool
fluency because we haven't genuinely
integrated AI into a workflow. And for
others, it's taste. Maybe it's problem
finding for you. The bottleneck is going
to be specific to you. Solving it
requires first honesty about what's
actually holding you back. I keep going
back to Davos and the abundance
narrative that dominated there. It feels
clangy. It feels out of touch. The
conditional is doing a lot of work in
these predictions. Yes, the capability
might exist. I increasingly don't doubt
that. But the value capture depends on
solving bottlenecks that are
organizational, institutional, physical,
and social, not technical. And that is
hard work. That is hard human work. I
believe the businesses and people that
are going to thrive in the next 10 years
are going to be the ones that correctly
identify where scarcity has run off to, where it has migrated: into physical
infrastructure, into trust, into
integration, into coordination, and
build systems and build careers out of
addressing those constraints.
Intelligence is getting cheaper. The
promise of abundance is absolutely real.
AI is going to keep getting smarter.
Cognitive output is going to keep getting easier to produce every single month. Abundant, yes. But abundance doesn't create value
directly. Abundance shifts where
scarcity lives. And we haven't been
honest about that. The question isn't
whether to believe in the coming
abundance as an article of faith. No,
no, no, no, no, no. The question is
where are the bottlenecks and are you
positioning yourself and your business
to solve them? That's really the only
question that matters and it doesn't get
enough airtime.