Inside the Closed-Door Deals Running OpenAI and Anthropic
We have been watching the AI race for
three full years now. At first, it was
all about the opening moves, the models,
the breakthroughs, the chips, GBT versus
Claude, Gemini versus Deepseek. But this
part of the game is over and we're past
the opening and now we're entering the
middle game. We're no longer competing
on the models. We're competing on the
infrastructure, on who controls the
compute, who controls the clouds, and
how the contracts that are being signed
decide who gets to play. This year, we
saw two AI mega-deals: Microsoft and OpenAI, and Anthropic's web of partnerships, and these are so much more
than just headlines. We're watching a
formidable power play and a much larger
match. Today is about uncovering what is
really happening on the chessboard and
who controls whom. Let's dive in.
On October 28th, OpenAI announced an extensive restructuring in which they officially granted Microsoft a roughly 27% equity stake. This move underscores how
the global AI race is evolving and it's
evolving from a competition of models to
a contest of ecosystems, partnerships
and deals. At the core of this partnership is the fact that OpenAI's
original structure became increasingly
unsustainable. Their unit economics do
not work. They cannot survive without
extensive capital injection from the
outside. They're losing $2 on every
dollar of revenue and they're burning
through investments faster than they can
monetize. I spoke about this at length
in my earlier videos on the unit
economics of ChatGPT. If you want to
learn more, check them out. In 2019,
OpenAI created OpenAI LP, a limited partnership structured as a capped-profit subsidiary. You may ask: what is OpenAI LP? It is a subsidiary created by OpenAI to balance fundraising with the nonprofit mission using a capped-profit structure. Through it, they could access
significant capital that they need to
develop advanced AI models without
abandoning the core mission to benefit
humanity. This was the first step OpenAI
took to balance commercial viability
with ethics. Fast forward to now and
Microsoft holds approximately 27% of OpenAI, and their stake is valued at $135 billion, which makes it
one of the largest technology
partnerships in history. What this chunk means is that Microsoft now has
significant influence over OpenAI's
operations and development. On top of
this, Microsoft will be taking 20% of
OpenAI's direct revenue. And as a
refresher, the largest chunk of OpenAI's
revenue comes from ChatGPT, not API sales. The reason I'm emphasizing API sales is that they are the heart of B2B adoption, and OpenAI is not doing very well there, meaning they're predominantly consumer. So yeah,
back to Microsoft, they're going to be
taking 20% of OpenAI's revenue through
2030. And this means that they're going
to have a huge influence over OpenAI's
go to market and pricing decisions. But
this goes both ways. Microsoft will also
pay OpenAI around 20% of revenue from
Azure OpenAI services and Bing AI
features.
If you're building a company, you
already know scaling means hiring. With
every hire, you're looking at payroll
setup, benefits enrollment, IT
provisioning, access to 12 different
tools, corporate card, compliance
paperwork. Onboarding one new hire takes your ops teams about 6 hours, but you're hiring
15 this quarter. That's 90 hours of
manual work. Most companies are running
on 100 plus disconnected tools. Now,
each one solves a problem, but together
they create silos. Rippling, one of the
fastest growing HR, IT, and spend
platforms out there, calls this SAD: software as a disservice. To combat
this, Rippling built a unified platform
across four systems that run your
company: one source of truth for all
employee data that allows you to manage
HR, payroll, spend, and IT in one
unified system. You can hire contractors
in more than 185 countries and full-time
employees in more than 80 countries and
pay them in different currencies, which
really matters because your next hire
probably isn't local. New hire gets
their laptop, all accounts, correct
permissions based on their role all
automatically. Someone leaves,
everything gets deprovisioned instantly.
Real-time visibility into company
spending. No more waiting for expense
reports. No more budget surprises.
Rippling is trusted by more than 20,000
companies across a range of industries,
including Cursor, Barry, Chaz.com, and
Liquid Death. Every fragmented system in
your stack is holding back execution.
Rippling provides four core systems to
manage your entire staff end to end. HR,
payroll, IT, and spend management. All
connected, all automated, helping you
increase savings and make better
decisions. And that's how you build a
company that can actually move as fast
as you scale. Check out stopsad.com to
see if SAD is affecting your
organization and what you can do about
it.
Now, why was this partnership
architected and who controls whom? The
real value of this partnership happens
through several layers. The first one
being the revenue share. $865 million
captured in 9 months of 2025. Now look
at this number. 865 million in 9 months.
The fact that Microsoft captured this
much in revenue in just 9 months shows
both the scale of financial potential
from just one foundational AI company
and the dependency, the inevitable
dependency that it creates. Because
first of all, it shows that OpenAI is a
massive revenue driver for Microsoft and
a monetization engine for Azure, because
Microsoft extracts a significant cut
from every dollar OpenAI makes
regardless of OpenAI's own
profitability. And secondly, this flow
of hundreds of millions of dollars per
quarter means that OpenAI's growth
directly contributes to Microsoft's
cloud business and its ability to
justify further infrastructure expansion
which becomes their moat in AI. And
thirdly, the size and cadence of these
payments prove that there is a very
acute pressure on OpenAI's unit
economics, because we already know that their inference costs exceed revenue, and now they're giving another
20% to Microsoft which means that it's
going to be extremely difficult for them
to achieve sustainable business
economics. This revenue share
illustrates how partnerships and deals
in the AI economy are a lot less about
profit sharing and much more about deep
control over the economics and
trajectories of foundational AI
companies. The second layer of this is
guaranteed sale of infrastructure
because OpenAI has committed to
purchasing $250 billion in Azure
services. What this means is that OpenAI
is not just a big client that Microsoft
managed to land. Microsoft's whole
business model is transforming because
they used to sell software. That's what
they've been known for, and now they're becoming a compute-rental infrastructure business. It's not good
or bad. It's just fascinating to observe
how the AI race is becoming the race of
computing power. When OpenAI makes up
50% of Azure's revenue growth, Microsoft
becomes dependent on a partner that is
losing $2 on every dollar of revenue and
the same partner is burning through
compute resources faster than it can
monetize. And if you add the whole AGI narrative on top of this, plus all the effort and money being spent on achieving this magic entity of AGI (if you're curious about AGI, watch the previous video), there is a lot of capital burning happening here.
creates a risk that cloud businesses do
not typically face. Microsoft cannot
easily replace OpenAI's demand. OpenAI
is the largest AI client on the planet
and the one with the biggest computing
needs. Signing them as a client is a
double-edged sword. Yes, you get the
largest AI client on the planet. You're
selling more than you ever have, but
you're also scaling your operations to
cater to this client. And if this client
leaves, you're screwed. To find the next
OpenAI-level customer, for example, Anthropic or DeepSeek, you would need years for them to scale to OpenAI's level of demand. And now, let's turn this around
because OpenAI is actively seeking
diversification and they're seeking it
through CoreWeave and Oracle. And
they're doing so to reduce their
dependency on Microsoft. And the reason
they're looking to diversify is because
this extreme financial strain makes it
risky for OpenAI to depend solely on
Microsoft. If Microsoft raises prices or
changes terms of this partnership,
OpenAI's whole survival could be at
risk. And this risk keeps growing. But
the same risk goes for Microsoft. If
OpenAI's usage drops or diversifies to
other clouds or the company completely
collapses under its own financial
structure, which yes, extremely
unlikely, but nevertheless not
impossible, Microsoft will lose both the
revenue stream and the growth narrative
that supports its $4 trillion valuation.
Yep, don't forget that Microsoft's
valuation went up because of this
partnership as well. You may ask, but
why did Microsoft do it given such a
massive risk? They did it because they
were catastrophically behind Google and
Amazon in the AI race. At the time of
the initial $1 billion investment in
2019, Microsoft held only 29% of the
cloud market versus AWS's 37%. And they
did not have any competitive AI research
capability to match Google's DeepMind or
Amazon's Alexa. The partnership with
OpenAI delivered an instant 10-year leap
because Microsoft got exclusive access
to frontier models that they couldn't
build internally and a chance to embed
AI into Microsoft 365 and Teams. By doing this, they created a distribution advantage that neither Google nor Amazon would be able to replicate. And to top
it off, they get the ability to position
Azure as the only cloud with OpenAI API
access. This was a gamble, and I applaud
everyone who has architected this gamble
because it paid off. Azure grew 40%
annually through 2025 compared to AWS's
19%. Microsoft stock gained over $2
trillion in market cap since this
partnership began. Microsoft now controls the enterprise AI distribution layer through Copilot. And as a B2B product manager who works at an enterprise, where the vast majority of enterprises are on Azure cloud and using the Microsoft stack, I can tell you it is hard to overestimate the reach that Microsoft has into enterprise tech. And finally, number four: the competitive moat, which is arguably the highest currency of all. They got exclusive API rights through Azure until AGI is verified, which means that they're locking enterprise customers into Microsoft's cloud. Microsoft's exclusive
API rights to OpenAI mean that any
enterprise that wants to use OpenAI's
models in production must route all
traffic through Azure. And this creates
a dependency loop almost an unbreakable
loop where purchasing access to GPT-5, for example, for your business automatically
makes you an Azure customer regardless
of whether you're on AWS or Google Cloud
or onrem. This is beyond billing. This
is technically an architectural lock-in, because when you purchase access to enterprise-grade GPT, you have no choice
but to accept Azure's private network,
Azure's compliance, Azure's data
residency rules, Azure's authentication,
and Azure's pricing because OpenAI's API
literally cannot be run anywhere else
until AGI is verified by an independent
expert panel. And on top of everything I
just said, once an enterprise builds
applications on Azure OpenAI service, it
automatically integrates with Azure
Cognitive Search, Azure Functions, Azure Key Vault, and Azure role-based access control, and the switching costs become enormous
because you'd need to rewrite your
entire application. You would need to
rebuild your data pipelines. Vendor switching is a problem even as is, with the current combo of Microsoft 365, Copilot, and Teams: when it needs to be done, companies hire full teams of people just to do the switch. I
mean I was recently doing a switch from
Apple to Google Workspace and I wanted
to shoot myself in the head. Microsoft
used OpenAI's virality to its advantage to force cloud migration, so much so
that there are a bunch of companies that
picked AWS as their cloud provider back
in 2020 and are now running significant
Azure workloads because their
engineering teams at some point said
that they wanted GPT, and the only compliant, enterprise-grade path to access GPT requires full Azure adoption. This
is the reason why Azure captured 62% of
the GenAI use cases despite only 29% of
cloud share. They're not just betting on
the infrastructure. They're betting on an absolutely legal distribution channel for the world's most in-demand AI models.
And that exclusivity doesn't end until
someone declares AGI. And now coming
back to AGI, Microsoft has every reason
and every incentive to delay the
declaration of AGI indefinitely.
So this was the OpenAI Microsoft
situation and now let's look at OpenAI's
rival Anthropic and see what they've
come up with. Anthropic went in a
completely different but nevertheless
brilliant direction. I personally have
huge respect for Anthropic as a company
and I prefer their models to any other
LLM. And it's interesting how they're
balancing the power game while trying to
maintain independence. They secured
investments and compute commitments from
all three hyperscalers, Google Cloud,
AWS, and Microsoft. Google became their major investor and cloud partner. Through Google, they got access
to 1 million custom tensor processing units, or TPUs, that will be coming online by 2026, and 1 gigawatt of power. To put this in perspective, 1 gigawatt of power
is comparable to the output of a large
nuclear power plant, which means an
enormous scale of AI compute capacity
dedicated to powering Anthropic's AI models. Now, for AWS:
they made an $8 billion total investment
in Anthropic, becoming their lead
financial backer. The motive of AWS was
well, first of all, securing Anthropic
as one of the largest clients for the
cloud business, but also to strengthen
AWS's AI services with Anthropic's Claude.
And on top of this, their deal included
a multi-billion dollar cash infusion and
commitments for AWS to supply 1 million
Trainium2 processors, Amazon's custom chip specifically
designed for AI training, which means
that in practice, this made AWS
Anthropic's principal cloud provider.
And lastly, Microsoft and Nvidia. In
November 2025, Microsoft and Nvidia
announced a new partnership with
Anthropic that involves several moves. Anthropic agreed to a $30 billion commitment to use Microsoft's Azure
cloud for future compute needs, meaning
that Anthropic will spend at least $30
billion on Azure Infra over multiple
years. This guarantees Azure a large and
long-term stream of revenue. And now Microsoft is not only OpenAI's cloud provider, they're also Anthropic's. And this in combination means
that Anthropic is the only foundational
model company available on all three of
the world's most used cloud services.
And as we continue rotating this board,
Anthropic has committed to $50 billion in future compute spending,
which means that they're locking
themselves into these guys for years to
come. There is a fundamental divergence between Anthropic's multicloud approach and the Microsoft-OpenAI partnership.
And the delta is in the revenue model,
who they sell to and how they make
money. OpenAI has a consumer-first business model. 73% of OpenAI's revenue comes from consumer subscriptions, ChatGPT Plus and ChatGPT Pro, with 27% (and some sources say 15%) from API and enterprise. They have 800 million weekly users. They love the weekly-user metric, but only 5% of them are paid. This consumer-oriented business model automatically implies dependency on one cloud provider, and not just any provider: it must be always on, because they're serving 800 million weekly users, and you cannot run a consumer-facing product at this scale on multiple clouds without catastrophic user-experience fragmentation. OpenAI has locked itself into complete Azure dependence.
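The conversion math behind those numbers is worth spelling out. Here is a quick sketch, where the 800 million weekly users and roughly 5% paid rate are the figures quoted above, and the $20/month price is an assumption based on ChatGPT Plus's public tier, ignoring the pricier Pro plan:

```python
weekly_users = 800_000_000
paid_rate = 0.05            # ~5% of weekly users are paid, per the figures above
price_per_month = 20        # assumed ChatGPT Plus tier, USD

paying_users = int(weekly_users * paid_rate)
annualized_subs = paying_users * price_per_month * 12

print(f"{paying_users:,} paying users")                           # 40,000,000 paying users
print(f"~${annualized_subs / 1e9:.1f}B/year from subscriptions")  # ~$9.6B/year
```

Under these assumptions, consumer subscriptions gross on the order of $9-10B a year, consistent with the claim that the bulk of OpenAI's revenue is consumer, not API.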
Anthropic, on the other hand, is enterprise-oriented. Anthropic's revenue model is the inverted version of OpenAI's. They get around 85% of their revenue from enterprise API calls and only around 15% from consumer subscriptions. Enterprise customers access Claude through Amazon Bedrock, Google's Vertex AI, Azure AI Foundry, or the direct API. Point is,
it doesn't matter which cloud runs the inference: Claude is everywhere. This is why the multicloud model is viable for Anthropic. Enterprise workloads are batch-oriented, latency-tolerant, and already distributed across clouds based on a company's IT infrastructure. It's a lot easier for a business or an enterprise to use Claude than ChatGPT. And as Ben
Thompson observed, Anthropic's lack of a
strong consumer play means that it is
much more tenable, if not downright
attractive, for them to have a supplier
type of relationship with AWS. Only
Anthropic can pull this off right now.
OpenAI cannot replicate this model
because their consumer success is their
blessing and their curse. Now, if you
paid attention to these numbers, a
question that may have formed in your
brain is: it seems like Anthropic is doing better than OpenAI. Anthropic's multicloud partnership, despite its complexity and operational overhead, does deliver superior unit economics, because their multicloud model optimizes costs. OpenAI, once again, is serving 800 million users weekly, and this volume requires a unified infrastructure that cannot tolerate delays from cross-cloud routing, which forces them to accept Azure's premium pricing even though their inference costs run at 200% of revenue. Anthropic carries no such constraint, which means that Anthropic's multicloud partnership is the key to profit margin.
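To make that contrast concrete, here is a minimal sketch in Python of OpenAI's per-dollar economics as described in this video. The 200% inference-cost ratio and the 20% Microsoft revenue share are the figures quoted here; the rest of the cost base (training, staff, marketing) is ignored, so this is an illustration, not audited financials:

```python
# Illustrative per-dollar economics using the rough figures quoted above:
# inference costs ~200% of revenue, 20% revenue share paid to Microsoft.
def net_per_revenue_dollar(revenue=1.00,
                           inference_cost_ratio=2.00,
                           partner_share=0.20):
    inference_cost = revenue * inference_cost_ratio   # cost of serving the requests
    revenue_share = revenue * partner_share           # cut paid to the cloud partner
    return revenue - inference_cost - revenue_share

print(f"Net per $1 of revenue: ${net_per_revenue_dollar():+.2f}")  # $-1.20
```

Note that dropping the 20% share only moves the result from about -$1.20 to -$1.00 per dollar of revenue: the revenue share deepens, but does not cause, the negative unit economics.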
The real war and the real race in AI is
control over the infrastructure. The big
boys are no longer competing for the
best models. Microsoft and AWS do not have frontier models of their own. And
while Google is investing quite a bit
into Gemini, they're still competing to
own the compute layer where all models
must run. Anthropic's committed compute
spending across all three hyperscalers
is the only strategy that keeps them
independent. And even that locks Anthropic into infrastructure
dependencies for the next decade. We're
getting into the middle game now where
all players are making their moves and
fighting for who owns the board. The
models are just the pieces. The board or
the infrastructure decides who is going
to win. As always, we hope this was
helpful. Till next time. Bye.