NVIDIA's $100 Billion Problem With OpenAI Just Got Worse
Imagine you had so much money that you
could give $2,500 to every American, not
just adults, everyone, including kids,
including infants, all 330 million of
them. $2,500, here you go. That means
$10K for a family of four. That is how
much money OpenAI is now valued at.
Despite the fact that $70 billion just
vanished from this deal, the stock still
goes up. If you missed it or if you're
new to the channel, here's a quick recap
of events that bring us up to current
day. September 2025, OpenAI and Nvidia
announced a $100 billion quote landmark
partnership deploying 10 gigawatts of
Nvidia systems. January 2026, we
reported on this channel that the deal
was on ice due to internal doubts at
Nvidia. And there's tons of juicy gossip
there, including Jensen saying some
things in private circles about the
stability and long-term prognosis of
OpenAI not looking so hot. Just this
month, Financial Times confirms a new
deal. Nvidia is still getting involved
with OpenAI, but not at that hundred
billion amount. They're actually only coming in with $30 billion.
And the structure of the deal is now
completely different. But remember,
Jensen Huang was very careful to say, before the deal fell through (and fall through it did), that the original deal was quote unquote nonbinding. That's per
CNBC. The implications of this are vast.
They ripple outwards. They affect
everything, even outside of AI. Let's
talk about where the shock wave goes.
First, the elephant in the room. This
was never a real deal. Nvidia was never going to give $100 billion to OpenAI. This was a press hype move, a classic PR job: pump up the stock price so we can sell it high, get more investors on board. That's exactly what this is. This was never going to be real. And if we sort through the paperwork and go back to September 2025,
They do declare that they're doing this
partnership, but it's a letter of
intent, which seems solid. I mean, you
write a letter of intent; you should
intend on doing something. But that's
our mistake for taking these companies
at their word. They're just manipulating
the market with a letter of intent. If
you follow the numbers, things get even
shadier. So, it's contingent on 10 gigawatts of power, but that power supply
is not built yet, nor are there plans to
build it. But when it was announced, it
pushed Nvidia past a $5 trillion market
cap. And to just recap, OpenAI's total
announced deals: $1.4 trillion total
in potential commitments, i.e. farts in
the wind. That's $250 billion from Microsoft Azure, $38 billion from Amazon AWS, Oracle kicking in $300 billion, CoreWeave $22 billion, AMD up to $300 billion. None of these are binding
purchase orders. They are letters of
intent, which we just saw in the case of
Nvidia don't mean anything to these
people. They've ratcheted up the PR game. Now, a letter of intent is just an ad buy. The whole AI economy is built on announced deals, on maybe, on in two more weeks, on we might, not on signed contracts. And so,
here's what's likely happening. I'm reading the tea leaves, but this is where my money is. We know for a fact that OpenAI was dissatisfied with the inference speed of Nvidia's chips. What inference is, if you've interacted with ChatGPT, is the machine generating a response. So inference is the class of computing you could bucket it under where an LLM generates a response and actually works with you, as opposed to training, which is the other major use of these chips.
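To make that inference-versus-training split concrete, here's a minimal sketch in Python. It assumes the Hugging Face transformers library and the small GPT-2 checkpoint purely as a stand-in; it illustrates the two workloads, it is not anything OpenAI actually runs.

```python
# A minimal sketch, assuming the Hugging Face transformers library and
# the small GPT-2 checkpoint as a stand-in for a production model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Inference: weights are frozen, we only run forward passes to generate
# a reply. This is the workload behind every chatbot response, and it is
# bound by latency and throughput per token.
prompt = tokenizer("Why are GPUs so expensive?", return_tensors="pt")
with torch.no_grad():
    reply = model.generate(**prompt, max_new_tokens=30)
print(tokenizer.decode(reply[0]))

# Training: we also run a backward pass and update the weights. This is
# the other major use of these chips, dominated by gradient computation.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
batch = tokenizer("GPUs are expensive.", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```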
As per Reuters, we know that OpenAI was dissatisfied with the performance of Nvidia GPUs specifically on inference tasks. They're serving ChatGPT to a record number of people. The scalability concerns are off the charts, and at a certain level, you want to optimize a couple of things.
And so my thoughts are, and this is coming from someone who's worked with software and hardware in the defense tech space, that it's always a collaboration between software and hardware. If you write very efficient software, it doesn't have to use too many system resources. In fact, one of the most tried and true interview rounds for software engineers is writing algorithms that maximize the efficiency of the software. In essence, that's why software engineers get paid so much, especially at big corporations like Meta or Netflix. If they can shave 50 milliseconds of compute time off of a function, that sounds like nothing; 50 milliseconds is almost below what we can perceive. But if you reduce 50 milliseconds across a bajillion different instances of that function call, you've saved some serious, serious money.
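To put a rough number on that, here's a back-of-the-envelope sketch; the billion-calls-a-day volume and the $2-per-compute-hour price are made-up round figures for illustration, not anyone's real numbers.

```python
# Back-of-the-envelope: what shaving 50 ms off one hot code path is worth.
# The request volume and per-hour compute price are hypothetical round
# numbers, not anyone's real figures.
saved_seconds_per_call = 0.050          # 50 milliseconds
calls_per_day = 1_000_000_000           # assume a billion calls a day
cost_per_compute_hour = 2.00            # assume $2 per machine-hour

saved_hours_per_day = saved_seconds_per_call * calls_per_day / 3600
saved_dollars_per_year = saved_hours_per_day * cost_per_compute_hour * 365

print(f"{saved_hours_per_day:,.0f} compute-hours saved per day")
print(f"${saved_dollars_per_year:,.0f} saved per year")
# ~13,889 compute-hours a day, roughly $10 million a year, from 50 ms.
```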
But at a certain point, you might get constrained by your hardware. This is a bit lofty, so let's do an example to keep it concrete. Think
about a race car driver, a Formula 1
driver, let's say. And there's two
components of that equation in our
simplified system. There's the car and
there's the driver. And so if we
consider the car to be the hardware and
the driver to be the software, if we
have really bad software, like if you're
someone who's never driven a Formula 1
car, I've never driven a Formula 1 car.
Let's say that's a safe assumption. And
you just get put in a Formula 1 car.
Even if it's the best car in the world,
it won the world championship last year.
Like you're probably going to crash it
if you can even get it off the line and
not stall it out, right? Because the
software is garbage even though the
hardware is great. Conversely, if you
take someone like Max Verstappen, Formula
1 world champion, and you put him in a
Kia Soul, let's say, he's not going to
set a good lap time because he's in a
Kia Soul. He's constrained by the
hardware despite the fact that he's
maximizing all the juice out of that
hardware that there is to be gotten. So,
OpenAI gets to the point where they're
already paying AI researchers and
software engineers
millions of dollars. Millions. The comp packages are insane. Insane. They're work-here-for-a-year, cash-out-after-we-go-public, never-have-to-work-again-in-your-life type of money. So, they have the best people on it. They have maxed out the software on the inference side. They cannot do inference much more efficiently. They're hardware limited. They need a better car. They have the best, the world champion racing drivers, but they don't have the cars to let them go as fast as they want to go. So, what
does OpenAI do? We're hardware
constrained. Nvidia is what we use for
inference. Is anybody else making
inference chips that are faster? And
confirmed by Reuters, OpenAI starts looking around and seeing there are some competitors out here. We've got AMD, Cerebras, and Groq. We covered that acquisition by Nvidia all the way back last fall. And they start thinking, hey, maybe we need to start onboarding some of these other providers for inference. The
most damning thing, and I'll translate the slimy PR speak for you, is what Altman tweets on X. He says, quote,
"We love working with Nvidia." The
translation of that into regular human
is, "Oh man, we really got into bed with
Nvidia. We're not sure about it, and now
it's really kneecapping us." Then you
have more slimy speak coming from Jensen
when he's in Taiwan and interviewed by
some reporters. And he calls those
dissatisfaction reports quote unquote
nonsense. He calls them nonsense while
slashing the deal by 70% though. So I
guess the actions speak a little louder
than words in that situation. But here's
where it gets really interesting and
maybe it's interesting to you, maybe
it's not. If you lived through 1999, maybe you're just like, "Ah, here it goes again." And it's the circular funding.
We've talked about this many times on
the channel, but a quick refresher.
Nvidia says, "Here's $30 billion to
OpenAI." The contingency being with
those $30 billion, you need to buy $30
billion worth of our chip. So that money
comes back to us. Nvidia's revenue goes
up, line go up, investor happy, investor
toss more money at Nvidia, and Nvidia
can invest more in AI companies. It's
the vehicle for this bubble, the nexus
of funding. There's shell games going on
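If the round trip is hard to picture, here's a toy sketch of the mechanic as described; the $30 billion mirrors the reported figure, and the two-line "balance sheet" is a cartoon, not real accounting.

```python
# Toy model of circular funding: the investor's cash comes straight back
# as its own revenue. Figures mirror the reported $30B; this is a cartoon
# of the mechanic, not real accounting.
nvidia = {"cash": 100e9, "revenue": 0.0}
openai = {"cash": 0.0, "gpus": 0.0}

# Step 1: Nvidia invests $30B in OpenAI.
investment = 30e9
nvidia["cash"] -= investment
openai["cash"] += investment

# Step 2: OpenAI spends that same $30B on Nvidia chips.
openai["cash"] -= investment
openai["gpus"] += investment
nvidia["cash"] += investment
nvidia["revenue"] += investment   # booked as revenue: line goes up

print(f"Nvidia net cash change: ${nvidia['cash'] - 100e9:,.0f}")   # $0
print(f"Nvidia revenue booked:  ${nvidia['revenue']:,.0f}")        # $30,000,000,000
```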
There's shell games going on everywhere. There's proxies. You also have CoreWeave. This one's a little tricky to get your head around. Nvidia invests in CoreWeave. They're a big backer. CoreWeave buys Nvidia GPUs and then sells those Nvidia GPUs to OpenAI. Microsoft, though, is triple dipping. They have this thing figured out. They don't even need to make Windows anymore; Windows is just a passion project for them at this point, because where they're making the big bucks is they invest in OpenAI, they then serve as OpenAI's cloud compute provider through Azure, and OpenAI's products also run on Microsoft platforms, so that's licensing as well. And if you lived through 1999, this is
going to sound real familiar to what
Cisco did where Cisco backed telecom
providers. Telecom providers bought
Cisco hardware. Cisco said, "Hey, that's
not circular funding. That's actually
revenue." So, if they had a loan deal,
they provided vendor financing to
someone, they bought $20 billion worth
of Cisco equipment and then paid them
$20, even though Cisco gave them the $20
to buy the equipment in the first place
on some kind of a loan term. They
reported that as income, we use our
lemonade stand as the example. If I'm
running a lemonade stand on the corner
and you come by and you're like, "Yeah,
I mean, maybe I could go for some
lemonade." And I slip you five bucks.
And I'm like, "Pretend that I didn't do
this. Pretend that I didn't do this.
that I didn't just give you $5 and then
come to the stand and then buy $5 worth
of lemonade and then you did that and
then I told the IRS I made $5 this year.
That's exactly what these companies are
doing. And so when the music stopped for
Cisco in 2000, people found out, hey,
that's not really revenue. The company
lost 86% of its value. So multiple good
sources are saying that the ink on this deal is already signed; it just hasn't dried yet. A bunch of news outlets have been reporting on it over the past few days, but the latest funding round would put OpenAI at an $830 billion valuation. A company, remember from the beginning of our video, that's $2,500 to every American. Not just adults, but kids, infants. You're tossing everyone a brick of cash. You probably can't even toss a brick of $2,500; I don't know how much that is in hundreds, it's at least like one of those drug bricks, right? You just think about an infant, you toss them $2,500, it's probably going to hurt them, but that's $10K for the family. That's $10K for the family. That's a lot of money. That's everyone. That's the amount of money that OpenAI is valued at now. And some crazy stats on that: that's a 65x revenue multiple. Remember the math we've done on this before. They are making 4 cents on the dollar. OpenAI is not a company that is profitable, and now it is valued at $830 billion. A company that is not profitable, that does not make money. They're not in the black; they're in the red every year.
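As a rough sanity check on those figures, here's the arithmetic using only the numbers quoted in this video, with a round 330 million as the US population.

```python
# Rough sanity check using only the figures quoted above; population and
# multiple are round numbers, not precise data.
valuation = 830e9           # reported OpenAI valuation
population = 330e6          # rough US population
revenue_multiple = 65       # the 65x revenue multiple mentioned above

per_person = valuation / population
implied_annual_revenue = valuation / revenue_multiple

print(f"${per_person:,.0f} per American")                               # ~$2,515 each
print(f"${implied_annual_revenue / 1e9:,.1f}B implied annual revenue")  # ~$12.8B
```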
If the best-positioned AI company needs a massive cash influx like this, what does that tell you about the industry at large? That it's a bubble. And if we
extrapolate that to the future sentiment
on what this means for the valuation of
OpenAI and even the belief that these
other vendors and other investors have
in the success of the company, you have
their primary backer, Nvidia, cutting
their original amount that they
committed to down by 70%. That does not
signal confidence to me in the valuation
and long-term success of OpenAI or the
bubble at all, which again I've argued
has already popped. We're just waiting
for the financial ramifications at this
point. So, we'll watch the funding
round. Remember, this hasn't officially
closed yet, but is it actually going to close at $100 billion? We don't know for sure, but that's what the best guidance says right now; most of the sources are saying it closes at $100 billion.
And the big move to watch, which we'll track in Nvidia's earnings on Feb 25 (remember, we'll drop a video right after that as well), is OpenAI's chip diversification strategy.
They have a deal with Cerebras. They
already demoed some tech. We talked
about that on the channel last week.
It's not there yet. It's not as good as Nvidia's tech, but it's fast, way, way faster. It's not as accurate, and they can't run as big of a model on these chips, but it's way faster. And that's not nothing, especially if they can solve the accuracy of the models that run on those chips, which I assume they probably will inside of 6 months. So, we've got to see what OpenAI's future play is going to be on inference chips. We're
also going to keep a close eye on
terminology around this, especially coming from the tech-sponsored media. The term that we're starting to see pop up is a quote unquote sober phase for the market. I'm sure these tech-sponsored outlets would define it differently, but 24/7 Wall Street called it quote moving from exuberant promises into a more sober long-term growth phase, which is optimist-speak for the bubble already popped and we're just trying to slow-drip the news so we don't tank the economy. And the argument and
positioning of this channel, of myself,
is that AI is transformative. It is a landmark technological achievement that has already made a lot of things easier. It's going to change, to a degree, how we work, but it is not the drop-in replacement for a human, the godlike intelligence that these sages of Silicon Valley have led us to believe. It's better automation. And for what it's worth, that's very helpful. I mean, it helps us do stuff quicker and easier, and hopefully we have to do fewer menial tasks because of this as it gets more embedded into everything. But it's not a drop-in replacement for a human. That idea is ludicrous. Absolutely ludicrous. But it's what these salesmen are peddling in the valley. But the prediction and
the stance of this channel is that the
thing that is unstable is the finance
and infrastructure behind this. It is
unmaintainable. The numbers are off.
It's unprecedented in every way. And the closest thing that we have to this is the 1999 dot-com bubble, and for what it's worth, the proportions now are way worse than they were in 1999. But it's the closest analog that we have for understanding what the market is going through right now. So "sober phase" is really just code for we're going to need to really PR this bubble pop financially, or else every other sector is going to panic and we're going to be in real trouble in the market. They
don't want to say that. They don't want to spook people.
And remember, just like in the dot-com bubble, the internet wasn't fake. You're watching this video because of the internet, because of tech that was developed and started in the dot-com era. But the finances were off. The finances
But the finances were off. The finances
were incorrect, but the tech was
transformative. If you're not already
subscribed, let's fix that. Click the
subscribe button below the bell to be
notified. If you'd like to support the
channel, I've had a lot of people
reaching out in the comments about that.
Become a member of the channel. You can
sponsor me. I would appreciate that. All
of the money goes back into producing
this content. And if you really want to
dig into the facts and figures on this,
sign up for the newsletter down there in
the description. I send out a quarterly
analysis of the numbers behind the AI
finance bubble. No PR spin or messing
about. It's just the straight numbers.
If you want to dig into that, subscribe
to the newsletter. Thank you for
watching.