NVIDIA's $100 Billion Problem With OpenAI Just Got Worse

Transcript

0:00

Imagine you had so much money that you could give $2,500 to every American, not just adults, everyone, including kids, including infants, all 330 million of them. $2,500. Here you go. That means 10K for a family of four. That is how much money OpenAI is now valued at.
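A quick back-of-envelope check of that framing. The $2,500 figure and the rough population are the video's numbers; the snippet below just multiplies them:

```python
# Back-of-envelope check of the opening claim:
# $2,500 to every American, roughly 330 million people.
per_person = 2_500          # dollars per person, as stated in the video
population = 330_000_000    # rough US population used in the video

total = per_person * population
print(f"${total / 1e9:.0f} billion")  # $825 billion, close to the ~$830B valuation cited later

family_of_four = per_person * 4
print(f"${family_of_four:,} per family of four")  # $10,000
```

So the "brick of cash per infant" image and the $830 billion valuation quoted later in the video are consistent to within rounding.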

0:26

Despite the fact that $70 billion just vanished from this deal, the stock still goes up. If you missed it, or if you're new to the channel, here's a quick recap of the events that bring us up to the current day. September 2025: OpenAI and Nvidia announce a $100 billion, quote, "landmark partnership" deploying 10 gigawatts of Nvidia systems. January 2026: we reported on this channel that the deal was on ice due to internal doubts at Nvidia. And there's tons of juicy gossip there, including Jensen saying some things in private circles about the stability and long-term prognosis of OpenAI not looking so hot. Just this month, the Financial Times confirms a new deal. Nvidia is still getting involved with OpenAI, but not at that hundred-billion amount; they're now in for only $30 billion, and the structure of the deal is completely different. But remember, Jensen Huang was very careful to say, before the deal fell through (which is what it did), that the original deal was quote unquote nonbinding. That's per CNBC. The implications of this are vast. They ripple outwards. They affect everything, even outside of AI. Let's talk about where the shock wave goes.

1:42

First, the elephant in the room: this was never a real deal. Nvidia was never going to give $100 billion to OpenAI. This was a press hype move, a classic PR job: pump up the stock price so we can sell it high and get more investors on board. That's exactly what this is. This was never going to be real. And if we sort through the paperwork and go back to September 2025.

2:08

They do declare that they're doing this partnership, but it's a letter of intent, which seems solid. I mean, you write a letter of intent, you should intend on doing something. But that's our mistake for taking these companies at their word. They're just manipulating the market with a letter of intent. If you follow the numbers, things get even shadier. It's contingent on 10 gigawatts of power, but that power supply is not built yet, nor are there plans to build it. Yet when it was announced, it pushed Nvidia past a $5 trillion market cap. And just to recap OpenAI's total announced deals: $1.4 trillion total in potential commitments, i.e., farts in the wind. That's $250 billion from Microsoft Azure, $38 billion from Amazon AWS, Oracle kicking in $300 billion, CoreWeave $22 billion, AMD up to $300 billion. None of these are binding purchase orders. They are letters of intent, which, as we just saw in the case of Nvidia, don't mean anything to these people. They've ratcheted up the PR game; now a letter of intent is just an ad buy. The whole AI economy is built on announced deals ("maybe," "should we," "in two more weeks we might"), not on signed contracts.

3:22

And so here's what's likely happening. I'm reading the tea leaves, but this is where my money is. We do know, as a fact, that OpenAI was dissatisfied with the inference speed of Nvidia's chips. Inference, if you've interacted with ChatGPT, is the machine generating a response: the class of computing where an LLM generates a response and actually works with you, as opposed to training, which is the other major use of these chips. So, as per Reuters, we know that OpenAI was dissatisfied with the performance of Nvidia GPUs specifically on inference tasks. They're serving ChatGPT to a record number of people; the scalability concerns are off the chart. And at a certain level, you want to optimize a couple of things.

4:15

So my thoughts are, and this is coming from someone who's worked with software and hardware in the defense tech space: it's always a collaboration between software and hardware. If you write very efficient software, it doesn't have to use too many system resources. In fact, one of the most tried and true interview rounds for software engineers is to write algorithms that maximize the efficiency of the software. In essence, that's why software engineers get paid so much, especially at big corporations like Meta or Netflix: if they can shave 50 milliseconds of compute time off of a function. That sounds like nothing. 50 milliseconds is almost under the threshold of what we can perceive. But if you reduce 50 milliseconds across a bajillion different instances of that function call, then you've saved some serious, serious money. At a certain point, though, you might get constrained by your hardware.

5:12

This is a bit lofty, so let's do an example to keep it concrete. Think about a race car driver, a Formula 1 driver, let's say. There are two components of that equation in our simplified system: the car and the driver. Consider the car to be the hardware and the driver to be the software. If we have really bad software, like if you're someone who's never driven a Formula 1 car (I've never driven a Formula 1 car, let's say that's a safe assumption) and you just get put in one, even if it's the best car in the world, it won the world championship last year, you're probably going to crash it, if you can even get it off the line and not stall it out, right? Because the software is garbage even though the hardware is great. Conversely, if you take someone like Max Verstappen, Formula 1 world champion, and you put him in a Kia Soul, let's say, he's not going to set a good lap time, because he's in a Kia Soul. He's constrained by the hardware, despite the fact that he's maximizing all the juice out of that hardware that there is to be gotten.

6:11

So OpenAI gets to the point where they're already paying AI researchers and software engineers millions of dollars. Millions. The comp packages are insane. Insane.
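The software-efficiency argument above (shave milliseconds off a hot function, then multiply by an enormous number of calls) can be sketched with a toy example. The function names and the call count here are hypothetical, purely to illustrate the arithmetic:

```python
# Toy illustration of the "shave milliseconds off a hot function" argument.
# A classic interview-style optimization: membership tests against a list
# are O(n) per lookup; against a set they are O(1) on average.

def count_hits_slow(items, allowed):
    # 'allowed' is a list, so each 'in' check scans it linearly.
    return sum(1 for x in items if x in allowed)

def count_hits_fast(items, allowed):
    # Convert once to a set, so each 'in' check is constant time.
    allowed_set = set(allowed)
    return sum(1 for x in items if x in allowed_set)

# Same answer, far less work per call on large inputs.
items = list(range(10_000))
allowed = list(range(0, 10_000, 2))
assert count_hits_slow(items, allowed) == count_hits_fast(items, allowed)

# The aggregate-savings arithmetic from the video: 50 ms saved per call,
# across a hypothetical one billion calls per day.
saved_per_call_s = 0.050
calls_per_day = 1_000_000_000
compute_days_saved = saved_per_call_s * calls_per_day / 86_400
print(f"{compute_days_saved:,.0f} compute-days saved per day of traffic")
```

Fifty milliseconds is imperceptible once, but at a billion calls a day it works out to hundreds of machine-days of compute per day, which is exactly why the software side gets maxed out first.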

6:23

They're work here for a year and cash

6:25

out after we go public and you don't

6:27

have to work again in your life type of

6:29

money. So, they have the best people on

6:30

it. They have maxed out the software on

6:33

the inference side. They cannot do

6:35

inference much more efficiently. They're

6:38

hardware limited. They need a better

6:39

car. They have the best. They have the

6:41

world champion racing drivers, but they

6:43

don't have the cars to allow them to go

6:44

as fast as they want to do. So, what

6:46

does OpenAI do? We're hardware

6:47

constrained. Nvidia is what we use for

6:49

inference. Is anybody else making

6:51

inference chips that are faster? And

6:53

confirmed by Reuters, OpenAI starts, you

6:55

know, looking around seeing there's some

6:57

competitors out here. We got AMD,

6:59

Cerebrris, and Grock. We covered that

7:01

acquisition by Nvidia all the way back

7:03

last uh last fall. And they start

7:05

thinking, you know, hey, maybe we need,

7:06

you know, to start on boarding some of

7:08

these other providers for inference. The

7:09

most damning thing, and I I'll translate

7:11

the slimy PR speak for you, is Altman

7:14

says on X, he tweets. He says, quote,

7:17

"We love working with Nvidia." The

7:19

translation of that into regular human

7:22

is, "Oh man, we really got into bed with

7:25

Nvidia. We're not sure about it, and now

7:27

it's really kneecapping us." Then you

7:30

have more slimy speak coming from Jensen

7:33

when he's in Taiwan and interviewed by

7:35

some reporters. And he calls those

7:37

dissatisfaction reports quote unquote

7:39

nonsense. He calls them nonsense while

7:42

slashing the deal by 70% though. So I

7:44

guess the actions speak a little louder

7:46

than words in that situation. But here's

7:47

where it gets really interesting and

7:49

maybe it's interesting to you, maybe

7:51

it's not. If you live through 1999,

7:53

maybe you're just like, "Ah, again, here

7:54

it goes." And it's the circular funding.

7:57

We've talked about this many times on

7:58

the channel, but a quick refresher.

8:00

Nvidia says, "Here's $30 billion to

8:02

OpenAI." The contingency being with

8:03

those $30 billion, you need to buy $30

8:06

billion worth of our chip. So that money

8:08

comes back to us. Nvidia's revenue goes

8:10

up, line go up, investor happy, investor

8:14

toss more money at Nvidia, and Nvidia

8:16

can invest more in AI companies. It's

8:19

the vehicle for this bubble, the nexus

8:21

of funding. There's shell games going on

8:24

everywhere. There's proxies. You also

8:26

have Cororeweave. This one's a little

8:28

tricky to get your head around. Nvidia

8:30

invests in Corewave. They're a big

8:32

backer. Coreweave buys Nvidia GPUs and

8:36

then they sell those Nvidia GPUs to

8:38

OpenAI. Microsoft though is triple

8:41

dipping. They they have this thing

8:43

figured out. They don't even like need

8:45

to make Windows anymore. Windows is just

8:47

like a a passion project for them at

8:50

this point because where they're making

8:51

the big bucks is they invest in OpenAI.

8:54

They then serve through Azure as

8:58

OpenAI's cloud compute provider. And

9:00

OpenAI's products also run on Microsoft

9:04

platforms. So that's licensing as well.
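The Nvidia-to-OpenAI round trip described above can be reduced to a toy ledger. The dollar amounts are the video's; the point is that reported revenue and net new cash are very different quantities:

```python
# Toy ledger for the circular-funding loop described above:
# Nvidia invests $30B in OpenAI, and OpenAI spends that $30B on Nvidia chips.

nvidia_investment_out = 30   # $B: cash Nvidia sends to OpenAI
chip_purchase_back = 30      # $B: cash OpenAI sends back for GPUs

# What the income statement shows vs. what actually moved.
nvidia_reported_revenue = chip_purchase_back
nvidia_net_cash_change = chip_purchase_back - nvidia_investment_out

print(f"Reported revenue: ${nvidia_reported_revenue}B")  # $30B
print(f"Net cash change:  ${nvidia_net_cash_change}B")   # $0B: the money went in a circle
```

Thirty billion of "revenue" on paper, zero net dollars in: that asymmetry is the whole mechanism, and it is the same shape as the Cisco vendor-financing story that follows.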

9:07

And if you lived through 1999, this is going to sound real familiar, like what Cisco did: Cisco backed telecom providers, telecom providers bought Cisco hardware, and Cisco said, "Hey, that's not circular funding, that's revenue." So if they had a loan deal, where they provided vendor financing to someone, and that someone bought $20 billion worth of Cisco equipment and paid them the $20 billion, even though Cisco gave them the $20 billion to buy the equipment in the first place on some kind of loan term, they reported that as income. We used our lemonade stand as the example: if I'm running a lemonade stand on the corner and you come by like, "Yeah, I mean, maybe I could go for some lemonade," and I slip you five bucks and say, "Pretend I didn't do this. Pretend I didn't just give you $5," and then you come to the stand and buy $5 worth of lemonade, and then I tell the IRS I made $5 this year. That's exactly what these companies are doing. And so when the music stopped for Cisco in 2000, people found out, hey, that's not really revenue. The company lost 86% of its value.

10:12

So multiple good sources are saying that the ink on this deal is already signed; it's just yet to dry. A bunch of news outlets have been reporting on it over the past few days, but the latest funding round would put OpenAI at an $830 billion valuation. A company, remember from the beginning of our video, that's $2,500 to every American. Not just adults, but kids, infants. You're tossing them a brick of cash. You probably can't toss like a brick of $2,500.

10:40

I don't even know how much that is in hundreds. I mean, it's at least like one of those drug bricks, right? You just think about an infant, you just toss them $2,500, like, it's probably going to hurt them. But that's 10K for the family. That's 10K for the family. That's a lot of money. And that's everyone. That's the amount of money that OpenAI is valued at now.

11:00

And some crazy stats on that: that's a 65x revenue multiple. Remember the math we've done on this before; they are making 4 cents on the dollar. OpenAI is not a company that is profitable, and now it is valued at $830 billion. A company that is not profitable, that does not make money. They're not in the black; they're in the red every year. If the best-positioned AI company needs a massive cash influx like this, what does that tell you about the industry at large? That it's a bubble. And if we extrapolate that to the future sentiment on what this means for the valuation of OpenAI, and even the belief that these other vendors and investors have in the success of the company: you have their primary backer, Nvidia, cutting the original amount they committed to down by 70%. That does not signal confidence to me in the valuation and long-term success of OpenAI, or the bubble at all, which again I've argued has already popped. We're just waiting for the financial ramifications at this point.

11:57

So we'll watch the funding round. Remember, this hasn't officially closed yet. Is it actually going to close at $100 billion? We don't know for sure. We don't know for sure. That's what the best guidance says right now.
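For scale, the 65x multiple mentioned a moment ago pins down the implied revenue behind the valuation. This is just the video's two numbers divided, not an independent estimate:

```python
# Implied revenue behind the valuation figures quoted in the video.
valuation_b = 830      # $B valuation from the latest reported funding round
revenue_multiple = 65  # the "65x revenue" multiple cited above

implied_revenue_b = valuation_b / revenue_multiple
print(f"Implied annual revenue: ~${implied_revenue_b:.1f}B")  # ~$12.8B
```

Roughly $13 billion of revenue, at the 4-cents-on-the-dollar economics described above, is what an $830 billion price tag is resting on.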

12:07

That's what most of the sources are saying: that it's closing at $100 billion. And the big move to watch, and we're going to watch this in Nvidia's earnings on Feb 25 (remember, we'll drop a video right after that as well), is OpenAI's chip diversification strategy.

12:24

They have a deal with Cerebras. They already demoed some tech; we talked about that on the channel last week. It's not there yet. It's not as good as Nvidia's tech, but it's fast. It's way, way, way faster. It's not as accurate, and they can't run as big of a model on those chips, but it's way faster. And that's not nothing, especially if they can solve the accuracy of the models that run on those chips, which I assume they will, probably inside of 6 months. So we've got to see what OpenAI's future play is going to be on inference chips.

12:54

We're also going to keep a close eye on the terminology around this, especially coming from the tech-sponsored media. The term that we're starting to see pop up is a quote unquote "sober phase" for the market. I'm sure these tech-sponsored outlets would define it differently, but 24/7 Wall Street called it moving, quote, "from exuberant promises into a more sober long-term growth phase," which is optimist-speak for "the bubble already popped and we're just trying to slow-drip the news so we don't tank the economy."

13:27

And the argument and positioning of this channel, of myself, is that AI is transformative. It is a landmark technological achievement that has already made a lot of things easier.

13:41

It's going to change, to a degree, how we work. But it is not the drop-in replacement for a human, the godlike intelligence, that these sages of Silicon Valley have led us to believe. It's better automation. And for what it's worth, that's very helpful. It helps us do stuff quicker and easier, and hopefully we have to do fewer menial tasks as it gets more embedded into everything. But it's not a drop-in replacement for a human. That's ludicrous. Absolutely ludicrous. But it's what these salesmen are peddling in the valley.

14:14

The prediction and the stance of this channel is that the thing that is unstable is the finance and infrastructure behind this. It is unmaintainable. The numbers are off. It's unprecedented in every way. And the closest thing that we have to this is the 1999 dot-com bubble, where, for what it's worth, the proportions now are way worse than they were in 1999. But it's the closest analog that we have for understanding what the market is going through right now. So "phase" is really just code for "we're going to need to really PR this bubble pop financially, or else every other sector is going to panic and we're going to be in real trouble in the market." They don't want to say that. They don't want to spook people. And remember, just like in the dot-com bubble, the internet wasn't fake. You're watching this video because of the internet, because of tech that was developed and started in the dot-com era. The finances were off. The finances were incorrect, but the tech was transformative.

15:08

If you're not already subscribed, let's fix that. Click the subscribe button and the bell below to be notified. If you'd like to support the channel (I've had a lot of people reaching out in the comments about that), become a member of the channel. You can sponsor me; I would appreciate that. All of the money goes back into producing this content. And if you really want to dig into the facts and figures on this, sign up for the newsletter down there in the description. I send out a quarterly analysis of the numbers behind the AI finance bubble. No PR spin or messing about; it's just the straight numbers. If you want to dig into that, subscribe to the newsletter. Thank you for watching.

Interactive Summary

This video explores the massive $830 billion valuation of OpenAI and its complicated partnership with Nvidia, highlighting how a once-touted $100 billion deal has been reduced to $30 billion. The speaker critiques the AI industry's reliance on non-binding "letters of intent" and "circular funding"—where investors provide capital that is immediately used to buy their own products—drawing direct parallels to the 1999 dot-com bubble. Furthermore, it details OpenAI's technical frustrations with Nvidia's inference speeds and their search for more efficient hardware alternatives to maintain scalability.
