
How Chinese A.I. Could Change Everything | Dr. Michael Power of Kaskazi Consulting

Transcript

0:00

When I speak to really well-informed

0:01

people of the US AI ecosystem, I'm

0:04

horrified by how little they know about

0:06

the competition. The Chinese approach,

0:08

which is open source, open weight

0:09

specifically, will likely win out.

0:11

There's not much road ahead for Nvidia

0:13

to continue to miniaturize. The

0:14

Chinese are now moving, I think, into

0:16

the next stage. Smart factories are just

0:17

spreading across China at an

0:18

extraordinary rate. Last year, China

0:19

installed more robots than the rest of

0:21

the world put together. What DeepSeek

0:22

came up with is far more radical than

0:24

anything OpenAI has ever come up with. The

0:26

amount of debt that's creeping into the

0:27

system both on and off balance sheets at

0:28

the moment should be of concern. It's

0:30

particularly of concern in related areas

0:31

like, dare I say it, Oracle. And all the

0:34

numbers that are being talked about by

0:35

the likes of OpenAI every now and again

0:38

start to almost approach the same sort

0:40

of level of absurdity. If you're on

0:41

bubble watch at the moment you focus on

0:42

Oracle we're heading towards some sort

0:44

of crisis point.

0:45

>> I'm joined by Michael Power of Kaskazi

0:48

Consulting. Michael is a veteran of

0:51

macro strategy among many topics.

0:54

Michael, it's great to see you. But you

0:56

right now have written an essay

0:58

that is kind of blowing my mind. It is

1:00

an extremely in-depth analysis of US AI

1:04

architecture and Chinese AI

1:07

architecture. You find that Chinese AI

1:09

architecture has significant

1:12

advantages over the US AI architecture

1:15

and that basically the hundreds of

1:18

billions of dollars and trillions of

1:19

dollars that are currently and going to

1:20

be invested in US AI right now may be

1:24

basically a bust. So that this has

1:26

extreme geopolitical consequences for

1:29

the entire world, China and the US,

1:31

but also economic and market

1:33

consequences as well. So there's so many

1:36

strands we can grab on to, but just how

1:39

about you take us into the

1:41

journey. How did you first start

1:44

thinking about this? Tell us about the

1:45

process of you discovering this and then

1:48

we'll just get into it.

1:49

>> Well, thank you first of all, Jack, for

1:51

having me. I think that I'm now in

1:53

semi-retirement, though I've discovered

1:55

that the word retired doesn't exist in

1:57

retirement. And I'm one of these people

2:00

that was determined not to allow my

2:02

brain to atrophy. and I still continue

2:04

to make plenty of speeches um on those

2:07

more traditional subjects that you

2:08

mentioned before. But nevertheless um

2:11

I've made it my business to try and

2:12

understand probably the great uh theme

2:15

of the world today. Not least because it

2:17

has completely enraptured Wall Street.

2:20

Um and I felt that uh in order to be

2:24

able to have a meaningful

2:26

contribution to make I needed to

2:28

understand it. So what I did is that for

2:30

the first six weeks I just immersed

2:32

myself in everything AI and in the first

2:35

instance that basically meant learning

2:38

the language of AI. Um because a lot of

2:41

it is jargon which is uh not easy to

2:44

understand. Um and I first translated it

2:47

into language which I could understand

2:50

and any reasonably intelligent person

2:52

such as yourself could understand as

2:54

well. And when I was doing this, I

2:57

started to realize coming from an

2:59

objective perspective that the narrative

3:02

that dominates much of uh Wall Street

3:05

thinking um was not as as strong as uh

3:09

it was made out to be and that the

3:11

Chinese although I don't think the

3:13

situation is as we speak today one of

3:16

the Chinese leading deep down the

3:19

structural process that they're putting

3:21

in place with regards to how they're

3:23

approaching AI I I think uh roll forward

3:27

three years uh will um outmaneuver that

3:30

of the United States and that their

3:33

model which is first and foremost built

3:35

on the idea of open source rather than

3:38

closed source and we can come to that if

3:39

you will um but nevertheless their model

3:43

um has a lot more runway ahead of it and

3:45

it's a lot cheaper runway uh than does

3:49

uh the US model which I think is

3:51

actually starting to run out of road,

3:53

and if you look at the US AI

3:56

ecosystem, the valuations in that I

3:59

would roughly estimate at 15 trillion.

4:01

If you look at all the publicly traded

4:03

securities and then the venture capital

4:06

funded companies, whereas the the

4:08

Chinese the market cap of the Chinese

4:09

ecosystem like a lot of it is is private

4:12

or government-backed. But I mean, you

4:13

know, Alibaba has a has a market cap of

4:16

less than a half a trillion dollars,

4:17

which is certainly still a very large

4:20

company, but uh pales in comparison.

4:23

First of all, Michael, yeah, I just

4:25

want to share some of the

4:27

consequences. I mean, you said that

4:29

basically the grim reaper is coming for

4:32

the American AI bubble. So we'll get

4:34

into that in a second, but

4:36

first, what is the source of the Chinese

4:40

advantage over US technology and and how

4:44

does that disprove or challenge the

4:47

consensus uh within America about US AI

4:51

advantage uh in in Silicon Valley as

4:54

well as Wall Street?

4:55

>> Well, I'm going to use a four-letter

4:57

word on a very erudite um discussion

5:00

like this. The essence of the Chinese AI

5:03

approach is that it's free.

5:05

And I can't say that enough. And that is

5:10

because they have a completely different

5:12

philosophy as to what AI should be as

5:16

compared to the US model.

5:19

China is building a structure at the

5:20

moment where AI will be a utility like

5:23

electricity

5:25

and the value that is going to be

5:27

derived from using the electricity is

5:30

where they're going to benefit but the

5:31

electricity itself is a pretty no value

5:35

product. Yes, you can make a little bit

5:37

of money on the way by generating

5:38

electricity but as you well know uh the

5:42

real valuation from electricity comes

5:44

from what it is purposed to do. Uh

5:47

whereas the US model is essentially that

5:50

it's a service that can be monetized if

5:53

they can monetize it to the degree that

5:55

they need to given capital expenditure

5:57

they've undertaken but that it's a

5:59

service and these two different

6:02

philosophies profoundly different

6:04

philosophies mean that there are two

6:06

paths developing and to some extent um

6:10

the analogy is not precise but it's good

6:12

enough um essentially the Chinese

6:15

approach is uh the android approach

6:18

because as you probably know although

6:20

Android is technically owned by Google,

6:24

Google derives no money from that. It's

6:26

actually controlled by a foundation and

6:28

that foundation uh is a nonprofit. But

6:32

compare that to Apple and their

6:34

ecosystem which is uh you know extremely

6:37

profitable. Um Android actually follows

6:41

another great example and that is of

6:43

course Linux. Um and if you look at the

6:45

top 100 supercomputers in the world

6:48

today,

6:49

Linux runs 100 out of 100. And so

6:54

essentially there is this philosophy of

6:57

open source or open weight which is

7:00

essentially the Chinese uh approach to

7:02

AI versus closed source. And my sense is

7:08

that over time, as with Linux, as with

7:10

as with Android, um the Chinese

7:14

approach, which is open source, open

7:16

weight specifically, um will likely win

7:20

out, which is why I say they have a

7:22

longer runway ahead of themselves uh

7:24

than uh does the United States. I should

7:27

say in this

7:29

that the US may be like Apple is able to

7:32

do create an ecosystem um that Apple is

7:36

secure in. And there are four countries

7:38

in the world where Apple is more popular

7:42

um than uh Android but everywhere else

7:45

in the world um Android is more popular

7:47

than Apple. Those four countries by the

7:49

way are the US, Canada, UK and Sweden.

7:52

And what we may be beginning to see in

7:54

a very analogous sense is the emergence

7:57

of a bifurcated world where, because in

8:01

large part the Chinese offering is free

8:05

uh it's winning big time outside of the

8:08

core uh capitalist world centered on

8:11

Wall Street. And so, and I've often had

8:14

chats to people who are essentially

8:17

Apple heads or iOS heads and they don't

8:20

really actually grasp the world of

8:22

Android. They don't actually understand

8:24

that there is another way. Um and I

8:27

think what's happening at the moment

8:29

that uh there is another way uh in AI

8:33

and it's winning big time uh in the

8:36

world at large not specifically by only

8:39

looking at what's happening in the

8:40

United States. And you note that Chinese

8:44

AI models have a very high rate of

8:47

adoption and that a venture capitalist

8:49

I believe from A16Z

8:51

said that 80% of the startups that they

8:53

fund are using Chinese models not US

8:57

models which is certainly very

8:58

surprising. So the open source

9:00

versus closed source —

9:02

closed source what it literally means is

9:03

that the model's weights are not

9:06

disclosed whereas open source they are.

9:09

But you're referring to the pricing

9:11

model. So it goes further than that.

9:14

>> Okay.

9:15

>> Uh as an open-source model, you can play

9:18

with it,

9:20

>> but with a closed source model, you

9:21

can't. And I always make the um

9:24

distinction that closed source is like

9:27

ordering beef Wellington. And a perfect

9:28

beef Wellington arrives on your plate.

9:31

open source. You order beef Wellington,

9:33

but when it arrives, you get a list of

9:36

all the ingredients and you can play

9:37

with it and mix the ingredients up and

9:39

completely reform it and conceivably

9:43

create egg and bacon ice cream out of

9:44

it. Um, but nevertheless, you have

9:47

complete freedom once you receive your

9:49

beef Wellington in the open-source world

9:53

to reconfigure it to whatever way you

9:55

like.
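To make that open-weight versus closed-source distinction concrete, here is a minimal code sketch (not from the essay; the API endpoint is hypothetical and the model name is only an illustrative open-weight example): with a closed model you can only call the vendor's hosted service, while with an open-weight model you download the weights themselves and are free to rework them.

```python
# Illustrative sketch only (not from the essay). Model and endpoint names are
# examples, not statements about any particular provider.

# Closed source: you can only call a hosted API; the weights never leave the
# vendor, so you cannot inspect or modify the "recipe".
import requests

def ask_closed_model(prompt: str, api_key: str) -> str:
    resp = requests.post(
        "https://api.example-provider.com/v1/chat",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
    )
    return resp.json()["text"]

# Open weight: you pull the weights down and can run, fine-tune, quantize,
# or otherwise "re-mix the ingredients" however you like.
from transformers import AutoModelForCausalLM, AutoTokenizer

def ask_open_model(prompt: str, model_name: str = "Qwen/Qwen2.5-7B-Instruct") -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

In the open-weight case nothing stops you from fine-tuning, quantizing, or reassembling the model, which is the freedom the beef Wellington analogy is pointing at.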

9:55

>> Okay. Okay. But when you say it's

9:57

free, you know, so ChatGPT is

9:59

massively subsidized and the there are

10:01

free versions of Gemini, ChatGPT, all

10:03

the American models and I'm sure there

10:05

are of China, but are you saying that

10:06

China doesn't charge its users or that

10:08

the charging model,

10:10

the pricing model is different?

10:12

>> I'd say less than 5% of the Chinese

10:15

models um have uh a user fee attached to

10:20

them. Um and it's only for unbelievably

10:23

specialized areas. And to be perfectly

10:25

honest, um, generally speaking, the

10:28

answer is no, they don't have a fee

10:30

attached to.

10:31

>> Okay. But then what is the business

10:32

model? You know, a key part throughout

10:35

your piece, which which is excellent and

10:36

what we'll link for for our listeners to

10:38

to to read it, but a key part of your

10:41

piece is that China has a cost

10:42

advantage, but they still have costs.

10:44

So, how is the Chinese LLMs and the

10:47

clouds and the AI in China going to be

10:49

funded if they don't make any money?

10:51

Well, just take Qwen, which is by far

10:54

the most powerful of all of them. That

10:56

is Alibaba's large language model. Qwen

11:00

is applied and used everywhere else

11:02

within the Alibaba community. So, Taobao,

11:05

which is their online shopping or Ali

11:07

Logistics or Alipay will use almost

11:11

certainly Qwen as their model. Now in

11:15

using that model

11:17

there there may be a and often is a

11:20

small fee that gets paid across from Taobao

11:24

to essentially the central resource that

11:27

is Tongyi, which is uh the IT

11:31

division within Alibaba, and that's

11:34

how Alibaba then gets funded. Now this

11:37

is not the same I must say immediately

11:39

for all Chinese LLMs. There are some now

11:42

that are not connected to a broader

11:45

commercial network but we'll come on to

11:48

that later because there is a mechanism

11:50

now arising when they can be um and but

11:54

the central point is is that uh it's

11:57

like a central expense the research and

11:59

development budget as it were of Alibaba

12:03

that goes towards Qwen;

12:06

some fees do come in from uh other parts

12:09

of the Alibaba Empire to help fund uh

12:13

that budget.

12:14

>> So, let's say Michael, if you're right,

12:19

in five years, what does the world look

12:21

like? Uh Nvidia, all of

12:24

>> Can I just follow on something there

12:26

just because it's really it's clever

12:29

because it's only in the last week that

12:30

it's happened. As I said to you before

12:33

that Google essentially owns um Android,

12:37

but doesn't really make much money from

12:39

it. However, it has recently agreed

12:42

because virtually every Samsung phone

12:44

that I know of is run on Android. It's

12:47

done a deal with Samsung that in the

12:51

search engine of every Samsung phone

12:53

going forward will be Gemini, in other

12:57

words, Google's LLM. and Samsung is now

13:02

paying a fee to Google for the rights to

13:07

have that embedded in their phones. So

13:10

there is a way an example of how it's

13:12

being done in that context. But this is

13:15

just happening everywhere within the

13:17

Chinese community. It isn't just a you

13:19

know one-off exception. But but Android

13:22

is able to be partly monetized through

13:24

arrangements like that. And I'm just

13:26

using that as an example.

13:28

>> Thank you for explaining that Michael.

13:29

So now we have just a sense

13:31

of our terms, what we're dealing

13:33

with here. So if you're right in 5

13:36

years what does the world look like the

13:39

publicly traded semiconductor supply

13:41

chain of of US AI the privately backed

13:44

AI models uh most recently you know uh

13:46

OpenAI, Anthropic, etc. What does that

13:50

world look like? I imagine that those

13:52

companies would have you see severe

13:54

challenges and how does that world

13:55

differ from the scenario envisioned

13:58

by many of the

14:01

rosy-eyed US AI optimists who

14:04

probably you know expect Nvidia's market

14:06

cap to be uh above 10 trillion and

14:08

expect OpenAI to be massively

14:10

profitable and the rest. Well, first

14:11

of all we have to be careful that when

14:13

we use the term world we don't just mean

14:16

US world we mean world

14:20

But unfortunately when I'm listening to

14:22

Bloomberg and CNBC and they use that

14:24

term world, it generally speaking means

14:28

like Apple world. It's just the world uh

14:31

that is defined by uh the United States.

14:34

There are some international dimensions

14:36

to it but it's a very small part of the

14:38

whole. So if we're talking about what

14:41

will US world to start to answer your

14:44

question look like I think we're going

14:46

to see that and we're already beginning

14:49

to see when Andreessen Horowitz is seeing

14:53

you know, 80 to 90% of the people presenting to

14:56

it using free software from China that

15:00

there are and I'm borrowing a word here

15:02

from Jamie Dimon and you'll understand

15:05

it given your background cockroaches all

15:07

over the place at the moment. um that

15:09

are essentially starting to be used.

15:12

We're seeing Airbnb now is essentially

15:14

moved over uh to Qwen. Um but

15:17

essentially there is nothing stopping

15:20

those people who find uh this software

15:24

and I'm going to use a phrase which is

15:26

not really fair because it's actually

15:27

very good but good enough. Let's just

15:29

start with good enough. And they're

15:32

finding if they can use it for free and

15:33

it's good enough. And just take the

15:35

example of Qwen, it's fluent in

15:38

something like 120 languages, which if

15:41

you're Airbnb, is rather a good plus to

15:44

have. Um, so I think what we're seeing

15:48

now is that there is leakage between the

15:51

two ecosystems that have essentially

15:53

been described by us, US world and

15:55

world. And there is leakage from US

15:59

world into world. So projecting five

16:03

years forward to go back to to your

16:05

question. I see this leakage is

16:07

continuing. However, it goes beyond that

16:10

because there are technological issues

16:12

that are now beginning to question

16:15

firstly the making of the hardware and

16:17

secondly construction of the software.

16:20

And they feed off each other but uh they

16:23

need to be looked at separately before

16:25

we combine them. First of all, I don't

16:28

think Nvidia's hold on the chip market is

16:31

going to be anything like as strong. And

16:33

again, uh part of this is because what

16:36

Amazon's doing uh with its own

16:39

chipmaking, what Google is doing, Amazon

16:41

has the Trainium, Google has the TPU. Uh

16:44

and I think what we are seeing is that

16:45

even within the United States, a

16:47

low-level civil war is breaking out

16:49

between the major players in in big AI.

16:53

um so much so that the dependency on

16:56

Nvidia chips is starting uh to be

16:59

reduced. But outside of um US AI in the

17:04

rest of the world there is no doubt in

17:06

my mind that we are seeing all sorts of

17:08

efforts to diversify

17:11

uh the Chinese even have a word for

17:13

de-Nvidia-ization,

17:15

um but diversify away from dependence on

17:19

let's just call it expensive chips and

17:21

most of those expensive chips

17:23

historically have been Nvidia chips. So

17:26

in the hardware side there are all sorts

17:28

of options that are starting to open up.

17:30

First of all, also and you'll see in the

17:32

open part of my my presentation, the

17:35

whole area of Moore's law is starting to

17:38

come under pressure because basically

17:40

we're getting to a point where you can't

17:42

really make chips much smaller um and

17:44

still hope to carry uh to continue uh

17:48

the compute level that comes from those

17:50

chips. There are rules of physics, rules

17:53

of what I call material, chemistry, and

17:56

then rules of economics. I call them the

17:58

three assassins that are actually saying

18:00

that there's not much road ahead for

18:03

Nvidia to continue to miniaturize

18:06

its uh its chips. What China is doing is

18:10

essentially creating a and they're not

18:12

alone in this. It's happening even in

18:14

the United States, but a whole new

18:16

ecosystem that is basically built around

18:18

not 2 or 3 nm but somewhere in

18:21

the 14 to 18 nm specs.

18:26

And what they're doing is that they are

18:28

building what I call cognitive

18:30

skyscrapers, cognitive towers. So they

18:33

use the chip as the base, but then they

18:35

layer other sorts of chips and uh

18:37

memories and various other things on top

18:39

of it and create a sort of megallike

18:41

world. Um and that is increasingly the

18:45

way forward. So I don't think China is

18:47

uh thinking smaller, it's thinking

18:49

smarter. Um, and this is one way where

18:52

you can continue to be very relevant in

18:55

the chip space without chasing um down

18:58

that rabbit hole um the idea of making

19:01

the chips ever more small. Um because as

19:04

I said Moore's law is close to dying. It's

19:08

on its deathbed. It may not have passed

19:10

its last breath but it's not looking

19:13

good at the moment. So in the hardware

19:16

space,

19:17

uh, Nvidia and we're already seeing

19:19

certain, um, things that they're doing

19:21

here. They're doing it more in the in

19:23

the in the software space, but they too

19:24

are also starting quietly to do it in

19:26

the hardware space, too. But Nvidia is

19:29

recognizing that its model up until now

19:31

cannot be the only way forward for it.

19:33

It has got to actually diversify away,

19:35

actually potentially look at slightly

19:37

larger chips actually look and we

19:39

haven't talked about software yet but

19:41

actually look at complementary software

19:43

with those slightly larger chips in

19:46

order for it to remain relevant. The

19:48

problem with that for Nvidia is that the

19:51

margins associated with that alternative

19:53

are much much much lower than they are

19:54

with the model that they're now pursuing

19:56

which then starts to

19:59

moving forward potentially undermine the

20:01

hold that Nvidia has the margins that it

20:04

has and then that will potentially play

20:06

through into profits and therefore stock

20:08

market valuations. So that's what's

20:10

happening on the hardware side. And I

20:13

truly generalized in a number of areas

20:15

there in order to be able to answer your

20:17

question, but that's essentially what's

20:19

happening. And as I said, the Chinese

20:21

are doing it. Yes. And and they've got a

20:24

vested interest to do it. But even

20:26

Amazon's doing it. Even Google's doing

20:28

it um at the moment that they want to

20:30

diversify away from these high-priced

20:33

chips that Nvidia has that increasingly

20:36

are not built for purpose.

20:39

and uh the new world um that we're

20:42

moving into for chips requires a

20:44

different combination. So that's the

20:47

that's what's happening on the

20:48

hardware side. On the software side, um

20:52

there's breakthroughs happening

20:54

everywhere. Um and it's not that Nvidia

20:57

um hasn't got a very powerful software

21:01

hold on its chips. It does it

21:03

through something known as CUDA. um

21:06

essentially it ties in developers to the

21:08

way that uh they're able to use Nvidia's

21:11

chips. What's happening at the moment is

21:14

that um both in the US but especially

21:17

in China, people

21:20

are finding ways to circumvent it, and this

21:23

they are doing on both sides of chips:

21:27

the training of them and the inference

21:30

that is derived once you train them.

21:32

So you endow a chip with knowledge and

21:34

then you get that chip uh able to answer

21:37

a question that you and I might pose to

21:39

it. Um but in the training stage which

21:42

is where Nvidia has especially good hold

21:45

um uh there are breakthroughs taking

21:49

place the most important of which

21:50

happened in the last 10 days um which is

21:54

something that Deep Seek did and it

21:56

happened on New Year's Eve. I don't

21:57

think the market has truly understood

21:59

the scale of what what that particular

22:02

uh breakthrough means. But it's also

22:04

happening uh in the inference side um

22:07

which is uh where to be perfectly honest

22:11

lesser chips are used often chips that

22:13

are previously involved on the training

22:15

side after two or three years they've

22:17

still got a little bit of useful life

22:19

left in them. So they get transferred

22:20

over to the inference side for another

22:22

couple of years where they can still be

22:24

usefully employed. But the margins on

22:26

inference chips are much lower than the

22:28

margins on training chips. But the

22:31

essential point is, Nvidia here is now

22:34

facing attacks on both fronts, hardware

22:37

and software. So, it sounds like the the

22:40

consequences of what what you're saying

22:43

are immense because all of this money in

22:47

the United States in public markets and

22:49

private markets uh is being deployed on

22:52

the premise that AI margins uh might be

22:58

slightly lower than the traditional

23:00

extremely profitable US software

23:02

business model that has 80% 90% gross

23:04

margins. slightly lower margins but

23:06

still a very profitable enterprise. You

23:09

are saying that it is we're likely

23:11

headed to a world where profit margins

23:14

are extremely low and it's something of

23:16

a um you know communitarian co-op model

23:19

an open-source model and that you know

23:22

the consequence you're saying Michael is

23:23

that the trillions of dollars being

23:24

spent by the US is basically just cash

23:26

incineration and that a lot of investors

23:29

are going to lose money. Obviously,

23:30

it'll take time to pan out and I'm not

23:32

saying it's going to happen tomorrow,

23:34

but I did my PhD thesis ultimately on

23:37

the concept of commoditization

23:39

and um I am able to recognize the traits

23:43

that indicate that a particular product

23:45

or service might be uh being subjected

23:48

to the forces of commoditization and I

23:51

can now see those forces gathering both

23:54

on the hardware side and on the software

23:56

side.

23:57

>> Yes. And you you come from South Africa

23:59

which is a you know dominant player in

24:01

commodities. You know it is no surprise

24:03

that the you know South African uh stock

24:07

market is a tiny fraction of the size of

24:09

the US stock market which you know has

24:11

commodities but also has these things.

24:12

commod commodities. It is hard to make

24:14

money from commodities and they

24:16

certainly do not command 40 or 50 times

24:18

earnings multiple

24:19

>> and South Africa is an example of that.

24:20

But there are plenty more commodities in

24:22

the world than simply those that are uh

24:25

like um you know wheat or or or or

24:28

metals or any of those or even dare I

24:31

say it some of the energy commodities.

24:33

Uh there are plenty of other commodities

24:35

that have existed that have come into

24:37

being. I mean I would say that um you

24:40

know petrol-based automobiles are on the

24:44

verge of becoming commodities. Um and so

24:47

the concept of commoditization is not

24:50

exclusive as I say to the traditional

24:53

term that is described as a commodity.

24:56

>> Yes. And Michael so I've said what is in

24:59

your piece about how it could you know

25:01

end badly for US investors because of

25:02

the consequences. But can can I get you

25:04

to say it?

25:05

>> Yes. Um, essentially what is happening

25:08

is that China has realized that and part

25:12

of this has come about by the fact that

25:13

they've been subjected to certain

25:15

embargoes or controls,

25:18

particularly from the United States.

25:20

China has been forced to look for

25:22

another way, another tao as I sometimes

25:25

like to call it because uh the tao is

25:28

the other way in Chinese, in

25:30

Mandarin Chinese. And what's happening

25:33

is that uh necessity being the mother of

25:36

invention um because they didn't get

25:38

those Nvidia chips, they've had to find

25:40

other ways of doing it. And as I say,

25:43

they they thought not smaller, they

25:45

thought smarter both on the software and

25:47

on the hardware side. And what Deep Seek

25:50

did last year and what I predict it's

25:53

going to do this year and we've already

25:55

had a foretaste of that with their

25:57

latest paper is uh essentially challenge

26:02

um the margins that exist on the

26:03

software side of the business. Sun Tzu

26:06

would basically advise any Chinese

26:08

general if you don't think you've got

26:11

the right number of forces to be able to

26:12

beat the enemy change the battlefield.

26:15

And there are plenty of almost Sun Tzu

26:17

pieces of advice that are now playing

26:19

out in the world of AI. Um, as I say,

26:23

there's going to be no gunfights at the

26:25

3 nm or 2 nm corral between Chinese chip

26:28

makers and let's just say Nvidia because

26:31

the Chinese aren't going to fight there.

26:33

They know that's not a gunfight they

26:35

can win. But, uh, they are now shifting

26:38

the battlefield and shifting it

26:40

dramatically.

26:42

And when you are dare I say stuck in the

26:45

US ecosystem, I don't think you can

26:47

imagine that there might actually be

26:49

another battlefield for you. But there

26:51

is and the rest of the world is catching

26:53

on. Um and it is starting to spread both

26:57

software and hardware. Although the

26:58

Chinese don't have a lot of chips to

27:01

spare to export at the moment, but come

27:02

2028, the forecast is that China will be

27:05

producing more chips than it needs. Um

27:07

and it already dominates um the world of

27:10

what's called commodity chips to go back

27:12

to where we were before. Um you may have

27:15

remember the story with Nexperia which was

27:17

a Dutch company that got essentially

27:19

shut down by the Dutch authorities at

27:21

the behest of the Trump administration.

27:24

Nexperia produced

27:26

quote unquote commodity chips for the

27:29

particularly auto sector of Europe and

27:32

this brought the auto sector to a

27:33

standstill. The point being here is that

27:36

the low value low margin chips in the

27:39

world the Chinese already dominate

27:41

probably uh yeah up to the the sort of

27:45

almost two-thirds of the actual chip

27:46

supply in the world now has quote

27:48

unquote been and I don't want to say

27:50

commoditized to the point where it

27:52

actually can't create a profit but where

27:54

the margins are very very thin and you

27:56

have to be incredibly efficient if you

27:59

want to stay in that space. So what's

28:01

happening is that we're seeing this uh

28:05

essentially march uh up the value added

28:08

ladder. Um and the Chinese are now

28:11

moving I think into the next stage which

28:13

is the um medium-value chips, 14 nm to

28:19

18 nm,

28:20

um that essentially have huge uses

28:23

across the world. I mean massive uses

28:26

vehicles, your cell phone, cell

28:29

phone towers I mean the areas where

28:31

those sorts of chips are are are

28:33

prevalent um are almost too many

28:37

to mention in the world of let's call it

28:39

the latest Apple iPhone um yes you want

28:42

to have one of those tiny chips um but

28:44

there are many more applications for

28:46

chips in the world now I'll give you an

28:48

example which is not uh being wholly

28:51

recognized yet but chips that are being

28:53

embedded in the smart factories of China

28:57

so that these smart factories can

28:59

actually operate almost remotely. Um

29:03

they don't have to have people in them.

29:05

Uh and those chips are not necessarily

29:07

um driven by a constraint about size.

29:10

Size is not always the issue. Um but

29:13

they need to be able to perform the

29:16

function. I'll get back to my earlier

29:17

comment. They're good enough. The result

29:20

is that smart factories are just

29:22

spreading across China at an

29:24

extraordinary rate. I mean just to

29:26

understand it last year China installed

29:28

more robots than the rest of the world

29:30

put together. So we are seeing this

29:32

this process take place across all sorts

29:35

of areas which are not necessarily again

29:39

wholly um acknowledged in the United

29:41

States. Now, I'm being a little cruel

29:43

here, and forgive me, but they're not

29:46

being acknowledged because there are not

29:47

many factories left in the United States

29:51

for their chips to be embedded in. And

29:54

the Chinese now with, by 2030, 45% of

29:57

the world's manufacturing production

29:59

versus 10% in the United States, the

30:02

Chinese are essentially moving their

30:04

industrial structure over to smart

30:07

chips, smart factories. Um, and they're

30:10

not these 2 nm, 3 nm chips that you're

30:13

going to find in the world of iPhones.

30:17

So, China has a, as I say, it's a

30:20

different path. It's a different road

30:22

and it's not fully recognized partly

30:25

because particularly in the United

30:27

States where consumption is 80% of GDP,

30:30

services is 80% of GDP,

30:33

most of the AI talk that comes out of

30:35

the United States is related to

30:38

services. Agentic AI that is essentially

30:40

going to allow you to buy an air

30:43

ticket on your phone. These are what get

30:46

all the talk in the United States. They

30:48

get talk in China. I'm not saying that

30:50

they don't, but there are plenty other

30:52

conversations taking place about where

30:55

chips can be used um embedded in drones.

30:59

Um to an extraordinary degree, chips are

31:02

being used to uh run uh the electricity

31:06

system to an incredible degree across

31:07

China — the solar systems, the

31:10

wind turbines. The application

31:13

for chips in China tends to be a much

31:15

longer list than the applications for chips

31:17

in the United States.

31:18

>> Thank you, Michael. So, we're recording

31:20

January 8th, 2026,

31:23

a little less than a year ago in late

31:26

January 2025, DeepSeek, a Chinese AI

31:30

model company that was actually, I

31:32

think, started by a Chinese AI hedge

31:34

fund manager of all, you know, places,

31:36

of all people. um uh launched their

31:40

DeepSeek model R-something... uh

31:43

>> R1. And that model and this fear

31:47

that emerged a little less than a

31:49

year ago uh caused a mini you know one

31:53

or two day crash in the US semiconductor

31:57

supply chain stocks particularly Nvidia

31:58

if I remember was down 16 or 17% at one

32:02

time now on January 1st 2026 so the

32:06

first day of the year while everyone was

32:08

uh partying in the west and the markets

32:10

were closed you're saying that they have

32:12

released a new model or a new paper that

32:14

it could have similar consequences. Tell

32:16

us about this.

32:17

>> Well, I think it's essentially um and

32:20

what Deep Seek is doing at the moment is

32:22

a sort of dance of the seven veils

32:24

before it actually drops R2 which is

32:27

going to do in my prediction just ahead

32:29

of the Chinese New Year which starts on

32:31

the 17th of February. But essentially

32:33

there have been a number of releases and

32:36

this is the last release I suspect big

32:38

release before that date. uh and the

32:41

biggest by far because what they've done

32:44

is they found a mechanism for

32:46

essentially um attacking uh the whole

32:51

idea of memory in the training process

32:54

of chips. So now they have found a way

32:57

where previously a particular chip

33:00

produced let's just say 100% of memory

33:03

that same chip now if properly arranged

33:06

within the software you only need 7% of

33:09

its power to produce that 100% that

33:12

previously you were able to get. So

33:15

essentially they have um increased the

33:17

power of uh a small chip by 15 times in

33:22

the training process.
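The arithmetic behind that "15 times" figure is simply the ratio of the two quoted numbers — a quick sanity check of the claim as stated here (illustrative only, not DeepSeek's own accounting):

```python
# Quick check of the figures as quoted in the interview (illustrative only).
old_requirement = 100  # previously: 100% of the chip's memory capacity used in training
new_requirement = 7    # now: only 7% of its power needed to deliver the same 100%

effective_multiplier = old_requirement / new_requirement
print(f"Effective gain per chip: ~{effective_multiplier:.1f}x")  # ~14.3x, i.e. roughly "15 times"
```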

33:25

And this essentially leads again towards the

33:28

commoditization of high-value chips

33:31

because they found a way round the whole

33:34

idea you need our chips because you need

33:36

100, and what the DeepSeek people said is, well,

33:39

actually you only need seven to do that

33:43

uh because that seven will give you a

33:45

100. Uh and it's all about the new

33:49

phrase that everyone is talking about

33:50

architecture and architecture is

33:52

happening both on the side of training

33:55

chips as well as on inference. In fact,

33:58

until you know a month ago, people

34:01

didn't really talk about architecture on

34:03

the side of training. Yes, there was

34:05

some, but they did talk about

34:07

architecture on the side of inference.

34:08

the Chinese not just the Chinese uh this

34:11

big company which uh has just been

34:13

bought by Nvidia in Singapore. Manus is

34:15

very good at architecture but it's

34:18

inference architecture. What DeepSeek did

34:21

this time was come up with an

34:23

unbelievably radical way of reinventing

34:27

the architecture on the training side of

34:30

the chips, which has largely been ignored

34:33

up until now. Now why I think this paper

34:36

which is part of a whole as I say seven

34:39

veils um is significant is that it's

34:42

setting things up for the release of R2

34:44

which as I say is probably going to

34:46

happen at the start of the next Chinese

34:48

year. Um we had another indication for

34:51

instance in mid December which only the

34:54

geeks really picked up on but the

34:56

Chinese — DeepSeek —

34:59

produced something called uh I'll get it

35:01

right, the V3.2 Special, which was

35:06

essentially a mathematical standalone

35:09

model

35:10

and it basically went to the top of the

35:12

benchmarks. Not every one of them but

35:15

nearly every one of them went straight

35:16

to the top of the benchmarks. So what

35:18

we're seeing is that once you put all

35:20

these seven veils together, and I'm not

35:22

going to bore you with all sorts of

35:23

acronyms as to what each of those veils

35:25

constitute, but you put all of those

35:27

together and then tie them up in what is

35:30

going to be coming out, I suspect in the

35:32

middle of February, um, and that is

35:35

going to be monumentally significant

35:37

because, uh, Deep Seek is going to, I

35:39

think, in most areas, go to very near,

35:42

if not at the top of every benchmark

35:45

that counts. um they're going to have

35:47

and you may not know the name of the

35:49

game is is is the number of parameters.

35:52

They'll have in excess of one trillion

35:54

parameters. Um they will have this

35:56

what's called uh mixture of experts

35:59

structure and they will have MLA which

36:01

is the

36:03

paper that we've just been talking about

36:05

dropped on New Year's Eve. Um, and

36:08

they'll put all of these together to

36:09

create an unbelievably powerful model

36:12

that is powerful both on the training

36:16

side and on the inference side and uh I

36:20

think intentionally um having even

36:23

greater effect than than R1 when it was

36:26

released a year ago. So there there's

36:28

two things training and inference like

36:30

let's say the old school um creating a

36:34

computer that can beat human beings in

36:36

in chess which I you know has existed

36:37

for close to 30 years now but training

36:40

is the process of getting the computer

36:43

to learn chess and develop its

36:44

strategies. Inference is, okay, you're

36:47

playing Garry Kasparov, now you have to

36:50

actually run. Western US models of

36:52

training have been extremely capital

36:54

intensive and the Chinese models appeared

36:58

to be far less capital intensive and

37:00

that's why Nvidia you know crashed 17%

37:02

when this deepseek news came out uh you

37:05

know in late January 2025 because oh my

37:08

god they don't need to spend that much

37:09

on Nvidia chips. I know there was some

37:12

doubt about that Michael is it really

37:14

true that they only spent you know, a

37:16

tiny fraction of of uh Nvidia chips.

37:19

>> I saw a paper yesterday said that Deep

37:22

Seek actually spent 1.5 billion on that

37:25

first R1.

37:27

And even if it did,

37:31

OpenAI

37:33

alone is spending — its own equity

37:35

contribution, not co-contributions from

37:38

other players — towards Stargate is 19

37:42

billion.

37:45

And yet what Deep Seek came up with is

37:49

far more radical than anything OpenAI

37:52

has ever come up with. I mean on a scale

37:54

of

37:56

20 to 30 times. And it did so — let's take

38:00

it to the worst possible extent 1.5

38:02

billion. I don't think it was anything

38:04

near 1.5 billion but I'll accept that

38:07

that forecast.

38:09

>> A lot of money but a tiny fraction of

38:10

what the US companies are spending. A

38:11

tiny fraction

38:12

>> tiny fraction. So the point being is

38:15

that um once R1 dropped last year, they

38:20

gave the model to Nature magazine in

38:22

London, which is one of the most

38:24

prestigious magazines in the world. And

38:25

over eight months, Nature tested that

38:28

model and in their September cover issue

38:32

last year came out and said every claim

38:34

that DeepSeek made as to what R1

38:38

could do was verified.

38:42

Now since then as I say they've dropped

38:44

a number of other upgrades and you've

38:46

had to be really buried in the whole

38:48

process to see each of these incremental

38:50

upgrades. I mentioned the one that

38:53

happened in December regarding the

38:54

mathematical capabilities. Um but I

38:57

think what's happening now is that there

38:59

is a cumulative effect of all these

39:01

upgrades that are going to be rolled into R2

39:05

and I don't know how much it's going to

39:06

have cost them to get to that spec.

39:08

I really don't. I'm somewhat um

39:11

skeptical of the claim of 1.5 billion

39:13

because I'm not sure that the hedge fund

39:16

uh that owns Deep Seek had that sort of

39:19

money to spend unless someone was

39:21

handing them some money slightly, you

39:23

know, through the back pocket. But in a

39:26

way, it's irrelevant

39:29

because none of the claims as to what

39:31

these models are capable of doing are being

39:33

disputed.

39:35

They're all showing up in the benchmarks

39:37

and they're all being subjected to

39:39

unbelievable peer review

39:42

and anyone who's looked at that paper on

39:45

the 31st December last year has come

39:49

back and said, "Yep, their claim is

39:50

absolutely spot on. They they've done

39:52

it. They found this way around this

39:55

whole problem that we've all been facing

39:57

for a long period of time, which is

39:59

what's called catastrophic

40:00

forgetfulness, where you're training a

40:02

model, you get up to a certain level,

40:05

um, and then you add more data into it

40:08

and it just forgets everything that it's

40:11

already learned. And they've essentially

40:12

created a very stable way for that model

40:16

to accumulate and to sort out

40:18

information and keep it properly

40:21

organized so that they continue to add

40:24

what we call scale data into that model.

40:29

And the result is they now have a very

40:30

very clever stable way to grow the

40:34

database of a model. And the result is

40:39

I'm I think it's pretty radical to be

40:41

perfectly honest and there are a number

40:42

of geeks out there and I'm not a geek.

40:44

I'm not a techie but I've read the

40:46

papers that I can understand and

40:49

virtually all of them have pretty much

40:50

confirmed uh what DeepSeek is claiming.

40:54

>> Later on I've got some potential push

40:56

backs about your thesis, but I want to

41:00

get into the nitty-gritty uh where you

41:02

know you're not a tech geek but you've

41:04

kind of become a little bit of a tech

41:05

geek. Uh you say the three assassins of

41:08

the US AI architecture — the US...

41:11

>> Oh, let's just call it chips generally.

41:13

>> Chips. Chips generally. Yeah. And

41:14

potentially the the bubble in AI in in

41:17

US uh private and public markets of US

41:20

AI. The three assassins you say are

41:22

physics, material science, and

41:24

economics. Let's begin with physics.

41:26

Why is that an assassin of

41:29

chips? Well, you get down to a certain

41:32

level where essentially the process by

41:35

which uh uh the the chip operates in

41:39

physics becomes unstable.

41:43

You have basically switches that are

41:45

either on or off.

41:47

But the electrons that control those

41:49

switches are able to slip through

41:52

because they are so small.

41:55

they can slip through and essentially

41:57

turn that chip into not an on or an off

42:00

but a maybe,

42:02

and that basically starts to question

42:06

um the robustness of the model.

42:09

So what it does is when you're getting

42:11

down to these incredibly small

42:16

things, the actual, you know, it may

42:18

look like a piece of steel or hard

42:20

silicon to you, but it's actually got

42:22

little gaps in it and these electrons

42:24

are finding their way through it. They use the

42:26

expression like ghosts through a wall

42:28

and they move through and go to the

42:30

other side and then essentially um turn

42:33

that particular switch into a maybe,

42:36

which really starts to question um at

42:38

what point can you continue to

42:40

miniaturize everything.

42:43

Um and still have the security of

42:45

knowing that when I want that switch to

42:46

say off it says off. It doesn't say

42:49

maybe.
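To see why those slips blow up as dimensions shrink, the standard textbook estimate for an electron tunneling through a thin barrier falls off exponentially with the barrier width — a rough illustrative sketch, with an assumed barrier height and ballpark widths, not figures from the essay:

```python
# Rough illustration of why electrons "slip through" as barriers get thinner.
# Uses the standard rectangular-barrier tunneling estimate T ~ exp(-2*kappa*d);
# the barrier height and widths are assumed ballpark values, not from the essay.
import math

hbar = 1.054571817e-34      # J*s
m_e = 9.1093837015e-31      # electron mass, kg
eV = 1.602176634e-19        # joules per electron-volt

barrier_height = 3.0 * eV   # assumed oxide barrier height (a few eV is typical)

def tunneling_probability(width_nm: float) -> float:
    kappa = math.sqrt(2 * m_e * barrier_height) / hbar   # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

for d in (3.0, 2.0, 1.0):   # barrier widths in nanometers
    print(f"{d:.0f} nm barrier: T ~ {tunneling_probability(d):.2e}")
# Each nanometer shaved off the barrier raises the leakage probability by
# orders of magnitude -- which is how an "off" switch starts to read as "maybe".
```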

42:51

>> You're saying that chips are coming up to a limit, a theoretical limit.

42:55

Chip sizes are getting so small

42:58

that basically um you know electrons are

43:01

going crazy in there and it gets so hot

43:05

that you you know need a lot of the

43:07

chips to be devoted to uh cabling

43:10

to you know uh control the thermal

43:13

output so so it doesn't overheat and and

43:16

ruin the chip. And and I will note

43:18

Michael, not you, but there have been

43:21

haters of Moore's law like over the past

43:23

20 years who have said Moore's law is

43:26

dead. Moore's law is going to, you know,

43:30

this drastic — every two years

43:33

the amount of transistors that we could

43:34

put in a semiconductor roughly

43:36

doubles, which has, you know, held true

43:38

since the 1960s. That's no longer going

43:40

to happen. And I want to say Moore's law

43:43

refers to the number of transistors in a

43:45

chip. It does not refer to compute

43:46

power, actual compute power. Actual

43:49

compute power has way more than doubled

43:52

um over the past 15 years because of

43:54

Nvidia and because of parallel computing

43:56

and the fact that all the electrons are

43:57

are you know going at the same time. The

43:59

wires are going at the same time you

44:01

know and Nvidia invented that. So you

44:02

know Jensen Huang, CEO of Nvidia, has said

44:05

that Moore's law is dead in the other way,

44:07

that like computing power has way way

44:09

way more than doubled every two years

44:11

because of that thing. But you're

44:14

making a critique and saying

44:16

finally like Moore's law is going to be

44:17

dead and that you can't double

44:20

the number of transistors every two

44:21

years. It's simply getting too small. It

44:22

went from 30 nanometers to 15 nanometers to

44:25

7 nanometers and now the latest Nvidia

44:27

Blackwell is 3 nanometers. You're

44:30

obviously it can't be 0 nanometers or

44:32

negative nanometers. That that kind of

44:35

there's not a ton more uh uh juice to be

44:38

squeezed out of that lemon.
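As a back-of-the-envelope check on that doubling claim, using commonly cited public figures (the 1971 Intel 4004 at roughly 2,300 transistors; modern flagship GPU dies on the order of 80 billion) rather than anything from the essay:

```python
# Back-of-the-envelope check of "doubling every two years", using commonly
# cited figures (Intel 4004, 1971: ~2,300 transistors). Illustrative only.
start_year, start_transistors = 1971, 2_300
end_year = 2021

doublings = (end_year - start_year) / 2          # one doubling every two years
projected = start_transistors * 2 ** doublings   # what Moore's law would predict

print(f"Doublings since {start_year}: {doublings:.0f}")
print(f"Projected transistor count in {end_year}: ~{projected:,.0f}")
# ~77 billion -- the same order of magnitude as actual flagship dies, which is
# why the law "held true" for five decades before running into the physics,
# materials, and economics walls discussed here.
```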

44:39

>> We talk of diseconomies of scale. There's

44:41

essentially diseconomies of physics and

44:42

diseconomies of material science. And at

44:45

some point, and you can't divorce this

44:47

entirely from the cost of being able to

44:50

achieve this, at some point it becomes

44:53

prohibitively expensive

44:56

without that much uptick in what you

45:00

mentioned compute to move from 3nm to

45:03

2nm

45:04

and that there are complications that

45:06

start arising in physics, in material

45:08

science. And one cannot

45:11

leave on one side the whole issue of

45:13

economics because ultimately that comes

45:15

in and that's probably where I come from

45:17

and spoils everything. So what we're

45:19

doing is we're seeing the diseconomies

45:21

of physics. We're seeing the

45:22

diseconomies of chemistry and yes there

45:25

are potential workarounds to use that

45:28

wonderful phrase but they are

45:30

unbelievably expensive and you ask for

45:32

instance and one critical player to ask

45:35

in this whole process is ASML in the

45:37

Netherlands. Can you carry on making

45:39

your EUV machines so much so that

45:43

you can actually start producing 2 nm

45:46

chips? and they will say we can

45:50

but it's complicated and it's not just

45:52

complicated it's almost prohibitively

45:54

complicated to do so people have talked

45:57

about changing things from silicon to

45:59

something else you know there are all

46:01

sorts of areas and one of the most

46:02

interesting areas potentially is

46:04

photonics although I should hasten to

46:08

add here is that China probably leads in

46:10

the whole area of photonics which is the

46:12

whole idea of embedding data in light

46:15

itself uh which is a completely

46:17

different way of thinking about chips. I

46:19

mean it's it's it's just moving into a

46:21

whole different space. Um and and and

46:26

it's still five, six, seven years away

46:29

from having anything that's remotely um

46:31

practical. But in terms of of research,

46:35

China probably leads photonics at the

46:37

moment. It's not an undisputed claim,

46:39

but it's probably a claim. Um and so in

46:43

that that is essentially saying forget

46:45

silicon we're we're going to move to a

46:47

post silicon world but while we still

46:49

live in that silicon world we are seeing

46:51

these diseconomies of physics

46:53

diseconomies of material science and

46:55

diseconomies

46:56

of economics and to some extent these

47:00

three assassins are working together not

47:02

consciously obviously but but there is a

47:04

sort of strange cooperation that's

47:07

happening between all three that are

47:09

making — as I said, I'm not going to say

47:12

that Moore is dead or Moore's law is dead,

47:15

but he's on his deathbed and these three

47:17

guys are standing around that deathbed

47:19

sort of rubbing their hands saying, you

47:20

know, your time is up, mates. And what

47:23

the Chinese are saying, well, listen,

47:25

we're not going to have a gunfight at

47:26

the 3 nm corral. It's just becoming

47:29

unbelievably expensive to play at that

47:31

game. And we're not going to win because

47:33

we just don't have the EUV machines that

47:35

come from ASML to be able to play in

47:37

that game. Let's move the battlefield.

47:40

let's fight this war in another space

47:43

and using other sorts of

47:46

mediums, and this is why this so-called SiP —

47:50

system-in-package —

47:53

these cognitive towers is now becoming

47:56

something which pretty much everyone

47:57

including Nvidia is now pursuing. So,

47:59

Michael, the game at which US chip

48:03

makers, primarily Nvidia, have excelled

48:05

and dominated and crushed the

48:07

opposition. That game is making chips

48:09

smaller and smaller and more

48:11

efficient. You're saying that that game

48:14

is something where China is saying we're

48:15

not playing anymore. And that the

48:16

advances in computing have mostly come

48:18

from making chips smaller and smaller

48:19

and smaller. Parallel computing, of

48:21

course. um that you're saying that in

48:23

the future the the gains are going to be

48:26

made from connecting the architecture

48:30

connecting the chips themselves um

48:32

something called advanced packaging so

48:35

allowing uh the chip to be 3D and that

48:39

whole system so the chips can talk to

48:41

each other rather than just the most

48:44

efficient chip because the most powerful

48:46

chip is no longer you're saying going to

48:48

be what is the driver of effective

48:51

usable compute which is what it's all

48:53

about and you noted in your piece with

48:55

that the you know former uh uh you

48:58

Google executive Eric Schmidt said that

49:01

the constraint of AI is not chips it is

49:05

power and electricity

49:07

>> and that's that's true I mean when you

49:09

see the amount of power that's going to

49:12

be required to run the likes of Stargate

49:14

and you've seen the pictures or these

49:16

maps of where all the data centers are

49:19

being put up over the United States. And

49:21

you don't live in Northern Virginia, but

49:23

if you did, you were facing some fairly

49:25

severe power shortages in the in the

49:27

next five years because of all the data

49:30

centers, many of them military related

49:32

um that have been and are continuing to

49:34

be erected uh in in in Virginia,

49:37

Northern Virginia. But there are other

49:38

areas. There are five or six what I call

49:40

hotspots all over the United States

49:41

where power issues are going to be very

49:44

profound. But it's not just about power,

49:48

though I completely agree with what Eric

49:50

Schmidt had to say. I think that is the

49:52

Achilles heel, the black swan,

49:54

whatever phrase you want to use that

49:56

potentially threatens um the US AI

50:01

model. And remember that China does not

50:04

face this constraint simply because they

50:07

have invested absolutely massively in

50:10

particularly renewable, but not only

50:12

renewable energy. um I mean massively um

50:16

and this is allowing them to not think

50:19

of um energy as a constraining factor at

50:23

all. Um they are able to do whatever

50:26

they want to do and they don't have to

50:28

think about energy. In fact the price of

50:29

energy has actually been falling

50:32

for the Chinese. So their particular

50:34

model which is what we call distributed

50:37

intelligence model as distinct from

50:39

concentrated intelligence model

50:41

personified by the likes of Stargate the

50:44

Chinese one is far less power hungry

50:46

anyway and to the extent that they need

50:49

that power they have it if you go to a

50:51

Chinese conference and it doesn't have

50:53

to be an AI conference and speak to all

50:55

the the geeks that are there the last

50:57

thing they're going to tell you about is

50:59

oh we're worried about our power

51:00

supplies the last thing. go to a US

51:03

equivalent conference and almost the

51:04

first thing they're going to talk to you

51:05

about is power.

51:07

>> What about the second assassin, material

51:09

science?

51:10

It's somewhat related and I always think

51:12

that there's a fairly thin line between

51:14

the physics and the chemistry but

51:17

essentially it's about degradation

51:21

of the materials

51:23

that at these incredibly small levels

51:26

with all the heat that you were

51:28

mentioning rightly um you're seeing the

51:31

materials starting to break down.

51:34

They're starting to um corrode if that's

51:38

the right word. It probably isn't in the

51:40

context, but it's something you and I

51:41

can

51:42

>> depreciate. How's that? How about that?

51:44

>> Well, depreciate. Yes, but that's we're

51:45

getting into the language of Michael

51:47

Burry there, but yes. All right, let's

51:49

call it depreciate in the sense that

51:51

they're no longer useful. If that's what

51:52

you mean by depreciate, yes, I

51:54

completely accept that they're no longer

51:56

useful. And Michael Burry will say that a

51:59

high-end chip has three years of useful

52:01

life. Um, Amazon will claim it's five.

52:04

Um, I don't know. Um, accountants

52:07

are going to be forced probably to

52:08

follow the Amazon line but uh

52:12

essentially depreciation

52:14

um uh corrosion happens uh at these uh

52:19

very small levels and that creates all

52:21

sorts of secondary issues like as you

52:24

mentioned heat um and the result is the

52:27

chip becomes less than useful. it starts

52:30

to break down. What we call yield, which

52:33

is the number of transistors that are

52:35

are operating within the chip at at full

52:39

strength, starts to fall fairly

52:40

dramatically. Um, and so it's all about

52:44

essentially the corrosion of the

52:47

metallic properties that exist in the

52:49

chip, particularly dare I say, silicon.

52:51

There are ways again of buying time. You

52:53

could I think it's called hafnium

52:55

or something like that that gets coated

52:57

on the chips.

52:59

>> Hafnium is one of those

53:02

periodic table metals that you and I

53:04

never got down to. Um but it's

53:06

nevertheless it it's it's it's it can

53:09

buy a little bit of extra time. But the

53:12

point being is that um we really are

53:15

fiddling. It's like, you know, as I

53:16

said, we're using we're it's life

53:18

extension drugs if if you want to think

53:20

about it in that context. Um, but it

53:22

ain't going to last for long.

53:24

>> And this is where we get to that third

53:26

assassin. Michael,

53:28

>> can I just add there? I went to my essay

53:31

and I'm going to read one sentence, two,

53:34

but it basically says everything that

53:36

I've just said, but very very

53:38

technically.

53:39

>> Transistors now require, and we're

53:41

talking really here about materials

53:43

science issues. Transistors now require

53:46

ghost-proofing such as hafnium oxide

53:49

layers. But by 2Nm even these are just a

53:53

few atoms thick. One missing oxygen atom

53:56

causes a short circuit, and uneven

53:59

deposition of that creates gaps that

54:01

invite electron tunneling. So

54:05

you're seeing what I'm talking about

54:06

here is we're we're reaching limits of

54:08

science both physics and chemistry um

54:11

that are starting to make get making

54:13

things smaller incredibly difficult

54:16

>> and this is where the third assassin

54:18

economics really comes in. uh you

54:21

referenced Michael Burry who has, you know,

54:25

reemerged and has made the the following

54:28

critique that the companies that are

54:31

spending massively on these chips they

54:35

put the capital up front and you know

54:38

that that is not recorded as at a loss

54:41

at all. It's you know net neutral. The

54:44

the cost comes and is depreciated over

54:46

the weighted life. So if you if you if

54:48

it had a weighted life of a 100 years

54:50

every year in its annual report that

54:52

cost would only be 1%. If it had a

54:54

weighted life of two years it would take

54:56

a 50% hit in the first year and a 50%

54:58

hit in the second year. The weighted

55:01

average life of chips and servers in

55:03

data center investments is my

55:05

understanding from like 2019 to 2021 it

55:08

had been three years. It was extended to

55:11

five years or maybe six years for some

55:14

companies. And it's my understanding

55:16

that a lot of that was for old CPUs that

55:19

actually was uh completely legit like

55:21

three years was too short and so it was

55:23

the it had been wrong and it was the

55:25

correct thing to to make it longer. The

55:27

critique is that for these newer GPUs

55:29

because the transformation and the

55:30

innovation is so rapid. uh you know five

55:33

or six years is not that relevant and

55:35

and the idea that in five years uh these

55:38

chips are still going to have uh you

55:41

know as serious value is is something of

55:43

a joke. I will also point out that you

55:45

know Michael Burry, a very very smart

55:47

investor but technically you know um

55:49

someone who had been saying that a lot

55:50

earlier than Michael Burry is Jim Chanos,

55:52

the short seller noted for his shorting

55:53

Enron and being early there. Um just a

55:56

plug I did interview him in uh December

55:59

of of last year about this very issue.

56:01

So we can link to that and people

56:03

definitely should should check this out.

56:04

And then also this advanced packaging

56:06

thing. Um I interviewed Catrini about uh

56:09

a researcher known as Satrini about it

56:12

and and basically it's it's it's

56:14

everything you're saying that uh the

56:17

these scaling laws and the improvements

56:19

are going to be coming not from the

56:22

power of the chip itself but from the

56:26

interconnection and the architecture and

56:29

basically so so that uh you know so that

56:32

so that you maximize the effective

56:34

compute
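
As an aside, the straight-line depreciation arithmetic described above is easy to make concrete. A minimal Python sketch, using a hypothetical capex figure and purely illustrative useful-life assumptions (none of the numbers are taken from any company's filings):

```python
def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Straight-line depreciation: the same capex is expensed evenly over the
    assumed useful life, so a longer life means a smaller hit to each year's earnings."""
    return capex / useful_life_years

capex = 10_000_000_000  # hypothetical $10bn of GPU and data-center spend
for life in (2, 3, 5, 6, 100):
    expense = annual_depreciation(capex, life)
    print(f"{life:>3}-year life: {expense / capex:6.1%} of capex expensed per year "
          f"(${expense / 1e9:.2f}bn)")
```

On these numbers, stretching the assumed life from three years to five or six roughly halves the annual charge, which is why the useful-life assumption matters so much to reported earnings if the chips in fact age out faster.
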

56:35

>> there's nothing I can say to dispute I

56:37

agree 100%.

56:38

>> I I think you you know Michael this

56:40

issue has been raised to Jensen Huang,

56:42

Nvidia CEO and he has said that with

56:44

every new Nvidia chip it it does get far

56:49

more efficient and that the uh you know

56:52

amount of energy it takes goes goes way

56:54

down per chip. to what degree is that a

56:58

uh you know a fair push back and a

57:01

justification of of the Nvidia's model

57:04

or or do you do you do you find issues

57:06

with that? Look, I think that he is

57:08

speaking correctly where it when it

57:10

comes to capability,

57:13

but as you probably know, the cost of

57:16

each of those chips and the next

57:18

generation chip is rising as a

57:21

percentage faster

57:24

than the useful compute that those new

57:28

chips are producing. Now, as an

57:31

economist thinking about that, he

57:33

basically says we're heading towards

57:35

some sort of crisis point where um you

57:40

can't just regardless of cost continue

57:44

to improve the the chip if the thing

57:47

that you really want it for

57:50

the usable compute is not rising at a

57:53

commensurate rate with the technology.

57:56

And this is where the economics really

57:59

does come in.

58:00

the diseconomies of scale derived in

58:03

part from the physics and material

58:05

science that you and I have talked from

58:07

um is now starting to weigh very very

58:10

heavily and I think that's where Michael

58:12

Burry is in part coming from. I haven't

58:14

seen, and I'll look it up, the Jim Chanos

58:16

interview, but there's a lot of other

58:18

people that have said this as well that

58:20

we're moving into a world where that

58:21

monolithic chip that Nvidia has been so

58:24

famous for um is unlikely to continue to

58:27

rule the roost for much longer. And the

58:30

replacement of the Nvidia chip, the

58:33

dominant Nvidia chip, is not so much the

58:35

new dominant player AMD, a competitor to

58:37

Nvidia, but rather a a custom ASIC chip.
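
A rough way to see that "crisis point" logic is to compare how fast chip cost and usable compute grow per generation. The growth rates below are invented for illustration, not Nvidia pricing or benchmark data; the only point is what happens when cost compounds faster than usable compute.

```python
# Illustrative per-generation growth: cost rises 60%, usable compute rises 40%.
cost_growth, compute_growth = 1.60, 1.40

cost, compute = 1.0, 1.0  # normalized to generation 0
for gen in range(6):
    print(f"gen {gen}: cost {cost:5.2f}x, usable compute {compute:5.2f}x, "
          f"cost per unit of usable compute {cost / compute:4.2f}x")
    cost *= cost_growth
    compute *= compute_growth
```

Under those assumed rates the cost of each unit of usable compute nearly doubles within five generations; reverse the inequality and the economics work again, which is why the relative growth rates, not the absolute capability gains, are the crux of the argument.
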

58:41

uh o or so are you saying that uh

58:45

basically these the companies that are

58:48

uh building the data centers and

58:49

creating the models are going to be

58:51

making their own chips and probably

58:52

hiring a company like Broadcom or

58:54

MediaTek in order to make to make their

58:56

own chip rather than just buying a chip

58:58

from Nvidia or AMD.

59:00

>> I think that's absolutely right. I think

59:02

what Amazon is doing with Trainium, what

59:04

Google is doing with its TPUs are

59:06

particularly interesting, but you can

59:07

buy them in from third parties. Yes. But

59:10

they're actually doing it inhouse. And

59:12

the chips that they're building,

59:13

designing are not

59:17

side by side as powerful as the ones

59:19

that Nvidia produces. But they're built

59:22

for purpose.

59:24

They work for what Amazon needs them to

59:28

do. They work for what Google needs them

59:32

to do. So, it's a bit like buying um I

59:37

don't know, a truly magnificent

59:40

Mercedes-Benz that's got off-road

59:42

capability, but the reality is is that

59:44

you're basically going to drive around town

59:47

with it. It's just not needed to have

59:51

that off-road capability, but it adds

59:52

huge amounts to the cost. and what what

59:56

Nvidia has come up with, sorry, Amazon and

59:59

Google and I'm oversimplifying here but

60:01

they come up with a chip that works for

60:04

the specific needs that they have for

60:07

that chip. Now a lot of these chips are

60:10

being used not just on the training side

60:12

but on the inference side and this is

60:15

something which by its behavior uh

60:18

Nvidia has started to recognize and then

60:20

moving over to chips that are more

60:24

geared towards achieving success in

60:26

inference but also the software that's

60:29

required to get the best out of those

60:32

chips, which is why they bought Groq

60:35

with a Q

60:37

>> um That was precisely a recognition that

60:41

Nvidia is now stopping from being simply

60:44

a supplier to other big

60:48

chip based tech companies actually

60:51

becoming a player. It's it's it's

60:53

building its own stack from hardware

60:55

through to software. And so essentially

60:58

um it's starting to shoot itself I think

61:00

in its own revenue foot because it's

61:02

starting to compete with its best

61:03

customers. Um, and that is something

61:06

which can only go on for a certain

61:08

period of time. I mean, if I was Amazon

61:10

at the moment, if I was Google at the

61:12

moment, I would just say to my chip

61:14

development department, full speed

61:15

ahead, guys. We can no longer rely on

61:17

Nvidia because they're actually trying

61:19

to become a competitor to us. Um, and so

61:23

I think there's a very interesting

61:24

low-level civil war breaking out in the

61:26

United States at the moment between the

61:28

big players in the world today. What do

61:31

you think is going to happen to the US

61:34

model providers? So not talking about

61:35

Nvidia, but I'm talking about OpenAI,

61:38

I'm talking about Gemini of Google, I'm

61:40

talking about Anthropic, uh, as well as

61:42

the other, let's call them lesser

61:44

players. Where are they going to be in

61:46

three to five years in your view on the

61:49

spectrum from they have a product, it's

61:52

you know, modestly profitable, somewhat

61:54

of a success, but not the lights out

61:56

versus this company is not going to

61:57

exist anymore. Well, um I I please don't

62:00

think I'm trying to be a stock promoter

62:01

here, but the model I like most at the

62:03

moment is Google's um because um they

62:07

have uh and I think they're doing

62:08

something which is again happening at

62:11

the very early stages but they appear to

62:14

be essentially um courting Apple at the

62:17

moment and bringing Apple as in in as an

62:19

ally. The great thing about that is that

62:21

it can't be seen as competitor or from

62:24

an antitrust perspective. it just to be

62:26

seen as a an ally. So, um I like what

62:30

Google's doing. They have a very

62:32

powerful model. Gemini is a very

62:34

powerful model. They have an

62:35

unbelievable distribution capability.

62:38

They actually still technically own

62:40

Android. And now they are quietly cozying

62:45

up to the other great phone based

62:48

software company, Apple.

62:51

So, I like what they've got most of the

62:54

pieces of the jigsaw puzzle in place

62:56

already. They still need to work hard on

62:59

all of them, but nevertheless, they seem

63:01

to be putting it all together almost

63:04

better than anyone else at the moment.

63:05

Of course, Nvidia, as I said, has broken

63:07

ranks with its old model and is now

63:09

trying to do all of these things as

63:11

well. But the orphans, and I would think

63:15

of Anthropic as an orphan,

63:17

um I think they're going to have a tough

63:19

time of it staying independent. Um I

63:23

think unless

63:25

OpenAI has the likes of Microsoft

63:28

behind it, I wouldn't be

63:32

and of course I suppose um the big

63:34

Japanese um companies that are

63:36

supporting open AI, but I'm not I'm not

63:39

a huge fan of open AI. I don't think

63:41

it's going to be a winner in this setup.

63:43

I think they're essentially taking on

63:45

more uh capital cost um than they will

63:48

able to be able to generate sufficient

63:51

revenues from. So I think OpenAI has got

63:54

its work cut out for it to an

63:56

extraordinary degree. Amazon is is an

63:59

interesting play at the moment. Um it is

64:01

doing what Google is doing, the Trainium

64:04

chip for instance, but they don't have

64:06

the consumer reach that Google has.

64:09

doesn't have a a browser like Google.

64:12

>> Michael, who what is Amazon's model?

64:15

>> They are starting to make their own

64:16

chips.

64:17

>> Their own chips, but they don't have a

64:18

model. I think they they own a little

64:20

>> They don't have a model. Absolutely. No,

64:22

but Amazon is as as I said, it's it's

64:24

it's an interesting and I think probably

64:27

I'm making a prediction here, but but

64:29

you know, maybe Amazon will buy

64:31

anthropic and then suddenly it will

64:32

jumpst start its model position. the the

64:35

the point is Amazon is essentially now

64:37

offering um a data service um data

64:42

centers but its data centers are

64:44

increasingly being offered to third

64:46

parties um and it's doing that with its

64:48

own chips but all I will say is that you

64:50

know not nothing compared to Google

64:53

which I think is really doing a great

64:54

job at the moment um but I think that uh

64:58

Amazon has got an interesting one and

64:59

they would be for me a potential um

65:02

acquirer of one of the models the orphan

65:04

models as I like to think of them that

65:06

we're talking about. I mean for instance

65:08

another one out there is meta. Um I mean

65:11

meta's had a a disastrous year in my

65:14

humble opinion. I mean the whole llama

65:16

story which was and I'm using the

65:18

appropriate euphemism here put out to

65:20

grass in August last year. Um llama is

65:24

an orphanford now. It's a good or

65:26

ironically it's an open-source or but

65:29

nevertheless

65:31

uh it hasn't been improved. not that we

65:33

know of uh in any material way since

65:35

August last year and Mark Zuckerberg

65:37

seems to be going down a completely

65:39

different path now and I I I'm not

65:41

exactly sure of what that path is but

65:45

Meta would be another company that for

65:47

me would on the basis of current

65:50

behavior be struggling in five years

65:51

time.

65:52

>> So so Amazon does not currently I

65:55

believe have their own model or any

65:56

model that is you know serious they are

65:58

a huge cloud provider. they were you

66:00

know the first really uh large cloud

66:02

provider. Microsoft is is now

66:06

>> and increasingly Google as well. You

66:09

know the cloud computing is a profitable

66:12

business uh and quite growing rapidly in

66:15

particular is growing rapidly now but

66:17

how much of that is because the

66:19

customers are OpenAI and all of

66:22

these other unprofitable AI startups. So

66:24

everyone says the demand for

66:26

compute is so high the demand for

66:28

compute is so high. So you know data and

66:30

that is demand for people uh becoming

66:32

customers of data centers but how much

66:35

of it is you know real and sustainable.

66:40

>> You ask a very profound question which

66:41

actually leads us even back to Nvidia.

66:44

Nvidia might be immensely profitable and

66:46

indeed it is immensely profitable at the

66:48

moment but are its customers profitable?

66:52

And so to

66:53

>> Michael sorry an an amazing question I

66:56

want to say technically a ton of its

67:00

customers are immensely profitable like

67:03

Microsoft

67:04

>> yeah but but the customers of its

67:05

customers are not profitable that's

67:07

although and to the extent that its

67:10

customers are profitable they're not

67:11

generally speaking always very

67:12

profitable from their artificial

67:14

intelligence activities I mean to the

67:17

extent that um uh its customers u might

67:21

be uh doing well. They're able Meta is

67:24

able to subsidize uh its activities in

67:27

AI because of the advertising that it

67:29

gets from Facebook. But if you actually

67:32

look at if you can compartmentalize it

67:36

when I ask the question again,

67:39

how profitable are Nvidia's customers

67:44

from their AI activities? It's a much

67:48

more complex question to answer. they've

67:50

got associated areas which can subsidize

67:53

for now um those areas but one of the

67:56

interesting things is is that we've seen

67:58

a lot of the companies meta being one of

68:01

them move out of the fact that they

68:03

could finance their AI activities from

68:06

free cash flow to now having to borrow

68:09

again a slight warning sign echoes of

68:13

1999 2000 I don't want to make too great

68:16

a parallel, I'm not Michael Burry,

68:19

nevertheless

68:20

a warning sign. The amount of debt

68:22

that's creeping into the system both on

68:24

and off balance sheet at the moment um

68:26

should be of concern. It's particularly

68:29

of concern in related areas like dare I

68:32

said Oracle and Pwe.

68:34

>> Yeah. The point being is it's part of

68:36

the ecosystem. So one has to look to

68:38

some extent of the health of the

68:40

ecosystem as a whole. Though one can

68:42

recognize that there are parts of that

68:44

ecosystem that ostensibly are very

68:46

healthy at the moment.

68:47

>> Yes. And Michael, it has been said by by

68:49

others as well as I have said the

68:51

following statement that the amount of

68:54

the money being spent on AI to people

68:56

who are building the data centers and

68:58

buying the chips are among are the most

69:02

profitable and you know largest

69:04

companies that have ever existed. I

69:06

stand by that claim in its technicality.

69:09

I want to add the caveat that the

69:12

customers of those immensely profitable

69:14

companies, namely Microsoft, Amazon, and

69:16

Google, those c customers are often

69:19

VC-backed companies that are losing a

69:22

gajillion dollars a year. That's that's

69:23

a technical term. So, so the customers

69:26

of Nvidia are making money. The

69:28

customers of the customers of Nvidia are

69:30

not making money.

69:31

>> I'm happy to be go with your

69:33

qualification.

69:34

>> Yes. And and this example you said of

69:37

Meta buying a ton of Nvidia chips in

69:39

order to make its own process better and

69:40

you know to to serve ads with AI ads

69:43

that is a somewhat rare scenario. I

69:46

think a lot of it is cloud computing

69:47

that is profitable but the customers of

69:50

that cloud computing uh are are losing a

69:52

a ton.

69:53

>> I I'm happy to accept your

69:55

qualification. No, I'm not going to

69:56

speak.

69:56

>> And so, so you you make a lot of uh

69:58

military analogies and you basically

70:01

compare the US architecture and and

70:04

Nvidia to um the German tanks

70:10

during World War II, which were

70:12

extremely effective tanks. It's just

70:15

that the the German economy and

70:17

industrial uh uh uh powerhouse was

70:21

unable to make enough of them compared

70:23

to the Soviet tanks and the American

70:25

tanks that the tanks were maybe not as

70:27

powerful but they were able to produce

70:29

them at scale and you know ultimately

70:30

led to uh defeating uh uh uh the Germans

70:34

thankfully. Tell us about that uh

70:36

analogy. There's a very famous infamous

70:39

apocryphal story of a rather put out

70:43

German tank commander who said, "One of

70:46

our tigers is worth four Shermans. The

70:48

problem is the Americans always bring

70:49

fire." Now that saying has been

70:53

subjected to scrutiny and it doesn't

70:56

hold precise water but the concept

70:59

everyone agrees that in the end if you

71:03

can mobilize enough materiel,

71:08

um you can overwhelm people who have

71:11

oneonone better pieces of equipment than

71:15

you do and this is something which the

71:18

Chinese are essentially doing now when

71:20

it comes to chips, and even David Sacks in

71:23

the White House had admitted as much to

71:25

this that that China doesn't need our

71:28

chips because what they do is they

71:31

essentially amass so many chips from

71:33

Huawei that they can outshoot in terms

71:35

of usable compute your earlier term um

71:39

an Nvidia cluster, which is really what

71:42

we're talking about. So the the Huawei

71:43

supercluster versus the Nvidia cluster

71:46

um there are just so many more Huawei

71:48

chips in that cluster and the net effect

71:50

is that it outshoots the Nvidia cluster

71:53

and that's what has started to happen

71:57

in China. Now, it's not fully

72:00

operational at the moment. And this

72:02

whole will they won't they story that's

72:05

coming out of China at the moment with

72:06

regards to will they allow uh the H200

72:10

or the H100 to be imported from Nvidia

72:13

uh is part and parcel. It's caught in

72:15

the crossfire, to mix my metaphors, but not

72:17

entirely um in this whole process at the

72:20

moment because the Chinese feel they're

72:23

close. My own estimate is that come

72:26

2028, they will have met with parity,

72:30

being able to match

72:32

perhaps on scale if not on quality,

72:35

anything that can be thrown up by the

72:37

likes of the US. The amount of effort,

72:41

of money of resources that are being

72:43

mobilized to the producing of just huge

72:46

numbers of chips in China

72:50

is such that while it's touch and go and

72:54

the comparison today, and David Sacks may

72:56

be right or it may be wrong or may be

72:57

technically right but not practically uh

73:00

right but nevertheless in two years time

73:02

he absolutely will be right my own view

73:05

is that what China's setting itself up

73:07

to do is to tide itself over it

73:10

basically needs to buy time probably two

73:12

years and it may take a dollop of H200s

73:16

H100s from Nvidia for 2026 and 27 but by

73:21

2028 it won't need those chips any

73:23

longer. And it's not that they will be

73:24

able to produce better

73:26

chips. It's just that they're going to be

73:28

able to produce massively more.
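
The Tiger-versus-Sherman point can be put in numbers. Everything below is invented for illustration (the chip counts, per-chip throughput, and utilization figures), and it deliberately ignores real-world constraints such as interconnect bandwidth, software maturity, and power, but it shows how a cluster of weaker chips can still deliver more usable compute in aggregate.

```python
def cluster_compute(chips: int, per_chip_pflops: float, utilization: float) -> float:
    """Effective cluster throughput in petaFLOPS: chip count times per-chip speed times utilization."""
    return chips * per_chip_pflops * utilization

# Hypothetical clusters: fewer stronger chips versus many weaker ones.
fewer_stronger = cluster_compute(chips=10_000, per_chip_pflops=2.0, utilization=0.60)
many_weaker = cluster_compute(chips=40_000, per_chip_pflops=0.8, utilization=0.55)

print(f"fewer, stronger chips: {fewer_stronger:,.0f} PFLOPS effective")
print(f"many weaker chips:     {many_weaker:,.0f} PFLOPS effective")
```

With these made-up figures the larger cluster of weaker chips comes out ahead, which is the sense in which quantity can outshoot per-chip quality.
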

73:30

>> You have several addenda in your piece.

73:33

The first is a story which I love takes

73:36

me back 20 years when I when I read the

73:38

story you know as a child of the the

73:41

Indian tale about the king that uh you

73:44

know said I will grant you any favor and

73:47

the guy said give me a grain of rice on

73:49

day one on day two double it and then

73:51

double it on day three. And basically by

73:53

the end of the month or by the end of

73:55

two months uh it was millions of trillions

73:57

of grains of rice. So you know the possible

74:00

number I believe that is a quadrillion.

74:03

H I love that tale. It's the story of

74:05

compound interest. How does it apply to

74:06

what we're talking about right now?

74:07

>> Well I look I just wanted to uh

74:10

essentially one of the problems that

74:12

happens often in the whole area of um

74:17

talking about AI is that we get drowned

74:19

in big numbers. So, I essentially wanted

74:22

to go to one of the biggest numbers I'd

74:23

ever seen, which is the number of grains

74:26

of rice that will be on the 64-square

74:28

chessboard. Um, and essentially use that

74:32

um to explain a story. Um, and in fact

74:37

it's a number that runs to

74:44

some 20 digits on that last

74:49

uh 64th square. Um and the point that I

74:53

was making here is that to some extent

74:58

uh the numbers that are being talked

74:59

about by the likes of Open AI every now

75:03

and again start to uh almost approach

75:08

the same sort of level of absurdity.

75:10

Eventually you run out. You can't

75:13

mobilize the amount of rice that's

75:15

required to cover that 64th square. Um I

75:18

came across a statistic and I'm just

75:20

going to quietly bring it up uh

75:22

yesterday

75:24

when talking about we've mentioned it

75:26

earlier in the context of what Eric

75:28

Schmidt had to say but of energy and on

75:32

on the current level of um energy needs

75:36

that OpenAI has. They need to increase

75:38

their energy capacity over the next

75:41

eight years by I kid you not 125 times.

75:45

Now, these are the sorts of compounding

75:48

numbers that eventually

75:51

cause me to say enough's enough. This

75:53

isn't possible. You can't carry on like

75:55

that. Um, and

75:58

the the whole process starts to come off

76:00

the rails. I'm sure that that Indian

76:02

prince or king uh eventually when things

76:05

were starting to get to the 24th square, was

76:07

saying, "Oh my god, I'm going to

76:08

bankrupt this nation." you know, the

76:10

number of grains of rice that's on the

76:11

24th square is just more than three

76:15

years worth of production. And I think

76:18

that that same sort of logic eventually

76:21

overtakes the likes of Open AI because

76:24

the numbers if they have to increase

76:27

energy consumption 125 times over the

76:30

next eight years,

76:32

are they mobilizing

76:35

or at least is the system mobilizing

76:39

the amount of energy supply to be able

76:40

to meet that demand?

76:43

And I I think that those it was just a

76:46

way of playing with the numbers, the

76:48

power of compounding and to use it in

76:51

this context and to say um I think the

76:55

numbers being talked about here are just

76:57

um off the charts and are not realistic.
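
Both numbers in this passage can be checked in a few lines. The 125-times figure is the requirement Michael cites; the sketch below simply computes the chessboard totals and the compound annual growth rate that 125x over eight years implies.

```python
# Grains of rice on the chessboard: 1 on the first square, doubling on each of 64 squares.
last_square = 2 ** 63            # grains on the 64th square alone
total = 2 ** 64 - 1              # cumulative total across all 64 squares
print(f"64th square: {last_square:,} grains ({len(str(last_square))} digits)")
print(f"whole board: {total:,} grains ({len(str(total))} digits)")

# A 125x increase in energy capacity over eight years implies this compound annual rate:
annual_growth = 125 ** (1 / 8) - 1
print(f"implied energy growth: {annual_growth:.1%} per year")
```

The last square alone holds roughly nine quintillion grains, and 125x over eight years works out to energy supply compounding at a little over 80% a year, which is the scale of mobilization being questioned here.
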

77:00

So, Michael, a key point you're saying

77:01

is that uh all the huge, you know, AI um

77:05

AI bulls say, Michael, the the revenue

77:08

is going to grow exponentially. And

77:09

you're saying, of course, but the costs

77:11

are growing exponentially as well. And

77:13

that is something that wasn't true of

77:14

software. With software,

77:16

you know, the revenues grow

77:17

exponentially, but the costs stay

77:19

somewhat fixed over time. And yes, you

77:21

have to depreciate the the the

77:24

commissions you give to sales people,

77:25

etc., etc. yada yada. But software, you

77:27

know, as it existed in the US over the

77:29

past 25 years, is a ridiculously good

77:31

business. Um,

77:33

>> If Microsoft produces another copy of

77:35

Windows, it costs them an

77:37

infinitesimally small amount of money.

77:39

>> Mhm.

77:40

>> So what they can sell that for,

77:43

which is, as you and I both know, quite

77:45

a lot, compared to what it costs them to

77:48

produce, gives them spectacular margins.

77:50

Making your point. So, Michael, you now

77:53

I want to get to the point where I I

77:55

want to give potential counterarguments.

77:58

Some of the counterarguments are going

78:00

to be what if you're wrong about what

78:02

you said. Some of it I what I'll start

78:03

out with is let's say you're right that

78:07

the Chinese AI architecture is is going

78:10

to reach the scale of US and soon um

78:13

exceed it uh especially when you adjust

78:15

for costs and inputs.

78:18

Why does that mean automatically that

78:20

China is going to dominate AI? You know,

78:22

I'm, you know, not a tech geek at all.

78:24

I'm the furthest thing from it. But I

78:25

know that many tech geeks, people who

78:27

are tech geeks say that Linux is in many

78:31

uh areas much more preferable to Mac or

78:34

to Windows. Uh people say yes, Android

78:38

and all this open source stuff much

78:40

preferable to the iPhone and iOS.

78:41

And you started our conversation by

78:43

noting the success of Linux in

78:45

operating systems and the success of uh

78:48

um Android over micro uh Microsoft and

78:52

Apple. But you know the

78:56

Microsoft and Apple are still enormously

78:59

profitable enterprises and it seems like

79:01

the power of American technology

79:04

companies to extract uh monopoly rents

79:08

or monopoly like rents from their

79:10

software has been true despite the fact

79:12

that yes there are cheap alternatives

79:14

whether in the case of Microsoft it is

79:16

through uh you know somewhat

79:18

anti-competitive practices maybe or in

79:20

the case of Apple it's in the fact that

79:21

people brand people just associate Mac

79:24

with good things and therefore they use

79:25

it way more often and and you know maybe

79:28

not in terms of the amount of people

79:30

amount of customers but in terms of

79:31

dollars and particularly profits you

79:33

know Apple Apple is dominant there why

79:35

isn't that going to be the case for you

79:36

for um for AI

79:38

>> look it may well continue to be

79:44

but it's a closed ecosystem that we're

79:46

talking about and Apple has got an

79:49

amazing mechanism for essentially

79:51

extracting

79:52

rent from what is essentially a closed

79:55

ecosystem.

79:57

But it gets back to my earlier comment

80:00

about not just the US world but the

80:02

world's world.

80:04

The world world is seeing this

80:06

differently. And I just happen to live

80:07

in one part of the world that's seeing

80:09

it differently. And we're not just

80:11

talking geopolitics here. We're talking

80:13

geoeconomics.

80:15

We're seeing what Indonesia setting up a

80:18

sovereign AI system which it's announced

80:21

it will do. And yes, it will get some

80:22

technical support from some of the US

80:24

players, but when it comes to the

80:26

software, it looks as if it's basically

80:28

going to go with Qwen.

80:30

And the point being here is that the

80:32

software comes in inverted commas for

80:36

free.

80:38

Now,

80:40

we know the reason why

80:43

the top 100 supercomputers in the world

80:45

use Linux is that Linux comes for free.

80:50

That was not the case in the mid-

80:51

1990s where basically it was Microsoft

80:54

that was supplying the software to the

80:56

what supercomputers were around at the

80:58

time. But what has happened in the last

81:01

20 years, 25 years, is that the geeks

81:05

that run the world supercomputers

81:07

basically, and there's another dimension

81:10

here, but it's an incredibly important

81:12

dimension, preferred an opensource

81:15

option which they then could manipulate

81:17

in order to build that supercomputer in

81:20

the way that they wanted it to look.

81:24

And this is critical that what we're

81:28

seeing is that the the usability

81:32

of open source stroke open weight and

81:36

open weight is not as flexible as open

81:38

source. open source is unbelievably

81:40

flexible. Um is that

81:44

it, you know, it will will it's going to

81:47

win in the wider world.

81:50

And if all you're saying is yes, but

81:53

Apple will continue to be able to

81:54

extract huge profits from its core

81:56

markets. And by the way, I I need to

81:59

correct what I said earlier. There are a

82:00

couple of extra markets in the Apple

82:02

world, which is South Korea and Japan.

82:04

and I think I missed them out but

82:06

nevertheless that's six out of what does

82:08

CNN say that there are you know 200

82:11

territories and countries um the rest of

82:14

the world is moving very heavily towards

82:16

using Chinese software because it's free

82:20

and you know there is free is free it's

82:23

a very very powerful model the point is

82:26

that there are other ways in which

82:29

people are going to have to android is

82:30

the classic example here there other

82:33

ways in which people are going to have

82:34

learned to monetize the language. And it

82:37

is a language we're talking about here.

82:39

It's like English. You know, you and I

82:41

don't actually pay anybody a tax for

82:44

using English. But by using English, you

82:47

and I are often able I can go and make a

82:49

speech and thereby monetize my use of

82:52

that English and give myself an income.

82:56

And it's the same thing. It's

82:57

essentially becoming a a software

82:59

language that's available for free.

83:03

And this is where I think that

83:07

the clash will ultimately arise. Now

83:08

what we may end up with, and I'm not

83:11

disputing this, is a bifurcated world

83:14

with these reinforced islands that Apple

83:17

can occupy, reinforced islands that US AI

83:21

can occupy,

83:23

six or seven or eight of them. But the

83:26

rest of the world which of course is

83:28

growing economically much faster than

83:29

those reinforced islands

83:32

um will be opting for something else.

83:35

And where does memory come into this? I

83:38

know there is a giant squeeze in memory.

83:41

The stocks of these companies like

83:43

Micron or SanDisk are up, you know, up

83:45

300 400 500%. It really is ridiculous.

83:48

You referenced earlier that DeepSeek's

83:52

memory usage or or demands are down 93%

83:55

from 100% to 7%. How is it so much more

83:58

efficient as well?

83:59

>> The MLA architecture that they've come

84:01

up with and really I'm not a geek so I'm

84:03

not getting into the specifics here

84:05

essentially allows them to pack down the

84:07

usage

84:09

of the space

84:12

far more effectively, far more

84:14

efficiently.

84:16

So much so that they don't need to crowd

84:17

the memory with a hundred. They just

84:20

crowd it with seven if crowd is the

84:22

right verb. Um the point being is that

84:25

memory suddenly becomes scalable

84:29

whereas previously it didn't. But I

84:31

fully accept

84:33

that in the US ecosystem at the moment

84:37

memory is a huge issue. In fact it's

84:39

also an issue in the Chinese ecosystem.

84:41

I'm not going to deny that. But after

84:43

MLA not to the scale that it is in the

84:45

US. which is why you're absolutely right

84:47

to reference the the memory suppliers

84:50

the those people who provide uh that

84:53

capability the AMDs of the world um

84:56

you're absolutely right to reference but

84:59

the point is that again in this

85:00

bifurcated world that I've talked about

85:03

um they're finding a way of doing things

85:06

differently

85:07

and their demand on memory is nothing

85:10

like let's give it three years for MLA

85:14

to roll out

85:15

properly is going to be nothing like it

85:17

is today compared to if the US does not

85:21

go down this path what it will be for

85:22

the US but I have to say I think that a

85:26

lot of because MLA is essentially

85:28

available on an opensource

85:31

platform namely Deep Seek um most of the

85:34

US the Googles and the or should I say

85:37

the neatrons and the Geminis and the

85:39

Open AIs they're going to be looking at

85:41

what this MLA is all about and most

85:45

likely incorporating it into their own

85:48

next generation software because it's

85:52

architecture, it's available, it's a way

85:54

of doing things. It's not something

85:55

that's essentially owned by the Chinese.

85:58

Not at all. Anybody can use it. So I

86:00

think that the memory crunch

86:04

which we are seeing at the moment will

86:06

continue to exist for a couple of years.

86:09

But if I had to make a forecast, I'd say

86:11

it starts to ease by then.
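
The mechanism behind that 93% figure is multi-head latent attention (MLA): instead of caching full keys and values for every attention head, the model caches one compressed latent vector per token and layer. A back-of-the-envelope sketch with assumed dimensions, illustrative only and not DeepSeek's published configuration:

```python
# Rough KV-cache footprint per token: standard multi-head attention vs a latent-compressed cache.
# All dimensions are illustrative assumptions, not DeepSeek's actual architecture.
n_layers, n_heads, head_dim = 60, 64, 128
latent_dim = 1024          # width of the compressed KV latent per token, per layer
bytes_per_value = 2        # fp16 / bf16

mha_bytes = n_layers * 2 * n_heads * head_dim * bytes_per_value   # separate keys and values
mla_bytes = n_layers * latent_dim * bytes_per_value               # one shared latent

print(f"standard MHA cache:      {mha_bytes / 1024:.0f} KiB per token")
print(f"latent-compressed cache: {mla_bytes / 1024:.0f} KiB per token")
print(f"reduction:               {1 - mla_bytes / mha_bytes:.0%}")
```

With these made-up dimensions the cache shrinks by roughly 94%, the same order of magnitude as the 93% reduction cited; the exact figure depends on the real layer count and latent width.
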

86:13

>> And so you said that companies around

86:15

the world can use this Chinese open

86:17

source architecture and that they're not

86:20

sharing their data with China. Because I

86:22

imagine a lot of companies and people in

86:24

particular would say, "Okay, I

86:26

understand if I'm using all this US

86:27

technology that all of these American

86:29

corporations have my data and they're

86:31

exploiting that and I'm not happy about

86:32

it, but it's better than the Chinese

86:34

government having it and you know using

86:37

that and there being some geopolitical

86:38

fears there. Maybe I'm wrong. Maybe I'm

86:40

wrong about that. But are are you saying

86:41

that people can use these Chinese things

86:42

and and not share the data with China?"

86:44

>> Yes. Is the is the big answer. But I'll

86:47

take it one stage further. If you've got

86:49

it on your phone, you can be offline and

86:51

still use the software. And so if you're

86:53

offline, there's no way you can be

86:55

sharing the data. And that's the

86:56

extraordinary thing about the

86:58

open-source network is that it is very

87:01

distributed. I used that term earlier.

87:03

They have this phrase translated into

87:06

English called the edge. And essentially

87:08

the connectivity of Chinese software to

87:10

the edge to my cell phone here

87:14

is incredible. And the interesting thing

87:17

about that is that

87:19

one of the problems that uh

87:23

US AI models are facing at the moment is

87:25

called data exhaustion. They're running

87:26

out of high quality data to scrape.

87:29

Wonderful verb. And they're looking to

87:33

create um their own forms of data,

87:36

artificial data, synthetic data, which

87:39

they can do. But the reliability of

87:41

synthetic data is is is questioned. I

87:44

won't say it's not worth anything but it

87:47

is questioned and hallucination

87:51

imagining things as as software

87:53

sometimes does uh is said to be more

87:56

likely to happen with synthetic data

87:59

than with real

88:01

it makes sense.

88:03

>> The point about being connected to the

88:05

edge is that the data that's coming in

88:08

from the edge is

88:10

real world everyday data.

88:14

And so what's happening is that the

88:15

extent that Chinese software is updating

88:18

itself, which it can do by virtue of it

88:21

being open source, it's updating itself

88:24

with a whole new supply of fresh air

88:26

data that's coming in from the edge.

88:29

Now the availability of that data though

88:31

there are some interesting pieces of

88:33

software being developed in the US

88:35

ecosystem at the moment that potentially

88:37

can mimic that. And so I don't say that

88:40

it's not going to be possible for US closed-

88:43

weight models to somewhat emulate that

88:45

idea. But as it stands at the moment, the

88:48

Chinese just do it easily. So they

88:50

basically pick up on on edge data to

88:54

refresh their models, fresh air I call

88:56

it. Um that is not accessible by and

88:59

large to the US closed-weight models. And

89:02

so it sounds like you think a lot of US

89:05

closed source models in particular open

89:06

AI are going to have a tough time.

89:09

Let's put it that you know you don't

89:10

want to use the word screwed but are

89:12

going to have some challenges. What

89:13

about Google? You said some you know

89:15

some maybe nice things about Google and

89:17

you know you said I don't want to be a

89:18

stock promoter. I think with this

89:19

interview uh no one's no one's going to

89:21

accuse you of promoting uh uh you know

89:23

American securities at least. Um but

89:25

Google I believe uses something called

89:28

uh TPUs, tensor processing units that

89:31

they invented and Google buys GPUs,

89:34

graphic uh uh processing units from

89:36

Nvidia primarily for its external cloud

89:39

customers for itself, Gemini, which it

89:41

owns and produces. It uses primarily

89:44

TPUs and perhaps exclusively TPUs as

89:46

well as maybe some CPUs as well. Um, so

89:48

it doesn't, you know, so it's made this

89:50

I use I use Gemini exclusively and I'm

89:52

extremely happy with it and many people

89:54

who, you know, know a lot more than me

89:55

say that Gemini model is is is extremely

89:58

powerful and good. Does Gemini uh does

90:03

does Gemini avoid these pitfalls of of

90:07

Moore's law and all these scaling things

90:09

that you say that Western US AI is is

90:12

plagued with?

90:12

>> Yes. in a I mean I I it's it's too sharp

90:16

an answer but essentially yes because

90:19

the TPUs are not facing the same sorts

90:22

of levels and the imperatives you've got

90:25

to get smaller and smaller and smaller.

90:27

They're basically saying no no no just

90:29

build me a chip that's a horse for this

90:31

course. It's it's it's built for

90:34

purpose. Um and it serves a purpose. It

90:36

doesn't it's not that you know

90:38

four-wheel drive Mercedes that only

90:41

drives around on city roads. It's just

90:44

maybe just a four-wheel drive vehicle

90:46

that only goes,

90:47

>> you know, in the in the rural areas or

90:50

it may just be your little hatchback

90:52

that runs around, you know, in the

90:54

local. The point being is that it is not

90:57

an all singing and all dancing Nvidia

90:59

full service chip. It's a much more

91:03

restricted

91:05

chip in terms of its capabilities, but

91:08

nevertheless, it serves the purpose that

91:11

Google and indeed I'd say that with the

91:13

trainium for Amazon, Amazon, but

91:16

recognizing Amazon is not as full

91:18

service as as Google is. Um, but

91:20

nevertheless, it's interesting to see.

91:23

And what this is doing is it's starting

91:26

to violate the most sacred

91:29

piece of uh of intellectual property

91:32

that that Nvidia has, which is CUDA.

91:34

They're their their essentially their

91:36

moat, it's called that protects the

91:39

Nvidia chip. Um, and what ultimately

91:42

gives it, dare I say, the value that it

91:45

has. what you can do now. It's a much

91:48

more limited service chip that that that

91:51

is being produced by the likes of Google

91:53

with a TPU. Um and nevertheless, um

91:58

Google is not left saying, "Oh, but we

92:00

need to be able to do that as well." Or

92:02

to the extent that they are, they might

92:04

buy um those GPUs um as an extra, but

92:08

for the purpose that they are talking

92:11

about now, that TPU serves that purpose.

92:15

It's amazing how ext.

92:18

A year ago, people said Google is going

92:20

to be dead because of AI because it's

92:22

going to replace their core business,

92:24

search. Now, not only is that probably

92:27

not true, search is is still very

92:28

dominant, but it has its own

92:31

architecture that in some ways is better

92:32

than open AI, which you know, and the

92:34

Microsoft uh universe. So, in terms of

92:37

investing implications, Michael, how are

92:39

you appro approaching this? Look, uh, if

92:42

all my universe was Wall Street,

92:46

um,

92:48

I would worry about quite a few of the

92:50

players of the so-called Max 7. I mean,

92:54

number one would be nobody we've talked

92:56

about at the moment, although they do

92:57

have some AI capabilities or potentials,

92:59

which of course Tesla. Um, but let's

93:02

leave them out of the equation. I think

93:03

that they are um when it comes to EVs

93:06

anyway um yesterday's story. Um of the

93:10

other big players out there I I worry

93:13

about Microsoft because I think they've

93:16

got themselves attached to what I think

93:18

is going to end up turning out to be a

93:21

dead weight in OpenAI,

93:23

um which is very sad because I think

93:26

although I paid too much money towards

93:28

Microsoft every year to be able to speak

93:29

to you on a day like this um but I think

93:32

Microsoft has has has done well. Um, and

93:35

I think that they are showing signs that

93:38

they might be trying to diversify away

93:41

from open AI as their only area where

93:45

they're going to have exposure moving

93:46

forward. I think that they, as I

93:48

understand it, they've got a nice deal,

93:49

sweet deal through through 2032 or

93:52

something with with open AI. And I think

93:55

that they basically have decided that in

93:57

that time they better come up with

93:58

something as a precaution that is

94:02

alternative to our exposure to open AI.

94:04

But nevertheless, I think it's

94:06

potentially short to medium term bit of

94:08

a dead weight. Um I think meta is um

94:12

lost.

94:14

I don't think I think Zuckerberg is

94:17

casting around for all sorts of ideas at

94:19

the moment and I can't see them putting

94:21

together a story

94:24

and I think the fact that they've just

94:26

lost their best scientist who just

94:29

walked out of the door and you may have

94:31

read the financial times interview on

94:33

last weekend

94:34

Yann LeCun, uh, and he basically had very

94:38

few nice things to say about what's

94:40

happening in Meta now. He's a Turing

94:42

Prize winner. He's a difficult man

94:46

and very difficult, but he's

94:48

nevertheless genius there's no question

94:49

about it he's an absolute genius and he

94:52

had some pretty

94:54

harsh things to say about meta so I

94:56

think meta is a bit of an orphan and

94:58

they better get their act together

94:59

quickly otherwise I see meta in 3 to 5

95:03

years time being a souped-up search

95:07

engine attached to

95:10

retail options that go into Facebook and

95:14

the other members of the meta WhatsApp

95:17

and the like. I really do think I mean

95:19

that their LLM offering is nowhere.

95:24

>> Yes. And I actually I believe that

95:27

um so Meta's

95:31

model is as you say open source and so

95:33

they're distributing it everywhere. So

95:34

it's not just at meta.ai but I believe

95:37

the web traffic of meta.ai was

95:41

something like only 10 times more than

95:43

my podcast. So my podcast gets onetenth

95:46

of the traffic and in terms of listening

95:47

time in terms of time as meta.ai

95:50

>> maybe being modest about your podcast.

95:52

So maybe that you know such a high level

95:55

of traffic that you've got. Um that's

95:57

how um you know poor old Meta can't keep

96:00

up with my point is that it's not a lot.

96:02

You know

96:03

>> it's not a lot. I I I understand where

96:05

you're coming from.

96:06

>> Yeah. Um, but no, I think I think Meta

96:09

hasn't got it worked out. And losing,

96:12

which is what LeCun said in his

96:14

interview, losing their way with Llama,

96:16

which was something that was an

96:17

incredible option and then basically

96:20

having, you know, been all in Llama

96:22

until about August last year and then

96:23

suddenly going silent altogether on

96:25

it. Um, and not having anything to

96:27

replace. Um, and obviously, you know,

96:30

they haven't got a closed source unless

96:32

they buy anthropic. I mean, that's

96:33

something that they could do.

96:35

You know, Anthropic is a great LLM, but

96:39

it's a bit of an orphan and you're not

96:41

sure how they're essentially going to

96:43

pay for themselves moving forward unless

96:45

they have a a sugar daddy like Meta

96:47

behind,

96:47

>> right? And Meta Meta does have a hugely

96:50

profitable business and it's investing

96:52

in all this stuff to AI to make its own

96:55

product better rather than, you know,

96:56

serve the compute out to to uh clients

96:59

who who are other LLMs. That's why I

97:01

think like uh Microsoft and particularly

97:04

Oracle might get a little more screwed

97:06

than than Meta because Meta at the end

97:07

of the day let's say let's say you're

97:09

right Michael sorry let's say you're

97:11

right and you know Meta loses tons of

97:13

money the depreciation is is immense and

97:15

the revenue growth is simply not there

97:17

not enough and then the stock you know

97:19

goes down 80% like it did again I think

97:21

the playbook is probably pretty similar

97:22

to 2022 where it's a buying opportunity

97:24

because it's it's kind of like if

97:25

Coca-Cola was spending billions of

97:27

dollars on trying to you know go to Mars

97:29

or something it's like yeah they'll just

97:30

eventually stop losing money and they'll

97:32

do it. You know, contrast that with

97:34

Oracle. Oracle's not it's not on its

97:36

balance sheet yet, but it's it's it's

97:38

committed to a quarter of a trillion of

97:40

of uh lease obligations for its data

97:42

centers.

97:43

>> If you are I didn't wasn't including

97:45

Oracle and CoreWeave in our

97:46

conversation at the first instance

97:48

because they're not in the Mac 7. But if

97:51

you were asking me that given the

97:52

choice, would I buy Meta ahead of CoreWeave

97:54

or Oracle? I would buy Meta. If that if

97:57

that's my menu, I would buy Meta. and

98:00

you're damning it with fake comparison,

98:03

if that's a phrase that we can come up

98:05

with. Um, and you're absolutely right. I

98:07

mean, if you want to focus on where the

98:12

problems in if there is a bubble, and

98:15

let's not get into that subject, but if

98:17

you were on bubble watch at the moment,

98:19

you should focus on Oracle.

98:22

>> And I mean, you said you don't want to

98:23

use the word bubble, but in the piece

98:25

you use the bubble several times. You

98:26

say bubble bubble toil and trouble and

98:28

you say you know maybe there's a bubble

98:30

in Oracle and CoreWeave, which are publicly

98:32

traded the massive the granddaddy of all

98:34

the bubbles open AI right which is

98:36

private.

98:37

>> Look I did speak about and I spoke about

98:39

the finances particularly underlying um

98:42

uh Stargate for instance. However, my

98:46

central thesis at the moment is that the

98:49

real bubble is technological, but the

98:51

financial bubble is a symptom of that

98:53

technological bubble. And the two are

98:56

interconnected.

98:57

And that's not to say if one is to

98:59

burst, which one is going to burst

99:01

first, but the relationship's almost

99:04

umbilical, so that if one does burst,

99:05

the other one will probably suffer. Um,

99:08

but my central thesis in that paper is

99:12

that the bubble is technological. Where

99:14

does that leave us with the Chinese

99:17

investable AI universe? There are

99:20

publicly traded securities. Uh Alibaba

99:25

uh is the is the producer of of Quen uh

99:28

which we referenced. Deepseek is is a

99:30

you know is a private company. There is

99:32

Tencent, which interestingly is, uh, you

99:35

know a lot of which is is owned by uh um

99:37

Naspers you know the largest South

99:39

African company. Um, and then there's

99:41

tons of Chinese companies that I've

99:44

never heard of, most people have never

99:45

heard of that are up over 100% and I

99:47

think a lot of those companies are

99:48

suppliers to the AI infrastructure as

99:50

well. And that would certainly be what I

99:52

consider a high-risisk area. But, uh,

99:54

are do you have any bullishness? You so

99:56

you have a certain caution, let's not

99:57

call this bearishness, but certain

99:58

caution over, you know, strategists are

100:00

never bearish, they're only cautious.

100:02

Um, certain caution over US securities.

100:05

Uh what about are you are you bullish or

100:08

optimistic about Chinese securities?

100:10

>> I prefer the ones like Tencent and

100:11

Alibaba

100:13

where there is something else involved

100:15

in the company.

100:17

Um but when METAX soared 700% on its IPO

100:23

debut

100:24

um we're seeing some unbelievable first

100:27

day pops uh that are happening. I'm

100:31

actually deep down quite concerned about

100:33

particularly the new issues that are

100:35

coming to market in the US um sorry in

100:38

China at the moment and that's both on

100:41

um Hong Kong and on things like the

100:44

Shanghai or the Shenzhen indexes. So, I'm

100:46

not just confining myself to Hong Kong,

100:49

but be careful because I I think there

100:52

is

100:53

a bubble potentially forming in some

100:56

of these second tier. And I don't wish

100:58

to sound that I don't think that they're

101:00

producing some great product for some of

101:02

these second tiers. If it's chip

101:04

related, I'm probably a little less um

101:08

worried. uh if it's just software

101:11

related I'm hugely worried because uh I

101:13

think they will face the same pressures

101:15

the commoditization pressures um that I

101:19

fear for, uh, OpenAI,

101:22

which of course is moving into providing

101:25

uh hardware as well in the form of data

101:28

centers but nevertheless something that

101:31

is standalone and just an LLM at the

101:33

moment will be something that I would be

101:35

very very um careful about. So u uh

101:38

Tencent and, uh, Alibaba, absolutely. Baidu

101:41

potentially which is a very interesting

101:43

I mean it's the main search engine and

101:45

they're starting to develop their own AI

101:47

capabilities they're to some extent

101:49

doing what Google is doing dare I say it

101:52

it's not completely fair comparison I'm

101:56

probably um okay with at the moment but

102:00

some of the more recent issues um uh

102:03

with those sorts of levels of of first

102:06

day pops

102:07

Um, I am really quite concerned. Um, and

102:11

uh, so I I I'm not a, you know, what

102:14

whatever it takes commentator on on on

102:18

Chinese stocks.

102:19

>> That makes sense.

102:21

uh what about the more uh semiconductor

102:24

supply chain companies like ASML, uh, Lam

102:28

Research, KLA, these companies which you

102:30

know semiconductors have historically

102:32

been a very volatile industries you know

102:34

bankruptcies and the like but more

102:35

recently you know these companies appear

102:37

to be much more dominant companies that

102:39

in the case of ASML have a literal

102:41

monopoly on certain technologies um and

102:43

are therefore extremely profitable is

102:46

this uh is your thesis also a threat to

102:49

these companies in the same way it's a

102:50

threat to Nvidia and the oracles of the

102:52

world.

102:52

>> Well, yes, but perhaps for a slightly

102:55

different reason. Um, and essentially

102:58

because and ASML has got two or three

103:01

years of uninterrupted

103:04

blue sea, calm sea ahead of it. But the

103:08

Chinese have have essentially been able

103:10

to come up with their own EUV at the

103:12

moment. And um, it's probably 2028 when

103:16

it will be commercially available and

103:18

being used to make chips in China. The

103:20

company there is called SMEE,

103:23

um and recently it basically sold its

103:25

non EUV related activities

103:29

uh out and it's essentially

103:30

concentrating now on its EUV

103:32

capabilities but they uh essentially

103:35

thanks to what has been called China's

103:37

Manhattan Project in Shenzhen, have

103:40

built their own EUV which appears by all

103:42

accounts to work. Obviously, there's

103:44

going to be some teething issues, I

103:47

suppose, in in in putting it together,

103:49

but as I say, they're probably going to

103:51

have it up and running um within two or

103:53

three years. If that's the case, then

103:57

the ASML is no longer the only kid on

103:59

the block. Um and uh and that then starts

104:02

to uh cause issues for the likes of

104:06

ASML. Uh I think with regards to other

104:09

players in the in the hardware supply

104:12

sector, you have to look at it on a

104:13

case-by-case basis. Um, see the market

104:16

that they operate in, see what the

104:19

potential competition might be, not just

104:20

from China, but but in the first

104:22

instance from China before you make

104:25

comment that yes, AMD for instance is

104:28

super secure. Um there are all sorts of

104:31

other players that are now starting um

104:33

to come in. I mean there's a very

104:34

interesting company out of China at the

104:36

moment called Moore Threads, interestingly

104:38

using Moore in the same sense as Moore's Law,

104:42

um and they are nicknamed

104:44

China's Nvidia. Now, they are far from

104:48

being a serious competitor to Nvidia at

104:50

the moment but the technology that

104:53

they're working on at the moment is

104:55

pretty amazing

104:57

um and again fast forward three years

104:59

maybe four years, Moore Threads may

105:03

not take on Nvidia but it can start

105:05

impacting its margins. And I think

105:08

that's the point. That's how

105:10

commoditization takes place it doesn't

105:12

necessarily happen on day one two three

105:14

or year one two three but eventually

105:17

when there is — and Nvidia is pretty much a

105:19

monopolist in high-end chips, just as ASML is pretty much a

105:22

monopolist in EUV machines — but when

105:24

there is another kid on the block um you

105:27

know they no longer uh are a monopolist

105:29

and they can no longer necessarily

105:32

guarantee to extract monopoly rents.

105:35

>> You write in the piece that the

105:37

hyperscalers, the US data center

105:40

companies, are trapped in a prisoner's

105:42

dilemma. No one can stop spending,

105:46

because then the rivals will surge

105:47

ahead. So collective overinvestment

105:50

guarantees systemic collapse. Is it your

105:53

base case that these companies keep on

105:55

spending money into the ground so that

105:58

capex in 2026 is way higher than it was

106:01

in 2025 and the same way capex in 2025

106:03

was way higher than it was in 2024 and

106:05

that at some point there's going to be

106:08

as you say a systemic collapse a giant

106:10

bust where the market and and investors

106:14

and the world suddenly realizes that

106:16

these investments were, to put it mildly,

106:19

excessive?
>> I wish I could say no.

106:23

what you've described and I didn't

106:25

mention it in the essay but has also

106:27

been referred to as the

106:28

Red Queen dilemma from Alice in

106:30

Wonderland, um, from Lewis Carroll, where the

106:34

red queen says I have to run twice as

106:36

fast just to stay in the same position
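A minimal sketch of the payoff structure being described here — the choices and numbers are purely illustrative assumptions, not figures from the discussion — showing why "spend" is each hyperscaler's best reply whatever the rival does, even though mutual restraint would leave both better off:

# Toy prisoner's-dilemma payoff matrix for hyperscaler capex (illustrative numbers only).
# Each entry maps (firm A's choice, firm B's choice) -> (A's payoff, B's payoff).
payoffs = {
    ("restrain", "restrain"): (3, 3),   # both keep margins healthy
    ("spend",    "restrain"): (5, 0),   # the spender captures the market
    ("restrain", "spend"):    (0, 5),
    ("spend",    "spend"):    (1, 1),   # collective overinvestment erodes returns
}

def best_reply(rival_choice):
    """Firm A's payoff-maximizing choice against a fixed rival choice."""
    return max(("spend", "restrain"),
               key=lambda mine: payoffs[(mine, rival_choice)][0])

for rival in ("restrain", "spend"):
    print(f"If the rival chooses {rival}, the best reply is {best_reply(rival)}")
# Both firms reason identically, so (spend, spend) is the equilibrium,
# even though (restrain, restrain) would leave each better off.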

106:39

and to some extent, when I listen to

106:41

some of the hyperscalers speaking at the

106:42

moment about their capex plans that

106:45

seems to be the logic behind uh what

106:48

they're doing that they can't stop now

106:50

because they will fall over. It was

106:52

fascinating to look at what happened

106:53

after Meta released its latest set of

106:56

earnings. I think the stock went down

106:58

20%. And the reason why is that people

107:01

were saying, "Hang on a sec, this is

107:02

just becoming ridiculous." Have you seen

107:05

those capex plans? Um, and there is a

107:08

sense that at some point, and one has to

107:11

think about who are going to be the

107:12

survivors of this, some of the dare I

107:15

say it weaker players will probably fall

107:18

by the wayside. Now whether that impacts

107:22

negatively on those that continue to

107:24

operate, I can't say — it probably does — but

107:27

they will be survivors and one can go

107:29

back to what happened in 1999 you know

107:34

Amazon saw, correct me if I'm wrong, its

107:37

price fall 80% when the dot-com bubble burst,

107:41

but it was a survivor

107:43

it lived to fight another day and it

107:45

founded something called Amazon Web

107:46

Services; it found compute as this new

107:49

province.

107:51

I think that there will be

107:53

casualties along the way. I think that

107:56

that those that remain as survivors

107:58

will be negatively impacted, but they

108:01

will live to fight another day. Um

108:04

and it's going to be fascinating to see

108:05

which are the survivors. And to some

108:07

extent, your earlier set of questions

108:09

could have actually been uh my answers

108:12

could have been uh essentially so who do

108:14

you think the survivors are?

108:17

Um, and and I think that's what if

108:19

you're a serious investor in the US

108:20

markets at the moment, you need to start

108:23

asking yourself just as a precautionary

108:26

exercise, if nothing else. So, who are

108:28

going to be the survivors? How are

108:30

you assessing the odds that um, you

108:33

know, if let's say someone's equally

108:34

invested in the Oracles and the US

108:38

AI of the world and the Chinese AI of

108:40

the world, public and privates, are they

108:42

going to lose? I think you, based

108:44

on your thesis, probably think

108:46

they're going to lose money on their US

108:47

investments and make money on their

108:48

Chinese investments. Are they gonna make

108:49

more?

108:50

>> They're going to lose less on their

108:52

Chinese.

108:53

>> If you're talking about a broad

108:55

cross-section on both sides that

108:57

includes the equivalent of Chinese CoreWeaves

109:00

and Chinese Oracles, there are some

109:02

potential losers in the Chinese

109:05

stack, to use the term that is often used

109:07

in the world of computers just as there

109:09

are potential losers in the US stack.

109:12

Um, which one is most overpriced at the

109:15

moment? The US stack. Um, but does that

109:17

mean that there are no overpriced

109:19

options available inside China? No.

109:23

And secondly, I mean, just like I'm

109:26

moving into a different space at the

109:27

moment. If you ask me to invest in

109:29

Chinese EVs, there are just so many of

109:32

them out there at the moment, and

109:33

they're all doing spectacularly well in

109:35

terms of their technological

109:36

capabilities, but I'm not sure they're

109:38

all survivors. So I think there's going

109:40

to be a thinning out in the EV sector, and I

109:42

think there will be thinning out stroke

109:45

consolidation that's going to happen in

109:46

the AI sector too. Um but who are going

109:50

to be the survivors? Well, I I've got my

109:52

own ideas. I mentioned three of them

109:53

just now. I think Tencent, Alibaba and

109:56

Baidu definitely will be survivors. But

109:58

will they sail through this without any

110:00

scars on their face?

110:01

>> No.

110:02

>> All right. Now, um I'm going to ask

110:05

another question. Not about just the

110:06

broad ecosystem, but specific companies.

110:08

Uh there's a guy named Doug. Doug owns

110:12

six stocks. He owns the US AI which is

110:16

let's say Nvidia, Oracle, and um

110:21

Microsoft, Nvidia, Oracle, Microsoft.

110:23

And then he owns three Chinese stocks,

110:26

Baidu, Alibaba, and Tencent. Is he

110:30

going to lose more on his US investments

110:32

than he's going to make on the Chinese

110:33

investments, or do you not know?

110:34

>> Such a difficult question to answer. I

110:37

really don't know what his entry price was

110:38

on all of them. But if we can say

110:41

today's price is the starting price, the

110:43

entry price,

110:45

as a way to indulge your question,

110:48

Um I would worry more about um the US uh

110:52

stack uh than I would about the Chinese

110:55

stack.

110:56

>> Thank you. Uh well Michael it's it's

110:58

been an enormous uh privilege to hear

111:00

your your views on this. You you've put

111:01

a lot of work into it. We will be

111:04

attaching your uh paper which can be

111:06

read uh in full on your website and you

111:09

also posted it on LinkedIn. Uh please

111:12

just tell us a little bit about some of

111:14

the work that you do at Kaskazi Consulting.

111:16

You were a a strategist um for an

111:20

investing firm for for a long time. Your

111:22

main focus most of the time is not AI.

111:25

Other than this, what are you focusing

111:26

on and and you know what kind of work

111:28

are you doing right now?
>> I sometimes feel as

111:29

though I'm a sort of DeepSeek, in the

111:31

sense that I'm not out to make lots of

111:33

money at the moment. Um I'm just out to

111:36

try and understand what's happening. Um

111:38

and I get invited to speak at a lot of

111:40

venues. Some of which I do remotely, like

111:43

I'm doing with you today. Others which I

111:45

do in person. I have to say I say no to

111:47

quite a few of the invites because I

111:49

simply don't want to crowd my life too

111:51

much. So um consider yourself honored.

111:54

No, I don't mean that. But you

111:55

understand what I'm saying.

111:56

>> No, I am. I am.
>> Um, but I do speak,

112:01

I do write. Um I'm a one-man band. Um

112:05

and so uh my research goes where I want

112:08

it to go. I'm not driven by somebody who

112:10

says, "Right, I want you to look at this

112:12

subject now or I want you to look at

112:13

that subject now." Um for the moment and

112:17

for the foreseeable future, I will focus

112:20

on on AI um as it manifests itself. And

112:24

I particularly feel as though I have

112:26

something extra to add

112:29

um in understanding

112:31

um the Chinese ecosystem. Um when I

112:36

speak to really well-informed people of

112:39

the US AI ecosystem,

112:42

I'm horrified by how little they know

112:45

about the competition.

112:48

So I don't know nearly as much as they

112:50

do about the US AI system, but I know a

112:54

gazillion times — not gazillion, but

112:57

multiple times more about what's

112:59

happening in the Chinese system. And for

113:01

me, for instance, the fact that many of

113:04

them don't really know that Chinese AI

113:06

software is offered for free. I I mean

113:08

it's a simple fact but it's a powerful

113:10

one. Just tells me that they probably

113:13

don't know what's coming which is why I

113:16

suppose I do get invited to speak um to

113:19

people like yourself. Um because I'm

113:22

trying to understand what's coming.

113:24

Um and I don't think I'm going to get it

113:27

all right. I don't think I'm going to

113:28

get it even half right. But I think I'm

113:30

going to get it more right. Partly

113:32

because I've actually investigated it

113:34

seriously.

113:36

um I'm probably going to get it more

113:37

right than wrong as compared to many

113:40

western analysts at the moment. So

113:42

that's probably what I have to offer at

113:44

the moment and that is that I I spend my

113:47

days searching through and if I can

113:49

leave one piece of advice if people want

113:51

to try and acclimatize themselves, get

113:54

abreast of what's happening in Chinese

113:55

tech, they should subscribe to the South

113:57

China Morning Post um which has the best

114:01

tech writers uh in English in the world

114:04

partly because — and many people don't

114:06

really understand it — Hong Kong,

114:11

which was at one stage very much the

114:14

big brother to Shenzhen, is now the

114:17

small brother, and yet the two are two

114:21

metro stops away from each other. Um and

114:24

the real action is happening in Shenzhen

114:28

but also in Hangzhou, which is where

114:30

Alibaba's based and DeepSeek is

114:32

based. So, for some unknown

114:37

reason, the people at the South China

114:39

Morning Post — and I'm not being paid by

114:41

them to tell you this — all I'm trying to

114:43

say is, if someone wants to play catch-up

114:45

and try to understand what's

114:46

happening in the Chinese uh tech space,

114:49

start reading the South China Morning Post first.

114:51

>> Yes, we will leave it there. Thank you

114:53

everyone for watching. Please leave a

114:55

rating and review for Monetary Matters

114:56

on Apple Podcast and Spotify and

114:58

subscribe to the Monetary Matters

115:00

YouTube channel.

115:01

>> Thank you, Jack.

Interactive Summary

The discussion centers on the accelerating advancements in AI, particularly highlighting the contrasting approaches and potential future dominance of China over the US in this field. Michael Power argues that China's open-source, open-weight AI model strategy, coupled with significant investment in smart factories and robotics, positions it to outmaneuver the US. He criticizes the US's closed-source, service-oriented, and capital-intensive AI model, suggesting it may lead to a bubble bursting due to unsustainable costs and limited long-term viability. Key themes include the economic advantages of China's AI model, the limitations of hardware miniaturization (Moore's Law), the concept of commoditization, and the critical role of energy and alternative chip architectures. The conversation also touches upon the potential for a bifurcated global AI landscape and the differing investment landscapes in the US and China.
