How Chinese A.I. Could Change Everything | Dr. Michael Power of Kaskazi Consulting
When I speak to really well-informed people in the US AI ecosystem, I'm horrified by how little they know about the competition. The Chinese approach, which is open source, open weight specifically, will likely win out. There's not much road ahead for Nvidia to continue to miniaturize. The Chinese are now moving, I think, into the next stage. Smart factories are just spreading across China at an extraordinary rate. Last year, China installed more robots than the rest of the world altogether. What DeepSeek came up with is far more radical than anything OpenAI has ever come up with. The amount of debt that's creeping into the system, both on and off balance sheets, should be of concern at the moment. It's particularly of concern in related areas like, dare I say it, Oracle, and the numbers that are being talked about by the likes of OpenAI every now and again start to almost approach the same sort of level of absurdity. If you're on bubble watch at the moment, you focus on Oracle. We're heading towards some sort of crisis point.
>> I'm joined by Michael Power of Kaskazi Consulting. Michael is a veteran of macro strategy, among many topics. Michael, it's great to see you. You have just written an essay that is kind of blowing my mind. It is an extremely in-depth analysis of US AI architecture and Chinese AI architecture. You find that the Chinese AI architecture has significant advantages over the US AI architecture, and that basically the hundreds of billions, even trillions, of dollars currently being and going to be invested in US AI may be basically a bust. So this has extreme geopolitical consequences for the entire world, China and the US, but also economic and market consequences as well. There are so many strands we can grab on to, but how about you take us into the journey? How did you first start thinking about this? Tell us about the process of you discovering this, and then we'll get into it.
>> Well, thank you first of all, Jack, for having me. I'm now in semi-retirement, though I've discovered that the word retired doesn't exist in retirement. And I'm one of these people that was determined not to allow my brain to atrophy, and I still continue to make plenty of speeches on those more traditional subjects that you mentioned before. But nevertheless, I've made it my business to try and understand probably the great theme of the world today, not least because it has completely enraptured Wall Street. And I felt that in order to be able to have a meaningful contribution to make, I needed to understand it. So what I did for the first six weeks is I just immersed myself in everything AI, and in the first instance that basically meant learning the language of AI, because a lot of it is jargon which is not easy to understand. I first translated it into language which I could understand, and which any reasonably intelligent person such as yourself could understand as well. And when I was doing this, I started to realize, coming from an objective perspective, that the narrative that dominates much of Wall Street thinking was not as strong as it was made out to be, and that the Chinese, although I don't think the situation as we speak today is one of the Chinese leading, have deep down a structural process that they're putting in place with regards to how they're approaching AI which, roll forward three years, will I think outmaneuver that of the United States. Their model, which is first and foremost built on the idea of open source rather than closed source (and we can come to that if you will), has a lot more runway ahead of it, and it's a lot cheaper runway, than does the US model, which I think is actually starting to run out of road. If you look at the US AI ecosystem, I would roughly estimate the valuations in it at 15 trillion dollars, counting all the publicly traded securities and then the venture-capital-funded companies. Whereas with the Chinese ecosystem, a lot of the market cap is private or government-backed. I mean, Alibaba has a market cap of less than half a trillion dollars, which is certainly still a very large company, but it pales in comparison.
>> First of all, Michael, I just want to share some of the consequences. You said that basically the grim reaper is coming for the American AI bubble. We'll get into that in a second, but first: what is the source of the Chinese advantage over US technology, and how does that disprove or challenge the consensus within America, in Silicon Valley as well as on Wall Street, about the US AI advantage?
>> Well, I'm going to use a four-letter word in a very erudite discussion like this. The essence of the Chinese AI approach is that it's free. And I can't say that enough. And that is because they have a completely different philosophy as to what AI should be as compared to the US model. China is building a structure at the moment where AI will be a utility, like electricity, and the value that is going to be derived from using the electricity is where they're going to benefit; the electricity itself is a pretty low-value product. Yes, you can make a little bit of money along the way by generating electricity, but as you well know, the real value from electricity comes from what it is purposed to do. Whereas the US model is essentially that it's a service that can be monetized, if they can monetize it to the degree that they need to given the capital expenditure they've undertaken, but it's a service. And these two profoundly different philosophies mean that there are two paths developing. To some extent, and the analogy is not precise but it's good enough, the Chinese approach is the Android approach, because as you probably know, although Android is technically owned by Google, Google derives no money from that; it's actually controlled by a foundation, and that foundation is a nonprofit. Compare that to Apple and their ecosystem, which is extremely profitable. Android actually follows another great example, and that is of course Linux. If you look at the top 100 supercomputers in the world today, Linux runs 100 out of 100. So essentially there is this philosophy of open source, or open weight, which is essentially the Chinese approach to AI, versus closed source. And my sense is that over time, as with Linux, as with Android, the Chinese approach, which is open source, open weight specifically, will likely win out, which is why I say they have a longer runway ahead of them than does the United States. I should say here that the US, like Apple, may be able to create an ecosystem that it is secure in. There are four countries in the world where Apple is more popular than Android, but everywhere else in the world Android is more popular than Apple. Those four countries, by the way, are the US, Canada, the UK and Sweden.
And what we may be beginning to see, in a very analogous sense, is the emergence of a bifurcated world where, in large part because the Chinese offering is free, it's winning big time outside of the core capitalist world centered on Wall Street. I've often had chats with people who are essentially Apple heads, or iOS heads, and they don't really grasp the world of Android. They don't understand that there is another way. And I think what's happening at the moment is that there is another way in AI, and it's winning big time in the world at large, not just in terms of what's happening in the United States.
>> And you note that Chinese AI models have a very high rate of adoption, and that a venture capitalist, I believe from a16z, said that 80% of the startups they fund are using Chinese models, not US models, which is certainly very surprising. So, open source versus closed source: closed source literally means that the model's weights are not disclosed, whereas with open source they are. But you're referring to the pricing model.
>> It goes further than that.
>> Okay.
>> With an open-source model, you can play with it, but with a closed-source model, you can't. And I always make the distinction that closed source is like ordering beef Wellington: a perfect beef Wellington arrives on your plate. With open source, you order beef Wellington, but when it arrives, you get a list of all the ingredients, and you can play with it, mix the ingredients up, completely reform it, and conceivably create egg-and-bacon ice cream out of it. Nevertheless, you have complete freedom, once you receive your beef Wellington in the open-source world, to reconfigure it whatever way you like.
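What "playing with the ingredients" means in practice is that open weights are just parameter tensors you can modify. Below is a toy numpy sketch of the idea behind low-rank adaptation (LoRA-style fine-tuning); every size and name here is invented for illustration, not taken from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Open weight: you receive the actual parameter matrix W and may alter it.
W = rng.standard_normal((8, 8))          # a released weight matrix
A = rng.standard_normal((8, 2)) * 0.01   # low-rank "adapter" factors you add
B = rng.standard_normal((2, 8)) * 0.01
W_adapted = W + A @ B                    # your reconfigured variant

# The adaptation factors through a cheap low-rank update:
x = rng.standard_normal(8)
print(np.allclose(W_adapted @ x, W @ x + A @ (B @ x)))  # True
```

With a closed model you never hold `W` at all; you can only call an API, which is the sense in which "you can't play with it."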
>> Okay. But when you say it's free: ChatGPT is massively subsidized, and there are free versions of Gemini, ChatGPT, all the American models, and I'm sure there are of the Chinese ones too. Are you saying that China doesn't charge its users, or that the charging model, the pricing model, is different?
>> I'd say less than 5% of the Chinese models have a user fee attached to them, and it's only for unbelievably specialized areas. To be perfectly honest, generally speaking, the answer is no, they don't have a fee attached.
>> Okay. But then what is the business model? A key part of your piece, which is excellent and which we'll link for our listeners to read, is that China has a cost advantage, but they still have costs. So how are the Chinese LLMs, and the clouds, and AI in China going to be funded if they don't make any money?
>> Well, just take Qwen, which is by far the most powerful of all of them. That is Alibaba's large language model. Qwen is applied and used everywhere else within the Alibaba community. So Taobao, which is their online shopping, or Ali Logistics, or Alipay, will almost certainly use Qwen as their model. Now, in using that model, there may be, and often is, a small fee that gets paid across from Taobao to essentially the central resource that is Tongyi, the AI unit that Alibaba controls, and that's how it then gets funded. Now, this is not the same, I must say immediately, for all Chinese LLMs. There are some that are not connected to a broader commercial network, but we'll come on to that later, because there is a mechanism now arising whereby they can be. But the central point is that it's like a central expense, the research and development budget as it were of Alibaba, that goes towards Qwen, and some fees do come in from other parts of the Alibaba empire to help fund that budget.
>> So, let's say, Michael, if you're right, in five years, what does the world look like? Nvidia, all of...
>> Can I just follow on something there, because it's clever, and it's only in the last week that it's happened. As I said to you before, Google essentially owns Android but doesn't really make much money from it. However, because virtually every Samsung phone that I know of runs on Android, it has recently done a deal with Samsung that the search engine of every Samsung phone going forward will carry Gemini, in other words Google's LLM, and Samsung is now paying a fee to Google for the rights to have that embedded in their phones. So there is an example of how it's being done in that context. And this is just happening everywhere within the Chinese community; it isn't just a one-off exception.
>> But Android is able to be partly monetized through arrangements like that.
>> And I'm just using that as an example.
>> Thank you for explaining that, Michael. So now we have a sense of our terms and what we're dealing with here. So, if you're right, in five years, what does the world look like? The publicly traded semiconductor supply chain of US AI, the privately backed AI models, most recently OpenAI, Anthropic, etc. What does that world look like? I imagine that those companies would, in your view, face severe challenges. And how does that world differ from the scenario envisioned by many of the rosy-eyed US AI optimists, who probably expect Nvidia's market cap to be above 10 trillion and expect OpenAI to be massively profitable, and the rest?
>> Well, first of all, we have to be careful that when we use the term "world" we don't just mean US world; we mean world.
Unfortunately, when I'm listening to Bloomberg and CNBC and they use that term "world", it generally speaking means Apple world. It's just the world that is defined by the United States. There are some international dimensions to it, but it's a very small part of the whole. So if we're talking about what US world will look like, to start to answer your question: I think we're going to see, and we're already beginning to see, when Andreessen Horowitz is seeing 80 to 90% of the people presenting to it using free software from China, that there are, and I'm borrowing a word here from Jamie Dimon, and you'll understand it given your background, cockroaches all over the place at the moment that are essentially starting to be used. We're seeing that Airbnb has now essentially moved over to Qwen. There is nothing stopping people who find this software, and I'm going to use a phrase which is not really fair, because it's actually very good, but let's just start with "good enough": they're finding they can use it for free, and it's good enough. Just take the example of Qwen: it's fluent in something like 120 languages, which, if you're Airbnb, is rather a good plus to have. So I think what we're seeing now is that there is leakage between the two ecosystems that we've described, US world and world, and the leakage runs from US world into world. So projecting five years forward, to go back to your question: I see this leakage continuing. However, it goes beyond that, because there are technological issues that are now beginning to call into question, firstly, the making of the hardware and, secondly, the construction of the software. They feed off each other, but they need to be looked at separately before we combine them. First of all, I don't think Nvidia's hold on the chip market is going to be anything like as strong. Part of this is because of what Amazon is doing with its own chipmaking, and what Google is doing: Amazon has Trainium, Google has the TPU. I think what we are seeing is that even within the United States, a low-level civil war is breaking out between the major players in big AI, so much so that the dependency on Nvidia chips is starting to be reduced. But outside of US AI, in the rest of the world, there is no doubt in my mind that we are seeing all sorts of efforts to diversify; the Chinese even have a word for de-Nvidia-ization. Diversify, that is, away from dependence on, let's just call them, expensive chips, and most of those expensive chips historically have been Nvidia chips. So on the hardware side, there are all sorts of options starting to open up.
First of all, and you'll see this in the opening part of my presentation, the whole area of Moore's law is starting to come under pressure, because basically we're getting to a point where you can't really make chips much smaller and still hope to continue the compute gains that come from those chips. There are rules of physics, rules of what I call materials chemistry, and then rules of economics. I call them the three assassins, and they are saying that there's not much road ahead for Nvidia to continue to miniaturize its chips. What China is doing, and they're not alone in this, it's happening even in the United States, is essentially creating a whole new ecosystem that is basically built around not 2nm or 3nm but somewhere in the 14nm to 18nm specs. And what they're doing is building what I call cognitive skyscrapers, cognitive towers. They use the chip as the base, but then they layer other sorts of chips and memories and various other things on top of it, and create a sort of megalith-like world. That is increasingly the way forward. So I don't think China is thinking smaller; it's thinking smarter. This is one way you can continue to be very relevant in the chip space without chasing down that rabbit hole of making chips ever smaller. Because, as I said, Moore's law is close to dying. It's on its deathbed; it may not have drawn its last breath, but it's not looking good at the moment. So in the hardware space, Nvidia is recognizing, and we're already seeing certain things they're doing here, more in the software space, but they too are starting quietly to do it in the hardware space, that its model up until now cannot be the only way forward. It has got to diversify, to look at slightly larger chips, and, we haven't talked about software yet, to look at complementary software with those slightly larger chips, in order for it to remain relevant. The problem for Nvidia is that the margins associated with that alternative are much, much lower than with the model they're now pursuing, which then, moving forward, starts to potentially undermine the hold that Nvidia has and the margins that it has, and that will potentially play through into profits and therefore stock market valuations. So that's what's happening on the hardware side. I've truly generalized in a number of areas there in order to be able to answer your question, but that's essentially what's happening. And as I said, the Chinese are doing it, yes, and they've got a vested interest to do it, but even Amazon's doing it, even Google's doing it: they want to diversify away from these high-priced chips of Nvidia's that increasingly are not built for purpose, and the new world we're moving into for chips requires a different combination. So that's what's happening on the hardware side. On the software side,
there are breakthroughs happening everywhere. It's not that Nvidia hasn't got a very powerful software hold on its chips; it does, through something known as CUDA, which essentially ties developers in to the way they're able to use Nvidia's chips. What's happening at the moment is that, both in the US but especially in China, people are finding ways to circumvent this, and they are doing it on both sides of chips: the training of them, and the inference derived once you train them.
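The two sides just named, training and inference, can be sketched with a toy linear model; everything below is invented for illustration, in numpy. Training makes many gradient passes over data and is the compute-hungry phase; inference is then a single cheap forward pass per query:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Training": fit weights w so that X @ w reproduces y.
# Many passes, gradients, memory traffic -- the expensive phase.
X = rng.standard_normal((200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.zeros(3)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(X)   # least-squares gradient
    w -= 0.1 * grad                      # one training step

# "Inference": one forward pass per query, which is why older,
# lesser chips can still usefully serve this phase.
query = np.array([1.0, 1.0, 1.0])
print(round(float(query @ w), 2))        # 1.5, i.e. 2.0 - 1.0 + 0.5
```

The asymmetry in the sketch (hundreds of passes versus one multiply) is the reason the two phases have such different hardware economics.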
So you endow a chip with knowledge, and then you get that chip able to answer a question that you and I might pose to it. In the training stage, which is where Nvidia has an especially good hold, there are breakthroughs taking place, the most important of which happened in the last 10 days: something that DeepSeek did, and it happened on New Year's Eve. I don't think the market has truly understood the scale of what that particular breakthrough means. But it's also happening on the inference side, which is, to be perfectly honest, where lesser chips are used, often chips that were previously involved on the training side: after two or three years they've still got a little bit of useful life left in them, so they get transferred over to the inference side for another couple of years, where they can still be usefully employed. But the margins on inference chips are much lower than the margins on training chips. The essential point is that Nvidia is now facing attacks on both fronts, hardware and software.
>> So it sounds like the consequences of what you're saying are immense, because all of this money in the United States, in public markets and private markets, is being deployed on the premise that AI margins might be slightly lower than the traditional, extremely profitable US software business model with its 80 to 90% gross margins: slightly lower, but still a very profitable enterprise. You are saying that we're likely headed to a world where profit margins are extremely low, something of a communitarian co-op model, an open-source model, and that the consequence, you're saying, Michael, is that the trillions of dollars being spent by the US is basically just cash incineration, and that a lot of investors are going to lose money.
>> Obviously, it'll take time to pan out, and I'm not saying it's going to happen tomorrow. But I did my PhD thesis ultimately on the concept of commoditization, and I am able to recognize the traits that indicate that a particular product or service might be being subjected to the forces of commoditization, and I can now see those forces gathering, both on the hardware side and on the software side.
>> Yes. And you come from South Africa, which is a dominant player in commodities. It is no surprise that the South African stock market is a tiny fraction of the size of the US stock market, which has commodities but also has these other things. It is hard to make money from commodities, and they certainly do not command 40 or 50 times earnings multiples.
>> And South Africa is an example of that. But there are plenty more commodities in the world than simply the traditional ones like wheat, or metals, or even, dare I say it, some of the energy commodities. There are plenty of other commodities that have come into being. I would say that petrol-based automobiles are on the verge of becoming commodities. So the concept of commoditization is not exclusive, as I say, to the traditional term that is described as a commodity.
>> Yes. And Michael, I've said what is in your piece about how it could end badly for US investors because of the consequences. But can I get you to say it?
>> Yes. Essentially what is happening is that China has realized, and part of this has come about by the fact that they've been subjected to certain embargoes or controls, particularly from the United States, that it has been forced to look for another way, another tao as I sometimes like to call it, because the tao is "the way" in Mandarin Chinese. What's happening is that, necessity being the mother of invention, because they didn't get those Nvidia chips, they've had to find other ways of doing it. And as I say, they thought not smaller, they thought smarter, both on the software and on the hardware side. What DeepSeek did last year, and what I predict it's going to do this year, and we've already had a foretaste of that with their latest paper, is essentially challenge the margins that exist on the software side of the business. Sun Tzu would basically advise any Chinese general: if you don't think you've got the right number of forces to beat the enemy, change the battlefield. And there are plenty of almost Sun Tzu pieces of advice now playing out in the world of AI. As I say, there's going to be no gunfight at the 3nm or 2nm corral between Chinese chipmakers and, let's just say, Nvidia, because the Chinese aren't going to fight there. They know that's not a gunfight they can win. But they are now shifting the battlefield, and shifting it dramatically.
And when you are, dare I say, stuck in the US ecosystem, I don't think you can imagine that there might actually be another battlefield. But there is, and the rest of the world is catching on. It is starting to spread, both software and hardware, although the Chinese don't have a lot of chips to spare to export at the moment. But come 2028, the forecast is that China will be producing more chips than it needs. And it already dominates the world of what are called commodity chips, to go back to where we were before. You may remember the story of Nexperia, a Dutch company that got essentially shut down by the Dutch authorities at the behest of the Trump administration. Nexperia produced quote-unquote commodity chips, particularly for the auto sector of Europe, and this brought the auto sector to a standstill. The point being here is that the Chinese already dominate the low-value, low-margin chips in the world: up to almost two-thirds of the actual chip supply in the world has now, quote-unquote, been, and I don't want to say commoditized to the point where it can't create a profit, but the margins are very, very thin, and you have to be incredibly efficient if you want to stay in that space. So what's happening is that we're seeing this march up the value-added ladder, and the Chinese are now moving, I think, into the next stage, which is the medium-value chips, 14nm to 18nm, that have huge uses across the world. I mean massive uses: vehicles, your cell phone, cell phone towers. The areas where those sorts of chips are prevalent are almost too many to mention. In the world of, let's call it, the latest Apple iPhone, yes, you want one of those tiny chips, but there are many more applications for chips in the world now. I'll give you an example which is not being wholly recognized yet: chips that are being embedded in the smart factories of China, so that these smart factories can operate almost remotely. They don't have to have people in them. And those chips are not necessarily driven by a constraint about size. Size is not always the issue, but they need to be able to perform the function. I'll get back to my earlier comment: they're good enough. The result is that smart factories are just spreading across China at an extraordinary rate. Just to understand it: last year, China installed more robots than the rest of the world put together. So we are seeing this process take place across all sorts of areas which are, again, not necessarily wholly acknowledged in the United States. Now, I'm being a little cruel here, and forgive me, but they're not being acknowledged because there are not many factories left in the United States for these chips to be embedded in. And the Chinese, who by 2030 will have something like 45% of the world's manufacturing production versus 10% in the United States, are essentially moving their industrial structure over to smart chips, smart factories. And they're not the 2nm, 3nm chips that you're going to find in the world of iPhones. So, as I say, China is on a different path, a different road, and it's not fully recognized, partly because in the United States, where consumption is 80% of GDP and services are 80% of GDP, most of the AI talk is related to services: agentic AI that is essentially going to allow you to buy an air ticket on your phone. These are what get all the talk in the United States. They get talk in China too, I'm not saying they don't, but there are plenty of other conversations taking place about where chips can be used: embedded in drones, to an extraordinary degree; running the electricity system across China, to an incredible degree; the solar farms, the wind turbines. The list of applications for chips in China tends to be much longer than the list of applications for chips in the United States.
>> Thank you, Michael. So, we're recording January 8th, 2026. A little less than a year ago, in late January 2025, DeepSeek, a Chinese AI model company that was actually, I think, started by a Chinese hedge fund manager of all people, launched their model, R...
>> R1.
>> ...R1, and that model, and the fear that emerged a little less than a year ago, caused a mini one-or-two-day crash in the US semiconductor supply chain stocks. Nvidia in particular, if I remember, was down 16 or 17% at one point. Now, on January 1st, 2026, the first day of the year, while everyone in the West was partying and the markets were closed, you're saying that they released a new model, or a new paper, that could have similar consequences. Tell us about this.
>> Well, I think what DeepSeek is doing at the moment is a sort of dance of the seven veils before it actually drops R2, which it is going to do, in my prediction, just ahead of the Chinese New Year, which starts on the 17th of February. Essentially there have been a number of releases, and this is, I suspect, the last big release before that date, and the biggest by far, because what they've done is found a mechanism for essentially attacking the whole idea of memory in the training process of chips. They have found a way where, previously, a particular chip produced, let's just say, 100% of memory; now, if properly arranged within the software, you only need 7% of its power to produce that 100% you were previously able to get. So essentially they have increased the power of a small chip by roughly 15 times in the training process, and this again leads towards the commoditization of high-value chips, because they found a way around the whole idea that "you need our chips because you need 100". What DeepSeek has said is: well, actually, you only need seven to do that, because that seven will give you 100. And it's all about the new phrase that everyone is talking about, architecture, and architecture is happening both on the side of training chips as well as on inference. In fact, until a month ago, people didn't really talk about architecture on the side of training. Yes, there was some, but they did talk about architecture on the side of inference.
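The arithmetic behind the "15 times" figure quoted above is simple; this sketch just restates the speaker's own numbers, not independently verified ones:

```python
# Back-of-envelope check of the figures as quoted in the conversation:
# if 7% of a chip's memory budget now yields what 100% used to,
# the effective multiplier on that resource is 100 / 7.
multiplier = 100 / 7
print(round(multiplier, 1))  # 14.3, i.e. roughly the "15 times" quoted
```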
And it's not just the Chinese. Manus, the big company in Singapore which has just been bought by Nvidia, is very good at architecture, but it's inference architecture. What DeepSeek did this time was come up with an unbelievably radical way of reinventing the architecture on the training side of the chips, which has largely been ignored up until now. Now, why I think this paper, which is part of a whole, as I say, seven veils, is significant, is that it's setting things up for the release of R2, which, as I say, is probably going to happen at the start of the next Chinese year. We had another indication, for instance, in mid-December, which only the geeks really picked up on: DeepSeek produced something called, I'll get it right, V3.2 Special, which was essentially a standalone mathematical model, and it basically went to the top of the benchmarks. Not every one of them, but nearly every one of them it went straight to the top of. So what we're seeing is that once you put all these seven veils together, and I'm not going to bore you with all the acronyms as to what each of those veils constitutes, and then tie them up in what is going to be coming out, I suspect, in the middle of February, that is going to be monumentally significant, because DeepSeek is going to, I think, in most areas, go very near to, if not to, the top of every benchmark that counts. And, you may not know this, but the name of the game is the number of parameters.
They'll have in excess of one trillion
parameters. Um they will have this
what's called uh mixture of experts
structure and they will have MLA which
is the the
paper that we've just been talking about
dropped on New Year's Eve. Um, and
they'll put all of these together to
create an unbelievably powerful model
that is powerful both on the training
side and on the inference side and uh I
think intentionally um having even
greater effect than than R1 when it was
released a year ago.
>> So there are two things, training and
inference. Take the old-school example of
creating a computer that can beat human
beings at chess, which has existed for
close to 30 years now. Training is the
process of getting the computer to learn
chess and develop its strategies.
Inference is: OK, you're playing Garry
Kasparov now, you actually have to run.
Western, US, models of training have been
extremely capital intensive, and the
Chinese models appeared to be far less
capital intensive; that's why Nvidia
crashed 17% when the DeepSeek news came
out in late January 2025: oh my god, they
don't need to spend that much on Nvidia
chips. I know there was some doubt about
that. Michael, is it really true that
they only spent a tiny fraction on
Nvidia chips?
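The training/inference split described above can be sketched with a toy model. This is a minimal example of my own, nothing to do with any real LLM: training makes many passes over data to fit weights, the capital-intensive phase, while inference is a single cheap pass with those weights frozen.

```python
# Toy illustration of training vs. inference (illustrative only; real
# LLM training differs enormously in scale and method).

def train(data, lr=0.1, epochs=200):
    """Training: fit a 1-D linear model y = w*x by gradient descent on
    squared error -- many passes over the data, adjusting the weight."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # derivative of (w*x - y)^2 w.r.t. w
            w -= lr * grad
    return w

def infer(w, x):
    """Inference: answer a query with the frozen weight -- one multiply."""
    return w * x

# "Learn chess" (here: learn y = 3x), then "play Kasparov" (answer a query).
weights = train([(1.0, 3.0), (2.0, 6.0)])
prediction = infer(weights, 4.0)  # close to 12.0
```

The asymmetry is the point: all the cost sits in `train`, while `infer` is trivial, which is why training and inference chips can be optimized so differently.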
>> I saw a paper yesterday that said
DeepSeek actually spent 1.5 billion on
that first R1. And even if it did:
OpenAI's own equity contribution towards
Stargate alone, not counting
co-contributions from other players, is
19 billion. And yet what DeepSeek came up
with is far more radical than anything
OpenAI has ever come up with, I mean on a
scale of 20 or 30 times, and it did so
for, let's take it to the worst possible
extent, 1.5 billion. I don't think it was
anything near 1.5 billion, but I'll
accept that forecast.
>> A lot of money, but a tiny fraction of
what the US companies are spending. A
tiny fraction.
>> A tiny fraction. So the point is that
once R1 dropped last year, they gave the
model to Nature in London, one of the
most prestigious journals in the world.
Over eight months, Nature tested that
model, and in their September cover issue
last year they came out and said that
every claim DeepSeek made as to what R1
could do was verified.
Now, since then, as I say, they've
dropped a number of other upgrades, and
you've had to be really buried in the
whole process to see each of these
incremental upgrades. I mentioned the one
in December regarding the mathematical
capabilities. But I think what's
happening now is that there is a
cumulative effect of all these upgrades
that is going to be rolled into R2, and I
don't know how much it will have cost
them to get to that spec. I really don't.
I'm somewhat skeptical of the claim of
1.5 billion, because I'm not sure the
hedge fund that owns DeepSeek had that
sort of money to spend, unless someone
was handing them money, you know, through
the back pocket. But in a way it's
irrelevant, because none of the claims as
to what these models are capable of doing
is being disputed.
They're all showing up in the benchmarks,
and they're all being subjected to
unbelievable peer review, and anyone
who's looked at that paper from the 31st
of December last year has come back and
said, "Yep, their claim is absolutely
spot on. They've done it." They found a
way around this whole problem that we've
all been facing for a long time, which is
called catastrophic forgetting, where
you're training a model, you get up to a
certain level, and then you add more data
into it and it just forgets everything it
has already learned. They've essentially
created a very stable way for the model
to accumulate and sort information and
keep it properly organized, so that they
can continue to add what we call scale
data into that model. The result is they
now have a very clever, stable way to
grow the database of a model. I think
it's pretty radical, to be perfectly
honest, and there are a number of geeks
out there, and I'm not a geek, I'm not a
techie, but I've read the papers that I
can understand, and virtually all of them
have pretty much confirmed what DeepSeek
is claiming.
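For readers who want the flavor of catastrophic forgetting, here is a deliberately tiny sketch. It is my own construction, not DeepSeek's technique, whose actual stabilization method is far more sophisticated: a one-parameter model naively trained on conflicting new data overwrites what it learned, while replaying old data alongside the new preserves a stable blend.

```python
# Minimal demonstration of catastrophic forgetting and the simplest
# mitigation (experience replay). Illustrative only.

def train(w, data, lr=0.1, passes=100):
    """Gradient descent on squared error for a 1-D linear model y = w*x."""
    for _ in range(passes):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

task_a = [(1.0, 2.0)]  # "old knowledge": y = 2x
task_b = [(1.0, 5.0)]  # conflicting "new data": y = 5x

w_a = train(0.0, task_a)                # learns task A: w ~= 2
w_forgot = train(w_a, task_b)           # naive update: w ~= 5, task A erased
w_replay = train(w_a, task_a + task_b)  # replay old data: a stable compromise
```

The naive run ends with no trace of task A; the replay run settles between the two tasks instead of forgetting, which is the behavior, stable accumulation of knowledge, that the DeepSeek paper claims to achieve at scale.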
>> Later on I've got some potential
pushbacks on your thesis, but first I
want to get into the nitty-gritty, where
you're not a tech geek but you've kind of
become a little bit of one. You say there
are three assassins of the US AI
architecture.
>> Oh, let's just call it chips
generally.
>> Chips. Chips generally. Yeah. And
potentially the bubble in AI in US
private and public markets. The three
assassins, you say, are physics, material
science, and economics. Let's begin with
physics. Why is that an assassin of
chips?
>> Well, you get down to a certain
level where essentially the physics by
which the chip operates becomes unstable.
You have, basically, switches that are
either on or off, but the electrons that
control those switches are so small that
they are able to slip through, and
essentially turn that switch into not an
on or an off but a maybe, and that starts
to call into question the robustness of
the whole model. When you get down to
these incredibly small scales, the
material, which may look like a piece of
steel or hard silicon to you, actually
has little gaps in it, and the electrons
find their way through. The expression
used is "like ghosts through a wall":
they move through to the other side and
turn that particular switch into a maybe,
which really starts to question at what
point you can continue to miniaturize
everything and still have the security of
knowing that when I want that switch to
say off, it says off. It doesn't say
maybe.
>> You're saying that chips are coming up
to a limit, a theoretical limit. Chip
sizes are getting so small that electrons
are going crazy in there, and it gets so
hot that a lot of the chip has to be
devoted to cabling to control the thermal
output, so it doesn't overheat and ruin
the chip. And I will note, Michael, not
you, but there have been haters of
Moore's law over the past 20 years who
have said Moore's law is dead; that this
drumbeat whereby every two years the
number of transistors we can put on a
semiconductor roughly doubles, which has
held true since the 1960s, is no longer
going to happen. And I want to say,
Moore's law
refers to the number of transistors in a
chip. It does not refer to actual compute
power. Actual compute power has way more
than doubled over the past 15 years
because of Nvidia and parallel computing,
the fact that all the wires are firing at
the same time, and Nvidia invented that.
So Jensen Huang, CEO of Nvidia, has said
Moore's law is dead in the other
direction: computing power has way, way
more than doubled every two years because
of that. But you're making the critique
that finally Moore's law really is going
to be dead, that you can't double the
number of transistors every two years;
things are simply getting too small.
Nodes went from 130 nanometers to 50
nanometers to 7 nanometers, and now the
latest Nvidia Blackwell is 3 nanometers.
Obviously it can't be 0 nanometers or
negative nanometers, so there's not a ton
more juice to be squeezed out of that
lemon.
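The "ghosts through a wall" effect Michael describes has a standard textbook form: for a simple rectangular barrier, tunnelling probability falls off as exp(-2κd) in the barrier thickness d. A back-of-envelope calculation, with illustrative numbers not taken from the essay, shows why a difference of a few atoms matters so much:

```python
import math

# Square-barrier tunnelling estimate: probability an electron "ghosts"
# through an insulating layer of thickness d and barrier height dE.
#   T ~ exp(-2 * kappa * d),  kappa = sqrt(2 * m_e * dE) / hbar

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
M_E = 9.1093837e-31   # electron mass, kg
EV = 1.6021766e-19    # joules per electron-volt

def tunnel_probability(thickness_nm, barrier_ev=1.0):
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

t3 = tunnel_probability(3.0)      # ~1e-13: effectively no leakage
t1 = tunnel_probability(1.0)      # ~1e-5: noticeable
t_half = tunnel_probability(0.5)  # ~1e-2: the switch starts saying "maybe"
```

Shrinking the layer from 3nm to half a nanometer multiplies the leakage by many orders of magnitude, which is why "a few atoms thick" is where the off switch stops reliably saying off.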
>> We talk of diseconomies of scale.
There are essentially diseconomies of
physics and diseconomies of material
science. At some point, and you can't
divorce this from the cost of achieving
it, it becomes prohibitively expensive,
without much uptick in what you
mentioned, compute, to move from 3nm to
2nm, and complications start arising in
physics and in material science. And one
cannot leave aside the whole issue of
economics, because ultimately that comes
in, and that's probably where I come
from, and spoils everything. So we're
seeing the diseconomies of physics, we're
seeing the diseconomies of chemistry, and
yes, there are potential workarounds, to
use that wonderful phrase, but they are
unbelievably expensive. One critical
player to ask in this whole process is
ASML in the Netherlands: can you carry on
making your EUV machines such that you
can actually start producing 2nm chips?
And they will say: we can, but it's
complicated, and not just complicated,
almost prohibitively complicated. So
people have talked
about changing from silicon to something
else. There are all sorts of areas, and
one of the most interesting potentially
is photonics, though I should hasten to
add that China probably leads in the
whole area of photonics, which is the
idea of embedding data in light itself, a
completely different way of thinking
about chips. It's moving into a whole
different space, and it's still five,
six, seven years away from anything
remotely practical. But in terms of
research, China probably leads photonics
at the moment. It's not an undisputed
claim, but it's a probable one. That is
essentially saying: forget silicon, we're
going to move to a post-silicon world.
But while we still live in that silicon
world, we are seeing these diseconomies
of physics, diseconomies of material
science and diseconomies of economics,
and to some extent these three assassins
are working together, not consciously,
obviously, but there is a sort of strange
cooperation happening between all three.
And I'm not going to say that Moore is
dead, or Moore's law is dead,
but he's on his deathbed and these three
guys are standing around that deathbed
sort of rubbing their hands saying, you
know, your time is up, mates. And what
the Chinese are saying is: well, listen,
we're not going to have a gunfight at the
3nm Corral. It's just becoming
unbelievably expensive to play that game,
and we're not going to win, because we
just don't have the EUV machines from
ASML to be able to play in it. Let's move
the battlefield; let's fight this war in
another space, using other mediums. And
this is why this so-called SiP,
system-in-package, these cognitive
towers, is now becoming something that
pretty much everyone, including Nvidia,
is now pursuing.
>> So,
Michael, the game at which US chip
makers, primarily Nvidia, have excelled,
dominated and crushed the opposition is
making chips smaller and smaller and more
efficient. You're saying China is
refusing to play that game any more. And
the advances in computing have mostly
come from making chips smaller and
smaller, plus parallel computing, of
course, but you're saying that in the
future the gains are going to be made
from connecting the architecture,
connecting the chips themselves,
something called advanced packaging,
allowing the chip to be 3D, and from the
whole system, so the chips can talk to
each other, because the most powerful
individual chip is no longer, you're
saying, going to be the driver of
effective, usable compute, which is what
it's all about. And you noted in your
piece that the former Google executive
Eric Schmidt said that the constraint on
AI is not chips, it is power and
electricity.
>> And that's true. You see the amount of
power that's going to be required to run
the likes of Stargate, and you've seen
the pictures, the maps of where all the
data centers are being put up across the
United States. You don't live in Northern
Virginia, but if you did, you would be
facing some fairly severe power shortages
in the next five years because of all the
data centers, many of them
military-related, that have been and are
continuing to be erected in Northern
Virginia. But there are other areas:
there are five or six of what I call
hotspots all over the United States where
power issues are going to be very
profound. But it's not just about power,
though I completely agree with what Eric
Schmidt had to say. I think that is the
Achilles heel, the black swan, whatever
phrase you want to use, that potentially
threatens the US AI model. And remember
that China does not face this constraint,
simply because they have invested
absolutely massively in renewable, but
not only renewable, energy. This is
allowing them not to think of energy as a
constraining factor at all: they are able
to do whatever they want to do without
having to think about energy. In fact,
the price of energy has actually been
falling for the Chinese. So their
particular model, what we call the
distributed-intelligence model, as
distinct from the concentrated-
intelligence model personified by the
likes of Stargate, is far less
power-hungry anyway, and to the extent
that they need power, they have it. If
you go to a Chinese conference, and it
doesn't have to be an AI conference, and
speak to the geeks there, the last thing
they're going to tell you about is, oh,
we're worried about our power supplies.
The last thing. Go to a US equivalent
conference, and almost the first thing
they'll talk to you about is power.
>> What about the second assassin, material
science?
>> It's somewhat related, and I always
think there's a fairly thin line between
the physics and the chemistry, but
essentially it's about degradation of the
materials. At these incredibly small
levels, with all the heat that you
rightly mentioned, you're seeing the
materials start to break down. They're
starting to corrode, if that's the right
word. It probably isn't in this context,
but it's something you and I can
>> Depreciate. How about that?
>> Well, depreciate, yes, though we're
getting into Michael Burry's language
there. All right, let's call it
depreciate in the sense that they're no
longer useful. If that's what you mean by
depreciate, then yes, I completely accept
it. And Michael Burry will say that a
high-end chip has three years of useful
life. Amazon will claim it's five.
I don't know. Accountants are probably
going to be forced to follow the Amazon
line, but essentially depreciation,
corrosion, happens at these very small
scales, and it creates all sorts of
secondary issues, like the heat you
mentioned, and the result is the chip
becomes less than useful. It starts to
break down. What we call yield, the
number of transistors operating within
the chip at full strength, starts to fall
fairly dramatically. So it's essentially
about the corrosion of the metallic
properties that exist in the chip,
particularly, dare I say, silicon. There
are, again, ways of buying time. There's
a material, I think it's called hafnium
or something like that, that gets coated
onto the chips.
>> Hafnium is one of those periodic-table
metals that you and I never got down to.
>> But nevertheless it can buy a little
bit of extra time. The point is that we
really are fiddling. As I said, these are
life-extension drugs, if you want to
think of it in that context, but it ain't
going to last for long.
>> And this is where we get to that third
assassin, Michael.
>> Can I just add something there? I went
to my essay, and I'm going to read one
sentence, maybe two, because it says
everything I've just said, but very
technically. "Transistors now require,"
and we're really talking here about
materials-science issues, "transistors
now require ghost-proofing, such as
hafnium oxide layers. But by 2nm even
these are just a few atoms thick. One
missing oxygen atom causes a short
circuit, and uneven deposition creates
gaps that invite electron tunneling." So
what I'm talking about here is that we're
reaching the limits of science, both
physics and chemistry, which are starting
to make making things smaller incredibly
difficult.
>> And this is where the third assassin,
economics, really comes in. You
referenced Michael Burry, who has
re-emerged and made the following
critique: the companies spending
massively on these chips put the capital
up front, and that is not recorded as a
loss at all; it's net neutral. The cost
is then depreciated over the weighted
life. So if it had a weighted life of 100
years, every year in the annual report
that cost would only be 1%. If it had a
weighted life of two years, it would take
a 50% hit in the first year and a 50% hit
in the second year. The weighted average
life of chips and certain data-center
investments, as I understand it, had been
three years from around 2019 to 2021. It
was extended to five, or maybe six, years
for some companies. My understanding is
that a lot of that was for old CPUs,
where it was completely legit: three
years was too short, it had been wrong,
and lengthening it was the correct thing
to do. The critique is that for these
newer GPUs, because the transformation
and the innovation are so rapid, five or
six years is not realistic, and the idea
that in five years these chips will still
have serious value is something of a
joke. I will also point out that Michael
Burry is a very, very smart investor, but
someone who had been saying this a lot
earlier than Michael Burry is Jim Chanos,
the short seller noted for shorting Enron
and being early there. Just a plug: I did
interview him in December of last year
about this very issue, so we can link to
that, and people definitely should check
it out.
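The schedule arithmetic above, a 100-year life meaning a 1% annual charge and a two-year life meaning 50%, is just straight-line depreciation. A minimal sketch, ignoring salvage value and other real accounting conventions:

```python
# Straight-line depreciation: the assumed useful life drives how much of
# the purchase hits reported earnings each year. Simplified illustration.

def annual_depreciation(capex, useful_life_years):
    """Each year's charge against earnings under straight-line depreciation."""
    return capex / useful_life_years

capex = 100.0  # say, $100B of GPUs

hit_2yr = annual_depreciation(capex, 2)  # 50.0 -> half the spend each year
hit_6yr = annual_depreciation(capex, 6)  # ~16.7 -> looks far cheaper annually

# Stretching the assumed life from 2 to 6 years flatters each early year's
# reported earnings by the difference:
earnings_flattering = hit_2yr - hit_6yr  # ~33.3 per year
```

This is why the useful-life assumption is the whole argument: the cash leaves either way, but the longer schedule defers the reported cost into years when, per the Burry/Chanos critique, the chips may no longer be worth much.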
And then there's also this advanced
packaging thing. I interviewed a
researcher known as Satrini about it, and
basically it's everything you're saying:
these scaling laws and the improvements
are going to come not from the power of
the chip itself but from the
interconnection and the architecture, so
that you maximize the effective compute.
>> There's nothing I can say to dispute
that. I agree 100%.
>> Michael, this issue has been raised
with Jensen Huang, Nvidia's CEO, and he
has said that every new Nvidia chip gets
far more efficient, and that the amount
of energy it takes per chip goes way
down. To what degree is that a fair
pushback and a justification of Nvidia's
model, or do you find issues with it?
>> Look, I think he is speaking correctly
when it comes to capability. But as you
probably know, the cost of each of those
chips, of each next-generation chip, is
rising in percentage terms faster than
the useful compute those new chips are
producing. An economist thinking about
that basically says we're heading towards
some sort of crisis point, where you
can't, regardless of cost, just keep
improving the chip if the thing you
really want it for, the usable compute,
is not rising at a commensurate rate with
the technology.
And this is where the economics really
does come in. The diseconomies of scale,
derived in part from the physics and
material science we've talked about, are
now starting to weigh very heavily, and I
think that's partly where Michael Burry
is coming from. I haven't seen the Jim
Chanos interview, I'll look it up, but a
lot of other people have said this as
well: we're moving into a world where
that monolithic chip Nvidia has been so
famous for is unlikely to rule the roost
for much longer. And the replacement for
the dominant Nvidia chip is not so much a
new dominant player like AMD, a
competitor to Nvidia, but rather a custom
ASIC chip.
>> So are you saying that the companies
building the data centers and creating
the models are going to be making their
own chips, probably hiring a company like
Broadcom or MediaTek to make them, rather
than just buying a chip from Nvidia or
AMD?
>> I think that's absolutely right. What
Amazon is doing with Trainium and what
Google is doing with its TPUs are
particularly interesting. You can buy
chips in from third parties, yes, but
they're actually doing it in-house. And
the chips they're designing are not, side
by side, as powerful as the ones Nvidia
produces, but they're built for purpose.
They work for what Amazon needs them to
do; they work for what Google needs them
to do. It's a bit like buying, I don't
know, a truly magnificent Mercedes-Benz
with off-road capability when the reality
is you're basically going to drive it
around town. You just don't need the
off-road capability, but it adds huge
amounts to the cost. What Amazon and
Google, and I'm oversimplifying here,
have come up with is a chip that works
for the specific needs they have for it.
Now, a lot of these chips are being used
not just on the training side but on the
inference side, and this is something
which, by its behavior, Nvidia has
started to recognize, moving over to
chips more geared towards achieving
success in inference, but also to the
software required to get the best out of
those chips, which is why they bought
Groq, with a Q.
That was precisely a recognition that
Nvidia is moving from being simply a
supplier to the other big chip-based tech
companies to actually becoming a player
itself. It's building its own stack, from
hardware through to software. And so
essentially it's starting to shoot
itself, I think, in its own revenue foot,
because it's starting to compete with its
best customers. That is something which
can only go on for a certain period of
time. If I were Amazon or Google at the
moment, I would just say to my
chip-development department: full speed
ahead, guys, we can no longer rely on
Nvidia, because they're actually trying
to become a competitor to us. So I think
there's a very interesting low-level
civil war breaking out in the United
States at the moment between the big
players in the world today.
>> What do
you think is going to happen to the US
model providers? I'm not talking about
Nvidia; I'm talking about OpenAI,
Google's Gemini, Anthropic, as well as
the other, let's call them lesser,
players. Where are they going to be in
three to five years, in your view, on the
spectrum from "they have a product, it's
modestly profitable, somewhat of a
success, but not lights-out" to "this
company is not going to exist anymore"?
>> Well, please don't
think I'm trying to be a stock promoter
here, but the model I like most at the
moment is Google's, because, and I think
they're doing something which is again at
the very early stages, they appear to be
essentially courting Apple and bringing
Apple in as an ally. The great thing
about that is that Apple can't be seen as
a competitor from an antitrust
perspective; it is just seen as an ally.
So I like what Google's doing. Gemini is
a very powerful model. They have an
unbelievable distribution capability;
they still technically own Android. And
now they are quietly cozying up to the
other great phone-based software company,
Apple. So I like them: they've got most
of the pieces of the jigsaw puzzle in
place already. They still need to work
hard on all of them, but nevertheless
they seem to be putting it all together
almost better than anyone else at the
moment.
Of course, Nvidia, as I said, has broken
ranks with its old model and is now
trying to do all of these things as well.
But the orphans, and I would think of
Anthropic as an orphan, are going to have
a tough time staying independent. OpenAI
has the likes of Microsoft behind it, and
of course, I suppose, the big Japanese
companies supporting it, but I'm not a
huge fan of OpenAI. I don't think it's
going to be a winner in this setup. I
think they're essentially taking on more
capital cost than they will be able to
generate sufficient revenues from. So
OpenAI has got its work cut out for it to
an extraordinary degree. Amazon is an
interesting player at the moment. It is
doing what Google is doing, the Trainium
chip for instance, but it doesn't have
the consumer reach that Google has; it
doesn't have a browser like Google.
>> Michael, what is Amazon's model?
>> They are starting to make their own
chips.
>> Their own chips, but they don't have a
model. I think they own a little...
>> They don't have a model, absolutely
not. But Amazon is, as I said, an
interesting one, and I'm making a
prediction here, but maybe Amazon will
buy Anthropic and then suddenly jumpstart
its model position. The point is, Amazon
is essentially now offering data centers,
increasingly to third parties, and it's
doing that with its own chips. All I will
say is that it's nothing compared to
Google, which I think is really doing a
great job at the moment, but Amazon has
got an interesting position, and they
would be for me a potential acquirer of
one of the models, the orphan models, as
I like to think of them, that we're
talking about. For instance,
another one out there is Meta. Meta has
had a disastrous year, in my humble
opinion. The whole Llama story was, and
I'm using the appropriate euphemism here,
put out to grass in August last year.
Llama is an orphan now, ironically an
open-source orphan, but nevertheless it
hasn't been improved, not that we know
of, in any material way since August last
year, and Mark Zuckerberg seems to be
going down a completely different path
now. I'm not exactly sure what that path
is, but Meta would be another company
that, for me, on the basis of current
behavior, would be struggling in five
years' time.
>> So Amazon does not currently, I
believe, have their own model, or any
model that is serious. They are a huge
cloud provider; they were the first
really large cloud provider. Microsoft is
now too.
>> And increasingly Google as well.
>> Cloud computing is a profitable
business and is growing rapidly,
particularly now, but how much of that is
because the customers are OpenAI and all
of these other unprofitable AI startups?
Everyone says the demand for compute is
so high, and that is demand from people
becoming customers of data centers, but
how much of it is real and sustainable?
>> You ask a very profound question,
which actually leads us back to Nvidia.
Nvidia might be immensely profitable, and
indeed it is immensely profitable at the
moment, but are its customers profitable?
>> Michael, sorry, an amazing question. I
want to say technically a ton of its
customers are immensely profitable, like
Microsoft.
>> Yeah, but the customers of its
customers are not profitable. And to the
extent that its customers are profitable,
they're not, generally speaking, very
profitable from their
artificial-intelligence activities. To
the extent that its customers might be
doing well: Meta is able to subsidize its
activities in AI because of the
advertising it gets from Facebook. But if
you can compartmentalize it, when I ask
the question again, how profitable are
Nvidia's customers from their AI
activities, it's a much more complex
question to answer. They've got
associated areas which can subsidize
those activities for now, but one of the
interesting things is that we've seen a
lot of the companies, Meta being one of
them, move from being able to finance
their AI activities from free cash flow
to now having to borrow. Again, a slight
warning sign, echoes of 1999-2000. I
don't want to make too great a parallel,
I'm not Michael Burry. Nevertheless,
a warning sign. The amount of debt that's
creeping into the system, both on and off
balance sheet, at the moment should be of
concern. It's particularly of concern in
related areas like, dare I say it, Oracle
and CoreWeave.
>> Yeah. The point is, it's all part of
the ecosystem. So one has to look, to
some extent, at the health of the
ecosystem as a whole, though one can
recognize that there are parts of that
ecosystem that ostensibly are very
healthy at the moment.
>> Yes. And Michael, it has been said by
others, and I have said it too, that the
people spending the money on AI, building
the data centers and buying the chips,
are among the most profitable and largest
companies that have ever existed. I stand
by that claim in its technicality, but I
want to add the caveat that the customers
of those immensely profitable companies,
namely Microsoft, Amazon, and Google, are
often VC-backed companies that are losing
a gajillion dollars a year; that's a
technical term. So the customers of
Nvidia are making money; the customers of
the customers of Nvidia are not making
money.
>> I'm happy to go with your
qualification.
>> And this example you gave of Meta
buying a ton of Nvidia chips to make its
own process better and to serve ads with
AI: that is a somewhat rare scenario. I
think a lot of it is cloud computing that
is profitable while the customers of that
cloud computing are losing a ton.
>> I'm happy to accept your
qualification. No, I'm not going to
dispute it.
>> And so you make a lot of military
analogies, and you basically compare the
US architecture and Nvidia to the German
tanks during World War II, which were
extremely effective tanks. It's just that
the German economy and industrial base
were unable to make enough of them,
compared to the Soviet and American
tanks, which were maybe not as powerful
but could be produced at scale, which
ultimately led to defeating the Germans,
thankfully. Tell us about that analogy.
>> There's a famous, infamous, apocryphal
story of a rather put-out German tank
commander who said, "One of our Tigers is
worth four Shermans. The problem is the
Americans always bring five." Now, that
saying has been
subjected to scrutiny, and it doesn't
hold precise water, but everyone agrees
with the concept: in the end, if you can
mobilize enough materiel, you can
overwhelm people who have one-on-one
better pieces of equipment than you do.
And this is something the Chinese are
essentially doing now when it comes to
chips, and even David Sacks in the White
House has admitted as much: that China
doesn't need our chips, because what they
do is amass so many chips from Huawei
that they can outshoot, in terms of
usable compute, your earlier term, an
Nvidia cluster, which is really what
we're talking about. So the Huawei
supercluster versus the Nvidia cluster:
there are just so many more Huawei chips
in that cluster, and the net effect is
that it outshoots the Nvidia cluster, and
that's what has started to happen
in China. Now, it's not fully
operational at the moment. And this
whole will-they-won't-they story that's
coming out of China at the moment, with
regard to whether they will allow the
H200 or the H100 to be imported from
Nvidia, is part and parcel of this whole
process, caught up in the mix, though
not entirely, because the Chinese feel
they're close. My own estimate is that
come 2028, they will have reached
parity, being able to match, perhaps on
scale if not on quality, anything that
can be thrown up by the likes of Nvidia.
The amount of effort, of money, of
resources being mobilized toward
producing just huge numbers of chips in
China is such that, while it's touch and
go in the comparison today, and David
Sacks may be right, or may be wrong, or
may be technically right but not
practically right, nevertheless in two
years' time he absolutely will be right.
My own view is that what China is
setting itself up to do is to tide
itself over. It basically needs to buy
time, probably two years, and it may
take a dollop of H200s and H100s from
Nvidia for 2026 and '27, but by 2028 it
won't need those chips any longer. And
it's not that they'll be able to produce
better chips; it's just that they're
going to be able to produce massively
more.
>> You have several addenda in your
piece. The first is a story which I
love; it takes me back 20 years to when
I read, as a child, the Indian tale
about the king who said, "I will grant
you any favor," and the man said, "Give
me a grain of rice on day one; on day
two, double it; and then double it again
on day three." And basically, by the end
of the month, or by the end of two
months, it was quintillions of grains of
rice. I love that tale. It's the story
of compound interest. How does it apply
to what we're talking about right now?
>> Well, look, one of the problems that
often happens in the whole area of
talking about AI is that we get drowned
in big numbers. So I essentially wanted
to go to one of the biggest numbers I'd
ever seen, which is the number of grains
of rice that would be on the 64-square
chessboard, and essentially use that to
explain a story. And in fact, if you
count the digits, the number of grains
runs to some 20 digits by that last,
64th square. The point that I was making
here is that, to some extent, the
numbers that are being talked about by
the likes of OpenAI every now and again
start to almost approach the same sort
of level of absurdity. Eventually you
run out. You can't mobilize the amount
of rice that's required to cover that
64th square. I came across a statistic,
and I'm just going to quietly bring it
up, yesterday,
when talking about energy, which we
mentioned earlier in the context of what
Eric Schmidt had to say, and the current
level of energy needs that OpenAI has.
They need to increase their energy
capacity over the next eight years by, I
kid you not, 125 times. Now, these are
the sorts of compounding numbers that
eventually cause me to say enough is
enough. This isn't possible. You can't
carry on like that. And
the whole process starts to come off the
rails. I'm sure that that Indian prince
or king, eventually, when things were
getting to the 24th square, was saying,
"Oh my god, I'm going to bankrupt this
nation." You know, the number of grains
of rice on the 24th square is more than
three years' worth of production. And I
think that that same sort of logic
eventually overtakes the likes of
OpenAI, because if they have to increase
energy consumption 125 times over the
next eight years,
are they mobilizing, or at least is the
system mobilizing, the amount of energy
supply to be able to meet that demand?
It was just a way of playing with the
numbers, the power of compounding, and
to use it in this context to say: I
think the numbers being talked about
here are just off the charts and are not
realistic.
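[Editor's note: the arithmetic in both stories is easy to verify. A minimal sketch in Python; the 125× energy figure is simply the number quoted above, and the annual growth rate it implies is derived from it, not separately sourced.]

```python
# Grains of rice on a 64-square chessboard: 1 on square 1, doubling each square.
last_square = 2 ** 63          # grains on the 64th square alone
total = 2 ** 64 - 1            # grains across the whole board
print(f"64th square: {last_square:,} grains ({len(str(last_square))} digits)")
print(f"whole board: {total:,} grains ({len(str(total))} digits)")

# OpenAI's quoted need: 125x more energy capacity within eight years.
# The compound annual growth rate that implies:
growth = 125 ** (1 / 8)
print(f"implied growth: roughly {(growth - 1) * 100:.0f}% per year, every year")
```

The whole-board total is an 18-quintillion-grain, 20-digit number, and 125× over eight years works out to growing energy capacity by about 83% per year, compounded.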
>> So, Michael, a key point you're
saying is that all the huge AI bulls
say, "Michael, the revenue is going to
grow exponentially." And you're saying,
of course, but the costs are growing
exponentially as well. And that is
something that wasn't true of software.
With software, the revenues grow
exponentially, but the costs stay
somewhat fixed over time. And yes, you
have to depreciate, you give commissions
to salespeople, etc., etc., yada yada.
But software, as it existed in the US
over the past 25 years, is a
ridiculously good business.
>> When Microsoft produces another copy
of Windows, it costs them an
infinitesimally small amount of money.
>> Mhm.
>> So the gap between what they can sell
that for, which is, as you and I both
know, quite a lot, and what it costs
them to produce, gives them spectacular
margins.
Making your point.
>> So, Michael, I now want to get to the
point where I give potential
counterarguments. Some of the
counterarguments are going to be: what
if you're wrong about what you said? But
I'll start out with this: let's say
you're right that the Chinese AI
architecture is going to reach the scale
of the US and soon exceed it, especially
when you adjust for costs and inputs.
Why does that mean automatically that
China is going to dominate AI? You know,
I'm not a tech geek at all.
I'm the furthest thing from it. But I
know that many tech geeks say that Linux
is in many areas much preferable to Mac
or to Windows. People say yes, Android
and all this open-source stuff is much
preferable to the iPhone and iOS. And
you started our conversation by noting
the success of Linux in operating
systems and the success of Android in
mobile. But, you know, Microsoft and
Apple are still enormously profitable
enterprises, and it seems like the power
of American technology companies to
extract monopoly rents, or monopoly-like
rents, from their software has held true
despite the fact that, yes, there are
cheap alternatives. In the case of
Microsoft, maybe it is through somewhat
anti-competitive practices; in the case
of Apple, it's the fact that people just
associate Mac with good things and
therefore use it way more often. Maybe
not in terms of the number of customers,
but in terms of dollars, and
particularly profits, Apple is dominant
there. Why isn't that going to be the
case for AI?
>> Look, it may well continue to be, but
it's a closed ecosystem that we're
talking about, and Apple has got an
amazing mechanism for essentially
extracting rent from what is essentially
a closed ecosystem. But it gets back to
my earlier comment about not just the US
world but the world's world. The wider
world is seeing this differently. And I
just happen to live in one part of the
world that's seeing it differently. And
we're not just talking geopolitics here.
We're talking geoeconomics. We're seeing
Indonesia setting up a sovereign AI
system, which it has announced it will
do. And yes, it will get some technical
support from some of the US players, but
when it comes to the software, it looks
as if it's basically going to go with
Qwen.
And the point being here is that the
software comes, in inverted commas, for
free. Now, we know the reason why the
top 100 supercomputers in the world use
Linux is that Linux comes for free. That
was not the case in the mid-1990s, when
it was basically Microsoft that was
supplying the software to what
supercomputers were around at the time.
But what has happened in the last 20, 25
years is that the geeks who run the
world's supercomputers, and there's
another dimension here, but it's an
incredibly important dimension,
preferred an open-source option which
they could then manipulate in order to
build that supercomputer in the way that
they wanted it to look. And this is
critical. What we're seeing, with the
usability of open source, stroke, open
weight (open weight is not as flexible
as open source; open source is
unbelievably flexible), is that it's
going to win in the wider world.
And if all you're saying is yes, but
Apple will continue to be able to
extract huge profits from its core
markets: by the way, I need to correct
what I said earlier. There are a couple
of extra markets in the Apple world,
which are South Korea and Japan, and I
think I missed them out. But
nevertheless, that's six out of, what
does CNN say, 200 territories and
countries. The rest of the world is
moving very heavily towards using
Chinese software because it's free, and
you know, free is free. It's a very,
very powerful model. The point is that
there are other ways in which people are
going to learn to monetize the language,
and Android is the classic example here.
And it is a language we're talking about
here. It's like English. You and I don't
actually pay anybody a tax for using
English. But by using English, I can go
and make a speech and thereby monetize
my use of that English and give myself
an income. And it's the same thing. It's
essentially becoming a software language
that's available for free.
And this is where I think that the clash
will ultimately arise. Now, what we may
end up with, and I'm not disputing this,
is a bifurcated world, with these
reinforced islands that Apple can
occupy, reinforced islands that US AI
can occupy, six or seven or eight of
them. But the rest of the world, which
of course is growing economically much
faster than those reinforced islands,
will be opting for something else.
>> And where does memory come into this?
I know there is a giant squeeze in
memory. The stocks of these companies,
like Micron or SanDisk, are up, you
know, 300, 400, 500%. It really is
ridiculous. You referenced earlier that
DeepSeek's memory usage, or demands, are
down 93%, from 100% to 7%. How is it so
much more efficient as well?
>> The MLA architecture that they've
come up with, and really, I'm not a
geek, so I'm not getting into the
specifics here, essentially allows them
to pack down the usage of the space far
more effectively, far more efficiently.
So much so that they don't need to crowd
the memory with a hundred; they just
crowd it with seven, if crowd is the
right verb. The point being that memory
suddenly becomes scalable, whereas
previously it wasn't. But I fully accept
that in the US ecosystem at the moment,
memory is a huge issue. In fact, it's
also an issue in the Chinese ecosystem.
I'm not going to deny that. But after
MLA, not to the scale that it is in the
US. Which is why you're absolutely right
to reference the memory suppliers, those
people who provide that capability, the
AMDs of the world. But the point is
that, again, in this bifurcated world
that I've talked about, they're finding
a way of doing things differently, and
their demand on memory, let's give it
three years for MLA to roll out
properly, is going to be nothing like it
is today, compared to what it will be
for the US if the US does not go down
this path. But I have to say, because
MLA is essentially available on an
open-source platform, namely DeepSeek,
most of the US players, the Googles, or
should I say the Geminis, and the
OpenAIs, are going to be looking at what
this MLA is all about and most likely
incorporating it into their own
next-generation software. Because it's
architecture; it's available; it's a way
of doing things. It's not something
that's essentially owned by the Chinese.
Not at all. Anybody can use it. So I
think that the memory crunch which we
are seeing at the moment will continue
to exist for a couple of years. But if I
had to make a forecast, I'd say it
starts to ease within two years' time.
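[Editor's note: the memory claim can be made concrete with a back-of-the-envelope KV-cache calculation. The sketch below uses illustrative layer, head, and latent dimensions, not DeepSeek's actual configuration, and its exact ratio differs from the 7% quoted above; the point is only the mechanism: caching one small compressed latent per layer, instead of full keys and values for every attention head, cuts per-token memory by an order of magnitude or more.]

```python
# Per-token KV-cache size under standard multi-head attention:
# keys AND values are cached for every head in every layer.
layers, heads, head_dim = 60, 128, 128    # illustrative model sizes
bytes_per_value = 2                       # fp16
mha_cache = 2 * layers * heads * head_dim * bytes_per_value

# Multi-head Latent Attention (MLA): cache only a compressed
# latent vector per layer, from which keys/values are reprojected.
latent_dim = 576                          # illustrative compressed width
mla_cache = layers * latent_dim * bytes_per_value

print(f"MHA cache: {mha_cache / 1e6:.2f} MB per token")
print(f"MLA cache: {mla_cache / 1e6:.3f} MB per token")
print(f"MLA uses about {100 * mla_cache / mha_cache:.1f}% of the MHA cache")
```

Because the cache grows linearly with context length and batch size, shrinking the per-token figure is what makes long-context serving "suddenly scalable" in the sense described above.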
>> And so you said that companies around
the world can use this Chinese
open-source architecture, and that
they're not sharing their data with
China. Because I imagine a lot of
companies, and people in particular,
would say, "Okay, I understand if I'm
using all this US technology that all of
these American corporations have my data
and they're exploiting it, and I'm not
happy about it, but it's better than the
Chinese government having it, and there
being some geopolitical fears there."
Maybe I'm wrong about that. But are you
saying that people can use these Chinese
models and not share the data with
China?
>> Yes is the big answer. But I'll take
it one stage further. If you've got it
on your phone, you can be offline and
still use the software. And if you're
offline, there's no way you can be
sharing the data. And that's the
extraordinary thing about the
open-source network: it is very
distributed. I used that term earlier.
They have this phrase, translated into
English, called the edge. And
essentially the connectivity of Chinese
software to the edge, to my cell phone
here, is incredible. And the interesting
thing about that is that one of the
problems that US AI models are facing at
the moment is called data exhaustion.
They're running out of high-quality data
to scrape. Wonderful verb. And they're
looking to create their own forms of
data, artificial data, synthetic data,
which they can do. But the reliability
of synthetic data is questioned. I won't
say it's not worth anything, but it is
questioned, and hallucination, imagining
things, as software sometimes does, is
said to be more likely to happen with
synthetic data than with real data.
>> It makes sense.
>> The point about being connected to
the edge is that the data coming in from
the edge is real-world, everyday data.
And so what's happening is that, to the
extent that Chinese software is updating
itself, which it can do by virtue of
being open source, it's updating itself
with a whole new supply of fresh data
coming in from the edge. Now, on the
availability of that data: there are
some interesting pieces of software
being developed in the US ecosystem at
the moment that potentially can mimic
that. So I don't say that it won't be
possible for US closed-weight models to
somewhat emulate that idea. But as it
stands at the moment, the Chinese just
do it easily. So they basically pick up
on edge data to refresh their models;
fresh air, I call it. That is not
accessible, by and large, to the US
closed-weight models.
>> So it sounds like you think a lot of
US closed-source models, in particular
OpenAI, are going to have a tough time.
Let's put it that way; you don't want to
use the word screwed, but they're going
to have some challenges. What about
Google? You said some maybe nice things
about Google, and you said, "I don't
want to be a stock promoter." I think
with this interview, no one's going to
accuse you of promoting American
securities, at least. But Google, I
believe, uses something called TPUs,
tensor processing units, which they
invented. Google buys GPUs, graphics
processing units, from Nvidia primarily
for its external cloud customers; for
itself, for Gemini, which it owns and
produces, it uses primarily TPUs, and
perhaps exclusively TPUs, as well as
maybe some CPUs. So it's made this
choice. I use Gemini exclusively and I'm
extremely happy with it, and many people
who know a lot more than me say that the
Gemini model is extremely powerful and
good. Does Gemini avoid these pitfalls
of Moore's law and all these scaling
things that you say US AI is plagued
with?
>> Yes. I mean, it's too sharp an
answer, but essentially yes, because the
TPUs are not facing the same sorts of
imperatives: you've got to get smaller
and smaller and smaller. They're
basically saying, no, no, no, just build
me a chip that's a horse for this
course. It's built for purpose, and it
serves a purpose. It's not that
four-wheel-drive Mercedes that only
drives around on city roads. It's maybe
just a four-wheel-drive vehicle that
only goes, you know, in the rural areas,
or it may just be your little hatchback
that runs around in the local area. The
point being that it is not an
all-singing, all-dancing, Nvidia
full-service chip. It's a much more
restricted chip in terms of its
capabilities, but nevertheless, it
serves the purpose that Google needs.
And indeed, I'd say the same with
Trainium for Amazon, recognizing Amazon
is not as full-service as Google is.
But nevertheless, it's interesting to
see.
And what this is doing is it's starting
to violate the most sacred piece of
intellectual property that Nvidia has,
which is CUDA. That's essentially their
moat, as it's called, that protects the
Nvidia chip, and what ultimately gives
it, dare I say, the value that it has.
What you can do now with a much more
limited-service chip, like the TPU being
produced by the likes of Google, is
serve the purpose at hand. Google is not
left saying, "Oh, but we need to be able
to do all of that as well." Or to the
extent that they are, they might buy
those GPUs as an extra. But for the
purpose that they are talking about now,
that TPU serves that purpose.
>> It's amazing. A year ago, people said
Google was going to be dead because of
AI, because it was going to replace
their core business, search. Now, not
only is that probably not true, search
is still very dominant, but it has its
own architecture that in some ways is
better than OpenAI's and the Microsoft
universe's. So, in terms of investing
implications, Michael, how are you
approaching this?
>> Look, if
all my universe was Wall Street, I would
worry about quite a few of the players
of the so-called Mag 7. Number one would
be somebody we've not talked about yet,
although they do have some AI
capabilities, or potential, which of
course is Tesla. But let's leave them
out of the equation; I think that they
are, when it comes to EVs anyway,
yesterday's story. Of the other big
players out there, I worry about
Microsoft, because I think they've got
themselves attached to what I think is
going to end up turning out to be a
dead weight in OpenAI,
which is very sad, because, although I
pay too much money to Microsoft every
year to be able to speak to you on a day
like this, I think Microsoft has done
well. And I think that they are showing
signs that they might be trying to
diversify away from OpenAI as their only
area of exposure moving forward. As I
understand it, they've got a nice deal,
a sweet deal, through 2032 or something
with OpenAI. And I think that they have
basically decided that in that time they
had better come up with something, as a
precaution, that is an alternative to
their exposure to OpenAI. But
nevertheless, I think it's potentially,
short to medium term, a bit of a dead
weight. I think Meta is
lost.
I think Zuckerberg is casting around for
all sorts of ideas at the moment, and I
can't see them putting together a story.
And there's the fact that they've just
lost their best scientist, who just
walked out of the door; you may have
read the Financial Times interview last
weekend with Yann LeCun. He basically
had very few nice things to say about
what's happening in Meta. Now, he's a
Turing Prize winner. He's a difficult
man, very difficult, but he's
nevertheless a genius; there's no
question about it, he's an absolute
genius, and he had some pretty harsh
things to say about Meta. So I think
Meta is a bit of an orphan, and they had
better get their act together quickly;
otherwise, I see Meta in three to five
years' time being a souped-up search
engine attached to retail options that
go into Facebook and the other members
of the Meta family, WhatsApp and the
like. I really do. I mean, their LLM
offering is nowhere.
>> Yes. And actually, I believe that
Meta's model is, as you say, open
source, and so they're distributing it
everywhere, so it's not just at meta.ai.
But I believe the web traffic of meta.ai
was something like only 10 times more
than my podcast. So my podcast gets
one-tenth of the traffic, in terms of
listening time, of meta.ai.
>> Maybe you're being modest about your
podcast. Maybe it's such a high level of
traffic that you've got that poor old
Meta can't keep up.
>> My point is that it's not a lot.
>> It's not a lot. I understand where
you're coming from.
>> Yeah. But no, I don't think Meta has
got it worked out. Losing their way with
Llama, which is what LeCun said in his
interview: Llama was an incredible
option, and then, having been all in on
Llama until about August last year, they
suddenly went altogether silent on it,
and they don't have anything to replace
it with. And obviously they haven't got
a closed-source model, unless they buy
Anthropic. I mean, that's something that
they could do. You know, Anthropic is a
great LLM, but it's a bit of an orphan,
and you're not sure how they're
essentially going to pay for themselves
moving forward unless they have a sugar
daddy like Meta behind them,
>> Right? And Meta does have a hugely
profitable business, and it's investing
in all this AI stuff to make its own
product better, rather than serving the
compute out to clients who are other
LLMs. That's why I think Microsoft and
particularly Oracle might get a little
more screwed than Meta. Because with
Meta, at the end of the day, let's say
you're right, Michael, and Meta loses
tons of money, the depreciation is
immense, and the revenue growth is
simply not there, not enough, and then
the stock goes down 80% like it did
before. I think the playbook is probably
pretty similar to 2022, where it's a
buying opportunity, because it's kind of
like if Coca-Cola were spending billions
of dollars on trying to go to Mars or
something: they'll just eventually stop
losing money and be fine. Contrast that
with Oracle. It's not on its balance
sheet yet, but it's committed to a
quarter of a trillion of lease
obligations for its data centers.
>> I wasn't including Oracle and
CoreWeave in our conversation in the
first instance because they're not in
the Mag 7. But if you were asking me,
given the choice, would I buy Meta ahead
of CoreWeave or Oracle? I would buy
Meta. If that's my menu, I would buy
Meta. And you're damning it with faint
comparison, if that's a phrase that we
can come up with. And you're absolutely
right. I mean, if you want to focus on
where the problems are, if there is a
bubble, and let's not get into that
subject, but if you were on bubble watch
at the moment, you should focus on
Oracle.
>> And I mean, you said you don't want
to use the word bubble, but in the piece
you use "bubble" several times. You say
"bubble, bubble, toil and trouble," and
you say maybe there's a bubble in Oracle
and CoreWeave, which are publicly
traded, and the granddaddy of all the
bubbles, OpenAI, which is private.
>> Look, I did speak about the finances,
particularly those underlying Stargate,
for instance. However, my central thesis
at the moment is that the real bubble is
technological, and the financial bubble
is a symptom of that technological
bubble. And the two are interconnected.
And that's not to say, if one is to
burst, which one is going to burst
first, but the relationship is almost
umbilical, so that if one does burst,
the other one will probably suffer. But
my central thesis in that paper is that
the bubble is technological.
>> Where
does that leave us with the Chinese
investable AI universe? There are
publicly traded securities. Alibaba is
the producer of Qwen, which we
referenced. DeepSeek is a private
company. There is Tencent, which,
interestingly, is in large part owned by
Naspers, you know, the largest South
African company. And then there are tons
of Chinese companies that I've never
heard of, most people have never heard
of, that are up over 100%, and I think a
lot of those companies are suppliers to
the AI infrastructure as well. That
would certainly be what I consider a
high-risk area. But do you have any
bullishness? You have a certain caution,
let's not call it bearishness;
strategists are never bearish, they're
only cautious. A certain caution over US
securities. What about Chinese
securities? Are you bullish or
optimistic about them?
>> I prefer the ones like Tencent and
Alibaba, where there is something else
involved in the company. But when MetaX
soared 700% on its IPO debut, we saw
some unbelievable first-day pops
happening. I'm actually, deep down,
quite concerned, particularly about the
new issues that are coming to market in
China at the moment, and that's both in
Hong Kong and on things like the
Shanghai or Shenzhen indexes. So I'm not
just confining myself to Hong Kong. Be
careful, because I think there is a
bubble potentially forming in some of
these second-tier names. And I don't
wish to sound as if I don't think some
of these second tiers are producing some
great product. If it's chip-related, I'm
probably a little less worried. If it's
just software-related, I'm hugely
worried, because I think they will face
the same pressures, the commoditization
pressures, that I fear for OpenAI, which
of course is moving into providing
hardware as well, in the form of data
centers. But anything that is standalone
and just an LLM at the moment will be
something that I would be very, very
careful about. So Tencent and Alibaba,
absolutely buy. Baidu, potentially,
which is very interesting: it's the main
search engine, and they're starting to
develop their own AI capabilities;
they're to some extent doing what Google
is doing, dare I say it, though it's not
a completely fair comparison. I'm
probably okay with Baidu at the moment.
But some of the more recent issues, with
those sorts of levels of first-day pops,
I am really quite concerned about. So
I'm not a whatever-it-takes commentator
on Chinese stocks.
>> That makes sense.
What about the semiconductor
supply-chain companies like ASML, Lam
Research, KLA? Semiconductors have
historically been a very volatile
industry, you know, bankruptcies and the
like, but more recently these companies
appear to be much more dominant, and in
the case of ASML have a literal monopoly
on certain technologies, and are
therefore extremely profitable. Is your
thesis also a threat to these companies,
in the same way it's a threat to Nvidia
and the Oracles of the world?
>> Well, yes, but perhaps for a slightly
different reason. ASML has got two or
three years of uninterrupted blue sea,
calm sea, ahead of it. But the Chinese
have essentially been able to come up
with their own EUV now, and it's
probably 2028 when it will be
commercially available and being used to
make chips in China. The company there
is called SMEE, and recently it
basically sold off its non-EUV-related
activities and is essentially
concentrating now on its EUV
capabilities. They essentially have, in
what has been called China's Manhattan
Project in Shenzhen, built their own
EUV, which appears by all accounts to
work. Obviously, there are going to be
some teething issues, I suppose, in
putting it together, but as I say,
they're probably going to have it up and
running within two or three years. If
that's the case, then ASML is no longer
the only kid on the block, and that then
starts to cause issues for the likes of
ASML. I think with regard to other
players in the hardware supply sector,
you have to look at it on a case-by-case
basis. See the market that they operate
in, see what the potential competition
might be, not just from China, but in
the first instance from China, before
you comment that, yes, AMD for instance
is super secure. There are all sorts of
other players that are now starting to
come in. I mean, there's a very
interesting company out of China at the
moment called Moore Threads,
interestingly using Moore in the same
sense as Moore's law, and they are
nicknamed China's Nvidia. Now, they are
far from being a serious competitor to
Nvidia at the moment, but the technology
that they're working on is pretty
amazing,
and again, fast forward three years,
maybe four years, and Moore Threads may
not take on Nvidia, but it can start
impacting its margins. And I think
that's the point; that's how
commoditization takes place. It doesn't
necessarily happen on day one, two,
three, or year one, two, three, but
eventually, when there is another kid on
the block, and Nvidia is pretty much a
monopolist in high-end chips, just as
ASML is pretty much a monopolist in EUV
machines, they are no longer a
monopolist and can no longer necessarily
guarantee to extract monopoly rents.
>> You write in the piece that the
hyperscalers, the US data center
companies, are trapped in a prisoner's
dilemma. No one can stop spending,
because then the rivals will surge
ahead, so collective overinvestment
guarantees systemic collapse. Is it your
base case that these companies keep on
spending money into the ground, so that
capex in 2026 is way higher than it was
in 2025, the same way capex in 2025 was
way higher than in 2024, and that at
some point there's going to be, as you
say, a systemic collapse, a giant bust,
where the market and investors and the
world suddenly realize that these
investments were, to put it mildly,
excessive?
>> I wish I could say no.
What you've described, and I didn't
mention it in the essay, has also been
referred to as the Red Queen's race,
from Alice in Wonderland, from Lewis
Carroll, where the Red Queen says, "I
have to run twice as fast just to stay
in the same position." And to some
extent, when I listen to some of the
hyperscalers speaking at the moment
about their capex plans, that seems to
be the logic behind what they're doing:
they can't stop now, because they will
fall over. It was fascinating to look at
what happened after Meta released its
latest set of earnings. I think the
stock went down 20%. And the reason why
is that people were saying, "Hang on a
sec, this is just becoming ridiculous.
Have you seen those capex plans?" There
is a sense that at some point, and one
has to think about who are going to be
the survivors of this, some of the, dare
I say it, weaker players will probably
fall by the wayside. Now, whether that
impacts negatively on those that
continue to operate, I can't say; it
probably does. But there will be
survivors, and one can go back to what
happened in 1999. You know, Amazon saw,
correct me if I'm wrong, its price fall
80% when the dot-com bubble burst, but
it was a survivor. It lived to fight
another day, and it found something
called Amazon Web Services; it found
cloud compute as its new province.
I think that there will be casualties along the way. I think that those that remain as survivors will be negatively impacted, but they will live to fight another day. And it's going to be fascinating to see which are the survivors. And to some extent, your earlier set of questions could essentially have been: so who do you think the survivors are?
And I think that's what, if you're a serious investor in the US markets at the moment, you need to start asking yourself, just as a precautionary exercise, if nothing else: who are going to be the survivors?
>> How are you assessing the odds that, let's say, someone who is equally invested in the Oracles of the world on both sides, the US AI and the Chinese AI, public and private, is going to lose? Based on your thesis, I think you probably believe they're going to lose money on their US investments and make money on their Chinese investments. Are they going to make more?
>> They're going to lose less on their Chinese investments. If you're talking about a broad cross-section on both sides, one that includes the Chinese equivalents of CoreWeave and the Chinese Oracles, there are some potential losers in the Chinese stack, to use the term that is often used in the world of computers, just as there are potential losers in the US stack. Which one is most overpriced at the moment? The US stack. But does that mean that there are no overpriced options available inside China? No.
And secondly, to move into a different space for a moment: if you ask me to invest in Chinese EVs, there are just so many of them out there, and they're all doing spectacularly well in terms of their technological capabilities, but I'm not sure they're all survivors. So I think there's going to be a thinning out in the EV sector, and I think there will be a thinning out, or consolidation, in the AI sector too. But who are going to be the survivors? Well, I've got my own ideas. I mentioned three of them just now. I think Tencent, Alibaba, and Baidu will definitely be survivors. But will they sail through this without any scars on their face?
>> No.
>> All right. Now, I'm going to ask another question, not about the broad ecosystem, but about specific companies. There's a guy named Doug. Doug owns six stocks. He owns three US AI stocks, let's say Nvidia, Oracle, and Microsoft. And then he owns three Chinese stocks: Baidu, Alibaba, and Tencent. Is he going to lose more on his US investments than he's going to make on the Chinese investments, or don't you know?
>> Such a difficult question to answer. I really don't know what his entry price was on all of them. But if we say today's price is the starting price, the entry price, as a way to indulge your question, I would worry more about the US stack than I would about the Chinese stack.
>> Thank you. Well, Michael, it's been an enormous privilege to hear your views on this. You've put a lot of work into it. We will be attaching your paper, which can be read in full on your website; you also posted it on LinkedIn. Please just tell us a little bit about some of the work that you do at Kaskazi Consulting. You were a strategist for an investing firm for a long time, and your main focus most of the time is not AI. Other than this, what are you focusing on, and what kind of work are you doing?
>> I sometimes feel as though I'm a sort of DeepSeek, in the sense that I'm not out to make lots of money at the moment. I'm just out to try and understand what's happening. And I get invited to speak at a lot of venues, some of which I do remotely, like I'm doing with you today, others of which I do in person. I have to say I say no to quite a few of the invites, because I simply don't want to crowd my life too much. So consider yourself honored. No, I don't mean that. But you understand what I'm saying.
>> No, I am. I am. But I do speak, and I do write. I'm a one-man band, and so my research goes where I want it to go. I'm not driven by somebody who says, "Right, I want you to look at this subject now, or I want you to look at that subject now." For the moment and for the foreseeable future, I will focus on AI as it manifests itself. And I particularly feel as though I have something extra to add in understanding the Chinese ecosystem. When I speak to really well-informed people of the US AI ecosystem, I'm horrified by how little they know about the competition. I don't know nearly as much as they do about the US AI system, but I know a gazillion times, well, not a gazillion, but a multiple of times more about what's happening in the Chinese system. And for me, the fact that many of them don't really know that Chinese AI software is offered for free, and it's a simple fact, but it's a powerful one, just tells me that they probably don't know what's coming. Which is why, I suppose, I do get invited to speak to people like yourself: because I'm trying to understand what's coming. And I don't think I'm going to get it all right. I don't think I'm going to get it even half right. But I think I'm going to get it more right, partly because I've actually investigated it seriously.
I'm probably going to get it more right than wrong compared to many Western analysts at the moment. So that's probably what I have to offer: I spend my days searching through this material. And if I can leave one piece of advice: if people want to acclimatize themselves and get abreast of what's happening in Chinese tech, they should subscribe to the South China Morning Post, which has the best tech writers in English in the world. Partly because, and many people don't really understand this, Hong Kong, which was at one stage very much the big brother to Shenzhen, is now the small brother, and yet the two are two metro stops away from each other. The real action is happening in Shenzhen, but also in Hangzhou, which is where Alibaba and DeepSeek are based. The people at the South China Morning Post cover this, and I'm not being paid by them to tell you this. All I'm trying to say is, if someone wants to play catch-up in trying to understand what's happening in the Chinese tech space, start reading the South China Morning Post first.
>> Yes, we will leave it there. Thank you, everyone, for watching. Please leave a rating and review for Monetary Matters on Apple Podcasts and Spotify, and subscribe to the Monetary Matters YouTube channel.
>> Thank you, Jack.