Anthropic’s $30B Ramp, Mythos Doomsday, OpenClaw Ankled, Iran War Ceasefire, Israel's Influence
How many PRs you think are going to get
pushed to the core structural internet
in 100 days? What's the over/under
number? 'Cause I'll give you a number.
>> You're going to say zero. My answer
to that is
>> I'll say like 10,000. But it's going to
be immediate.
>> If it prevents your browser history from
being released to everybody in the
world, Chamath, that may be something
that you're willing to, you know, let
100 days pass on.
>> I think you got Chamath's attention when
you said browser history.
>> What about the dick pics?
>> Chamath is going to release them
himself.
We'll let your winners ride.
>> We open source it to the fans and
they've just gone crazy with it. Love
you.
>> All right, everybody. Welcome back to
the number one podcast in the world.
David Freeberg is out this week. But in
his place, the one, the only,
our fifth bestie, Brad Gerstner. I
mean, why don't you ever put a
little namaste in your payday anymore?
You used to...
>> I'm going to bring back the greatest
moderator, but now it's just kind of...
You know what? These guys beat me up.
They beat me up and they just beat the
joy out of me doing this program.
>> It's because you're a Roana apologist
now.
>> No, I... We'll get into it. Okay. Save it
for the Roana apologist. Just
because I said like, "Hey, they've
stopped maxing and they've
started doing like some logical things."
Uh, yeah. Okay, here we go.
>> It's great to be here. Great to be here.
>> Good to have you. Good to have you here.
And of course, uh, we have David Sacks
back. Everybody wants to hear from David
Sacks. We missed you last week, bestie.
>> We didn't beat the joy out of you. We
just try to beat some of the hot air.
Turn
>> Any fluff that you can put on the
show that just involves you talking and
saying nothing...
>> That's the stuff we got.
>> Turn up. Yeah. Turn up. Okay. Yeah,
we'll cut it right out. Um we'll cut it
out and we'll just put a promo in for
the syndicate.com. Thank you. Also with
us, Chamath is here.
>> How's your maxing going since
last week? Did you have a full
maxing weekend? Did you have a good
full weekend of just smoking cigars on
the back deck and not ruminating about
all the chaos you've caused in the last
20 years?
>> I think I've done generally more good
>> than than not.
>> Oh, you have. But there's been some
chaotic moments. Don't think about it.
>> You can't, bro. You can't have ups
without downs, man. It's like, what are
you there to do? Just, like, placate
everybody and be a loser? Are you there
to be a winner? Yes, you're in the
arena, but have you stopped going to
therapy after realizing ruminating?
>> What's up with this, uh, sudden interest
in maxing? Are you, like, the
vicar of maxing?
>> No, the world finally caught up with me.
That's it. What do you... I mean, I've been
maxing this whole time. They just
didn't have a name for it, guys.
>> Wow. Okay. Eli's videos are really good.
I watched two more this week.
What take us through what's so appealing
about not ruminating, smoking a cigar,
and just living your life?
>> Because what he says actually works at
every level of society and every sort of
thing that you may want to achieve. Even
if you're trying to like climb the
rungs,
you very quickly learn that the more you
want something, the less you're going to
get it. And I think that's like his real
message is let go, live life, and just
try stuff or don't try stuff. And I
think that that detachment is really
healthy for people. I like it. I like it
a lot.
>> Who's the guy who says this? I actually
didn't know.
>> Elisha Long. Well, Eli, I think, is how
he goes by.
>> But he's fantastic. He has a YouTube
channel.
>> Marc Andreessen found him, and he's
like, "This guy is the new guy."
>> Modern-day philosopher. He gives you a
road map for how to live your life,
right? A new age sage.
>> What's the name of the guy? The
character's name from Dune?
>> I was into girls' books. I was dating
girls.
>> He's the Lisan al-Gaib of the modern
internet.
>> This is why we need Freeberg here, to
explain these deep holes. All right,
listen. We got a lot to get to.
The basic point is: build something and
don't ruminate. Okay, ruminating is just
not worth it. Everybody just go for it.
>> No, just do stuff. Stop blathering in
your own head. Just do stuff.
Absolutely. All right. Listen, speaking
of doing stuff: Anthropic is withholding
its newest model, Mythos (I'm using the
Greek, uh, pronunciation), saying it is
far too dangerous for any of us to have
access to it. According to the company, the
model autonomously found thousands of
vulnerabilities, including bugs in every
major operating system and web browser.
This, uh, little study they did included
20-year-old exploits that had been
missed by security audits for decades.
Uh, some examples: they found a 27-year-old
vulnerability in OpenBSD, used in
firewalls and critical infrastructure.
They found a 16-year-old bug in FFmpeg
that was missed by automated tools after
5 million scans. The Linux kernel, all
kinds of, uh, bugs they found. They
released a hype video explaining why they
were not going to share this model.
Here's Dario. Come on the program
anytime, brother.
>> But as a side effect of being good at
code, it's also good at cyber.
>> The model that we're experimenting with
is by and large as good as a
professional human at identifying bugs.
It's good for us because we can find
more vulnerabilities sooner and we can
fix them.
>> It has the ability to chain together
vulnerabilities. So what this means is
you find two vulnerabilities, neither of
which really gets you very much
independently, but this model is able to
create exploits out of three, four,
sometimes five vulnerabilities that in
sequence give you some kind of very
sophisticated end outcome.
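The chaining idea described here can be sketched as a toy search problem. This is purely illustrative: the vulnerability names and the capabilities they grant below are hypothetical, not anything Anthropic has disclosed.

```python
# Toy model of exploit chaining: each vulnerability alone grants only a
# small capability, but a sequence of them can reach a high-value goal.
# All names and capabilities below are hypothetical, for illustration.

vulns = {
    "info_leak":      {"needs": set(),               "grants": {"heap_layout"}},
    "type_confusion": {"needs": {"heap_layout"},     "grants": {"arbitrary_read"}},
    "bounds_bug":     {"needs": {"arbitrary_read"},  "grants": {"arbitrary_write"}},
    "sandbox_escape": {"needs": {"arbitrary_write"}, "grants": {"code_exec"}},
}

def find_chain(goal: str) -> list[str]:
    """Greedily chain vulnerabilities until the goal capability is reached."""
    have: set[str] = set()
    chain: list[str] = []
    progress = True
    while goal not in have and progress:
        progress = False
        for name, v in vulns.items():
            if name not in chain and v["needs"] <= have:
                chain.append(name)
                have |= v["grants"]
                progress = True
                break
    return chain if goal in have else []

# No single entry here is an exploit on its own; the sequence is.
print(find_chain("code_exec"))
# → ['info_leak', 'type_confusion', 'bounds_bug', 'sandbox_escape']
```

The point of the sketch is the structure, not the security detail: three or four individually low-value bugs compose into one end-to-end exploit, which is what the engineer is describing.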
>> All right, Brad, uh, by the way, that
set they're using there, that's the same
room those guys play Dungeons and
Dragons in every Sunday. Brad, you're
Brad, you're an investor in this
company. Is this virtue signaling or is
it reality? Is this a good move by them
to not release this model and be
thoughtful, give it to a handful of
people and just find all the bugs it can
before releasing it to the public? And
we've got a lot more issues to discuss.
>> I actually think they deserve a ton of
credit here, and let me walk you through
why, right? The company could have
just released Mythos and broken a lot of
core things on the internet. Oftentimes
in Silicon Valley, we say move fast and
break things. In this case, it means
just releasing the model to move further
ahead of your competition. But here the
company realized it would wreak havoc.
They ran their own vulnerability
testing. They saw that it would allow
offensive hacking and people to expose
browsers and browser history, expose
credit cards, you know, on the internet.
So, you know what I like about this is
they didn't need government to hold
their hand on this. We have plenty of
government regulations. They know it's
in the best long-term interest of the
company and the industry, you know. So,
they set up Project Glass Wing. It's an
AI-driven, you know, kind of cyber
coalition. Apple, Microsoft, Google,
Amazon, JP Morgan, 40 of the most
important companies. And their goal is
very simple: let's spend 100 days using
advanced AI to find, fix, and
harden these software vulnerabilities
before hackers exploit them. Now, what I
think this represents, Jason, is a
threshold that we're crossing. Mythos
and Spud, which is going to be out from
OpenAI any day now, which is the first
Blackwell trained model at OpenAI. They
represent the beginning of what I would
call AGI models. These are models with
massive step-function improvements in
intelligence. Um, and they're just too
smart to be released immediately,
you know. And by the way, there was
nothing that said that every time you
finish a model, you've got to
immediately release it GA. So they set
up this idea of sandboxing, building
defensive alliances,
you know, in order to move away from
that regime. I think it shows, and
Sacks and I have talked about this a
lot, so I'm interested to hear what he
thinks. It shows you can trust the
industry and market forces in
coordination with the government. They
were talking to the government about
this. But they're not relying on some
top- down regulation in order to do
this. They laid out a blueprint that
seems to me very pragmatic that now that
we're at this threshold, we're going to
sandbox these things. I think that
OpenAI will end up doing the same thing. I
think Google will end up doing the same
thing. It's an aggressive way to keep,
you know, the pressure on and
win the race in AI while making the
tradeoffs to protect safety. So, you
know, I think you're always going to
have to make these trade-offs. I think
in this case, it was a great move by
Dario and team and I think they deserve
a lot of credit.
>> Sacks, when you look at
this, we had Emil Michael on the program
a couple weeks ago. It might have been
four or five weeks ago, and we had a
very thoughtful discussion about, hey,
if the government is going to have these
tools, you know, and Anthropic wants to
withhold them, and, you know, what is the
proper relationship there, you have to
think that the government, and I know
you don't speak for all parts of the
government. If you were just going to
run through the game theory, they must
have gone to the government and said,
"Listen, this thing is so powerful, it
can put together two or three hacks,
create a novel attack vector, and this
is incredibly dangerous. What if China
has it? And if this thing is as powerful
as Daario says it is, then this is an
offensive weapon as well for us to take
out, let's just pick, you know, uh, a
pressing issue: North Korea's
ballistic missile program. The way it's
being described, this is equivalent to
the Manhattan Project, perhaps. So
what are the chances, two-part question
for you, Sacks, that China already has
this and is using it? And do you think
Dario is doing the right thing by
regulating themselves?
>> I think Anthropic has proven that it's
very good at two things. One is product
releases. The second is scaring people.
And we've seen a pattern in their
previous releases: at the same time
they roll out a new model or a new model
card, something like that, they also
roll out some study showing really the
worst possible implication of where the
technology could lead. We saw this last
year about a year ago. They rolled out
this blackmail study where supposedly
the new model could blackmail users.
There's been a whole bunch of these
things. Actually, I went back to Grok
and I just asked, "Hey, give me examples
where Anthropic has basically used scare
tactics." And it's a pattern. Okay,
it's a pattern.
>> Okay,
>> these guys, I'm not saying it's not
sincere, but they have a proven pattern
of using fear as a way to market their
new products. And if you think back to,
again, my favorite example is this
blackmail study where they prompted the
model over 200 times to get the result
they wanted. And that result was
clearly reverse-engineered, and it got
them the headlines they wanted. And I
would say the proof that it's reverse
engineered is we're now a year later.
There's a bunch of open- source models
out there that have the same level of
capability that that anthropic model
had. And have you seen any examples of
blackmail in the wild? I don't think so.
So in other words, if that study were
true in the sense of being a likely
outcome of that model, I think you would
see examples in the wild of that
behavior. And we haven't seen any of
that in the past year. Now, let's talk
about this specific example with cyber
hacking.
>> I actually think that this one is more
on the legitimate side. I mean, look,
the reason why I bring this up is
anytime Anthropic is scaring people, you
have to ask, is this a tactic? Is this
part of their Chicken Little routine? Or
is it real? You know, are they crying
wolf or not? I actually would give them
credit in this case and say this is more
on the real side. It just makes
sense, right? As the coding
models become more and more capable,
they're more capable of finding bugs.
That means they're more capable of
finding vulnerabilities. And like one of
their engineers said, that means they're
more capable of stringing together
multiple vulnerabilities and creating an
exploit. And so I do think that over,
say, the next six months, we're going to
have this, call it, one-time period of
catching up, where AI-driven cyber is
going to be able to detect a whole range
of bugs that maybe have been dormant
over the past 20 years across a wide
range of systems. And so I do think that there is
real risk here. And I do think therefore
that having this pre-release period
makes a lot of sense where they're
giving the capability to all these
software companies that have existing
code bases to use the tool to detect the
vulnerabilities for themselves so they
can patch them before these capabilities
are widely available. And by the way, it
won't just be anthropic that makes these
capabilities available. We know that,
like, let's say, the Chinese open-source
models like Kimi K2, they're about six
months behind. So we have a window here
of maybe 6 months where we're still in
this pre-release period where I think
companies that have large code bases can
get advanced access to this model and uh
I guess open AI is going to release a
similar thing in the next few weeks. I
do think that every company or IT
department or CISO that is managing code
bases should take this seriously and use
the next few months to detect any,
again, like, dormant bugs or
vulnerabilities, and roll out patches.
If everybody does
their job and reacts the right way, then
I do not think it will be the doomsday
scenario that Anthropic is sort of
portraying. But it's one of these things
where the fear might end up being a good
thing in order to drive people to in
order to drive the correct behavior. So
>> sure,
>> I ultimately think this is going to work
out fine, but you do need everyone to
kind of pay attention, use the
capabilities,
>> fix the bugs, then we're going to get
into a big arms race between AI being
used for cyber offense and AI being used
for cyber defense, but it'll be a more
normal sort of period.
>> Chamath, we have, uh, Dario, and, uh,
you know, a number of the participants
here taking this super seriously.
They're making a big statement. Sacks'
take there was very nuanced, I think.
What's your take on how these companies
have it both ways? Hey, this shouldn't
be regulated; this should be regulated. If
this is in fact a cataclysmic, oh my
god, they're going to hack everything.
What if the Chinese have this right now?
That would speak to more government
either coordination, regulation, or some
kind of relationship between the CIA,
the FBI for domestic stuff, and these
companies because there it is a non-zero
chance that the Chinese have an equal
capability here. We're assuming they're
behind, but who knows what they're doing
behind closed doors. So, what's your
take on this? Is it uh The Boy Who Cried
Wolf, or is this the real deal? Now,
>> I think it's mostly theater.
>> Okay.
In February of 2019,
when Dario was still at OpenAI,
they did the same thing with GPT-2.
That was a 1.5 billion parameter model,
which sounds like a total fart in the
wind in 2026. But at that time, this 1.5
billion parameter model was supposed to
be the end of days. And it was supposed
to unleash this torrent of spam and
misinformation. And that was the big
bugaboo at the time. And so what
happened? They went through this
methodical rollout over six or nine
months. They started releasing the
smaller parameter models and then they
scaled up to the big 1.5 billion
parameter model. And at the end of it,
it was a huge nothing burger.
If you actually think that Mythos is
capable of doing what it says it can do,
two things are true. One is a very
sophisticated hacker can probably do
those things right now with Opus.
And two, if these exploits
are this easy to find,
whether you use Opus or whether you use
Mythos, the reality is you'd have to
shut down the internet for about 5 years
to patch them all. So when you see,
like, a large multi-trillion-dollar gang,
it's a bit of theater. Why? What do you
think they can actually accomplish in 2
months? Do you actually think that if
there's these vulnerabilities, it's all
going to get fixed? Let's give them six
months. Let's give them nine months. But
the reality is that capitalism moves
forward, the funding needs moves
forward, and the need for these guys to
build adoption moves forward. And that's
going to supersede
what this is. So I do think that Sacks is
right that they have figured out a very
clever go-to-market muscle here, a
go-to-market motion that activates
hyper-attention and hyper-usage, and so I
give them tremendous credit, and I'll
maintain what I've maintained before.
Anthropic is shooting the lights out
right now. This is like Steph Curry
going bananas from everywhere on
the court. These guys are chucking
threes.
>> It's all net. Okay. So huge kudos to
Anthropic,
but we've seen it before. We saw it when
these folks were the principal
architects at OpenAI, and we're now seeing
the same playbook here. I think we'll
look back and I think what we'll say are
these two things. One is if we're really
going to patch all these security holes,
we need to shut down the internet
for some number of years, honestly,
literally years. And the second is an
advanced hacker can probably do this
today with Opus if they really wanted
to.
>> Okay. Hey, Brad, I gotta get you in
here for the last word. I'm
going to go with: yeah, maybe they did, uh,
cry wolf before, but based on what I see
with these models advancing and using
them and I'm using a lot of the open
source ones right now from China. I
think that this is like a code-red kind of
moment. This is DEFCON. Like, we should
be taking this deadly seriously, and I
think these companies have got to coordinate
with the CIA, and this is, uh, equally a
defensive and an offensive opportunity. Do
you think this
>> you're asking for the nationalization of
AI now?
>> No, actually, I don't think it
should be nationalized. Um, although I
did see people sort of insinuating that.
I think these companies need to build a
group, Brad, that works and coordinates
with the CIA. I assume that they're already
doing this. I'm assuming you, Emil
Michael, and, uh, you know, Trump and
everybody have these people in a room,
and that they've been given the DEFCON
level and said, hey, how can our
government use this to stop bad actors?
And this is already being coordinated
with the CIA and the FBI. I am 100%
certain that Dario went to them and said,
look what we found. This is the real
deal. I'll give
you the last word on this Brad since
you're an investor in both companies and
you know them quite well.
>> The Frontier Model Forum, which was put
together in '23, um, is cooperating on
anti-adversarial-distillation stuff
as we speak, right? They don't want to
make it easy, you know. So Google and
OpenAI and Anthropic, they're
coordinating on this stuff. You know,
there are times where I've pushed back
on Anthropic because I thought it was,
you know, perhaps regulatory capture or
something else. This is very different in
my mind, right? Dario could have easily
come out and said, oh my god, we passed
a threshold, we need to have a
government moratorium. Remember,
even our friend Elon called for a
six-month moratorium in 2023 because of
civilization risk. This guy didn't do
that. Instead, he said, "Okay, what
should we do? I'm going to get 40 of the
leading companies together. We're going
to spend 100 days sandboxing,
hardening the systems, and then we're
going to keep pushing
forward."
>> What do you honestly think is going to
get accomplished in 100 days? How many
PRs do you think are going to get pushed to
the core structural internet in 100
days? What's the over/under number? 'Cause
I'll give you a number. You're gonna say
zero. My answer to that is
>> I'll say like 10,000, but it's going to
be immediate.
>> But if it prevents your browser history
from being released to everybody in the
world, Chamath, that may be something
that you're willing to, you know, let
100 days pass on.
>> I think you got Chamath's attention when
you said browser history.
>> What about the dick pics?
>> Chamath is going to release them
himself right now. Chamath's like, "Hey,
Chinese hackers, here are my dick pics.
Please put them out."
>> Oh my god. We have to be out there
complimenting them when they're doing the
right things, and relying on the market
rather than running to the nanny state
and saying, do more of this. So this to
me was just an example of a good
balance. I'm sure we're going to have
plenty of debates about this in the
future. But you know this is one I would
like to see more of.
>> This is why, to use your word, Jake, I
tried to have a more nuanced take:
because we have no choice but to take
this seriously. Whether it's total
theater, whether it's fear-mongering,
and they do have a pattern around this,
we can't take the risk, right? And it
does logically make sense that as these
models become more and more capable at
coding, they're going to get better at
cyber. And there's going to be that one-time
period where you're moving from
pre-AI to post-AI, and you need a patch
for that. So, my guess is we're going to
see a lot of patches over the next few
months. I think that that will resolve
the problem.
I think this is a case where I'm going
to give them the benefit of the doubt.
I think that, you know, I've criticized
him in the past. I think that blackmail
study was embarrassing to the level of
being a hoax, but I think in this case,
I'm going to give him credit and say
that I think that it's legit.
>> So, it's not the Anthropic hoax. This
could be legit. I, you know, looking at...
>> We have no choice but to treat it that
way.
>> Of course. Yeah. I mean, two
things could be true at the same time,
Sacks. They could have used this tactic
before. It could be performative, like
the video with the dramatic music in the
background. It does have a little bit of
drama to it, and the way they presented
it is very dramatic, but it does make
logical sense that the one company that
made the bet on code bigger than anybody
else would be the one who would discover
this quickest. And, you know, 100
days, that's a pretty big
advantage versus the hackers.
But let me get one more point in there.
Chamath?
>> the most important thing that people
haven't talked about here is the amount
of code being pushed right now because
of these tools is 10x 100x in most
organizations. So we need to have this
type of security embedded in these new
coding tools to do it in real time.
That's the opportunity. There should be
real time correcting of this. If this
was real, they picked the wrong
companies. Meaning, there are energy
companies, folks that control nuclear
reactors. There are airplane companies
that are flying hundreds of thousands of
people in what are essentially
manufactured missiles full of, like,
streaming gas, going at 500 miles an
hour. None of those
companies were the ones that were
included in this. And so I think if you
really thought that this was end of
days,
at a minimum we can agree maybe we
should have expanded the circle a touch.
Well, maybe those are customers of the
ones they're including here. Anyway, uh
this is a really important story. We'll
obviously track it in the coming weeks
to see what turns out to be reality. And
uh, Dario, do come on the program at
some point. Hey, uh, Brad, will you get
Dario to come on the program? I've
invited him like three times. I got his
phone number. He's ghosted me. I don't
know why.
>> Wait, he he's ignored you? I get
>> I literally got an introduction from,
like, one of the number-one
venture capitalists in the world. He's
on the cap table very early. He just
won't respond. I don't know why.
>> I would tell you, Dario's podcast with
Dwarkesh, who I think is an excellent
podcaster... I've listened to that three
or four times and taken notes every time.
It is a really exceptional piece of work
by them.
>> All right, let's keep moving. We got a
lot on the docket.
>> You may once again be tarred with your
affiliation with us.
>> Poor you. I mean, I don't care.
Literally, I I've got friends on both
sides of the aisle. I have friends
>> of course you do.
>> Even JCAL.
>> Even JCAL has friends everywhere.
>> Let me ask Brad a question here, just
while we're on the topic of Anthropic.
There was a really interesting story, or
tweet, I guess you could say, by the
founder of OpenClaw...
>> Peter.
>> Peter. Yeah. What's his name? Peter
Steinber...
>> Steinberger. Steinberger.
>> Steinberger. Yeah. Renowned coder,
created OpenClaw, which is kind of the
thing that launched this whole agent era.
In any event, he said that Anthropic was
cutting off his access to... was it to
Claude? Is that the next topic?
>> This is on the docket. It's a little bit
nuanced. Everybody using OpenClaw would
take their $200-a-month subscription to
Anthropic, which is priced on average
usage. OpenClaw is very verbose, and
those people are 100x the usage of the
average subscriber. So Anthropic said,
you can't use your $200 plan, you have
to use the API. You move from the $200
plan to the API, you add a zero to your
token cost, or more. And so they
essentially ankled OpenClaw, and then 10
days later or less, they released, or
announced, their new agent technology,
which is, according to them, a safer,
better version of OpenClaw. So, hey,
all's fair in love and war, and they have
basically shot a huge cannon across the
bow of OpenClaw.
>> Wait, can you just explain that exactly?
So I think you're right that they
systematically copied OpenClaw feature
by feature, incorporated that into
Claude, and then the coup de grâce was
basically cutting off OpenClaw's oxygen.
Can you just explain exactly what they did?
>> Okay, very simply: when
you buy a subscription to these
services, they have blended your usage
across many users. So, you know,
nine out of 10 users use fewer tokens
than they're paying for, and the top
10% use much more. When OpenClaw became
a phenomenon, the number one open source
project in history on GitHub with all of
this usage, people went crazy. And you
heard me talking about how crazy I went
for it. Those people with the $200
subscriptions were using $2,000 to $20,000
worth of tokens. So they said, you can no
longer use your subscription, you know,
either your professional or
enterprise subscription at $200,
and plug that into your OpenClaw. You
now have to go to the API and pay per
usage. So no more, like...
>> Unlimited. If you use Anthropic's own
agent harness, are you part of the
bundled flat rate?
>> You can assume that
that's what they'll do, which if you
were thinking on an antitrust level, might
be token dumping or price dumping. I'm
not saying, like, I'm ratting them out.
>> No, it's like bundling, isn't it?
>> Well, price dumping or bundling. When
you price something under the market
price in antitrust, that would be price
dumping, right? And if you were to
bundle, it would be like the bundling
issue.
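The subscription arithmetic Brad describes, a flat rate priced on blended usage being arbitraged by power users, can be sketched with the rough figures from the conversation. All numbers here come from what was said on air and are illustrative, not Anthropic's actual unit economics.

```python
# Rough sketch of the blended-pricing math described above.
# Figures are taken from the conversation and are illustrative only.

FLAT_PRICE = 200  # $/month flat subscription

def margin(token_cost: float) -> float:
    """Provider margin on one flat-rate subscriber with a given token cost."""
    return FLAT_PRICE - token_cost

# Nine out of ten subscribers use less than they pay for...
typical_user = margin(token_cost=80)    # +120: profitable
# ...but an OpenClaw-style power user burns $2,000+ of tokens,
# i.e. "selling dollars for 10 cents" ($200 for $2,000 of usage).
power_user = margin(token_cost=2_000)   # -1,800: deeply unprofitable

# Blended across 9 typical users and 1 power user:
blended = (9 * margin(80) + 1 * margin(2_000)) / 10

print(typical_user, power_user, blended)  # → 120 -1800 -72.0
```

One 100x power user is enough to flip the whole blended pool negative, which is why pushing those users to metered API pricing (rather than raising the flat rate for everyone) is the economically obvious move.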
>> Critically important: you can use
OpenClaw via the Claude API, and every
company has a right to set the price for
its products. It's just saying that,
under their current regime, they were
selling dollars for 10 cents via
OpenClaw, because these were such power
users, and now they're just saying, we
have to price this rationally, but we're
happy to have you guys use the API. So,
>> okay. Okay. But Brad, when you use the
OpenClaw competitor that Anthropic now
offers,
>> correct?
>> Are they subsidizing that? Are you
paying?
>> We don't know yet because it's in closed
beta. So in other words, what I'm saying
is, if they charge for API usage, right,
for their own first-party agent harness or
system, then that would be apples to
apples. But if
>> if they end up charging the bundled flat
rate, let's say, for their stuff, but
then charge the metered rate for third
party stuff, you could make a bundling
argument.
>> Sure. Sure. And you could say it's
anti-competitive assuming that Anthropic
has dominant market share in coding,
which I think most people would say they
do at this point.
>> And assuming that it's the same product,
I mean, the reason most enterprises will
probably use the Anthropic uh version of
this agentic product is because it meets
all of your security parameters, right?
So, Altimter runs, you know, a lot of
stuff on Enthropic. They're already
integrated within our our data
warehouse, our data lake, things of that
nature. So just letting openclaw loose
on the uh altimeter you know data set
would not be wise and so it's a
different fundamental product.
>> No, I get that, and I think that Anthropic
has a huge advantage, let's say, cloning
OpenClaw and just building it into
Claude. I'm not denying that. To me, that
would be the reason why they don't need
to do price discrimination: because
there's already a very good reason
to use the, let's call it, the bundled
offering on a feature basis. But the
question I'm specifically asking is
whether they're giving themselves a
price advantage, because...
>> I think Brad is giving the most
generous interpretation. You're taking a
more cynical one. I'm with you, Sacks.
I'm 100% on the cynical side. OpenClaw
is so powerful, it's got so much
momentum, that not only is Anthropic
trying to ankle it. I believe when Sam
Altman bought it, well, he
didn't buy OpenClaw itself, he
acqui-hired Peter, I believe it was to
subvert the open-source project and to get
Peter's next set of genius ideas inside
of OpenAI as opposed to letting them go
there. People are going to say I'm a
conspiracy theorist, but this is the
number one focus and let me just give
you a list of who is trying to kill
OpenClaw/compete with them. Obviously,
you have Anthropic, but also Perplexity
Computer launched, which is awesome; I've
been using it. Anthropic has this Claude
managed agents product; they dropped that on
Wednesday, April 8th, uh, yesterday. Uh,
today's Thursday when we tape; you
guys listen on Fridays. And then you have the
Hermes agent, which was released on
February 25th; that's also open source
and very good, so that's in the open-source
camp. Alibaba is coming out with
one that's going to be based on their
Qwen model. Then you have Elon, who said
he's got something called Grok Computer
coming out of Macrohard, which is a play
on words on Microsoft. In addition to
that, Amazon and Apple are preparing,
uh, new releases of their, uh,
assistants, Alexa and Siri. And
then nothing out of, uh, Satya and Microsoft
yet. So the number one goal, I believe, in
the large language model frontier
space is to kill this open-source
product.
>> No, I mean, come on. They're
building multi-function agents that
can move from answering questions to
actually doing something for you. You've
got to do that because that's what
consumers and enterprises want. It
doesn't mean that it's about killing
OpenClaw. This is just an obvious thing
to do, right?
>> But this is a giant movement to stop it,
because this is the equivalent of having
an open-source, Android-like player in
the market, and that could be incredibly
disruptive. I believe open source
is going to win the day on the large
language models and take 90% of the
token usage, and I think the entire
frontier model space could be undercut
by open source. And I think they realize
that SLMs, the smaller language
models that are verticalized now, that
will run on, you know, desktops and
laptops, and are even starting to run on
the top ones, are their biggest
competitive threat, and I hope it
happens. All due respect to your
investments, Brad. I think this
technology and the interface is, uh, you
know... he placed bets, but I think it's
imperative that the agent level, which
is essentially your entire life, you
don't give that to Anthropic. You don't
give that to OpenAI. That's your entire
business, your entire life. It is
foolish for you, Brad, to give your
entire business and all the knowledge
you have to Anthropic through that,
unless you're just doing it to boost
your, um, your investment in those
companies. But I would be very concerned,
if I was you, about putting all of the
knowledge that you've earned over a
lifetime into any of these large
language models.
>> All right, J-Cal, let me ask you... can I ask a question? Thank you for that impassioned monologue.
>> Actually, that was my TED talk.
>> Yes, thank you for that TED talk. I have a yes/no question for each of you. Do you believe that Anthropic has dominant market share in coding right now? Yes or no?
>> No.
>> In coding?
>> Yes.
>> They had the lead... they had the lead, but they're not dominating.
>> I think it's a trillion-dollar market, and these guys have less than 10% of it today. So it's hard to make the case that...
>> What percent of coding tokens do you think Anthropic is providing the market right now?
>> Greater than 50%.
>> Yeah, that's true.
>> Okay, that's called dominant market
share.
>> Uh, I don't know about that.
>> More than 50% of the market?
>> You've got to look at what the TAM is.
>> What the TAM is, right? There are a lot of people who provide, you know... that's the tiebreaker before we move on to the next one.
>> I'm not saying it's a permanent condition.
>> But if you're telling me that today Anthropic is delivering over half of the coding tokens, that's clearly a dominant position in the market for coding. It's an early market. It could change, but...
>> If I were representing them, David, I would say: nine months ago, everybody counted us, you know, out of the game. We were being destroyed by OpenAI. Three months later, people are saying we have a dominant market position. This is the fastest-changing, most competitive market in the world. I think you would be very hard-pressed to walk into, you know, some district court and make the case that these guys have somehow already formed a monopoly against Amazon, Google, Microsoft, OpenAI, etc.
>> Well, I'm not saying it's already a permanent monopoly, but I am just asking about market share. And I do think you guys all agree that...
>> Chamath, go ahead.
>> They probably have 50 to 60% market share, because I think Codex is actually quite broadly used as well.
>> But that belies the more important point, which is that AI-enabled coding, I think, is still 5% of the broad market. So it's kind of a nothing burger. Yes, they're leading, but they're leading in something that isn't that big yet. Now, you would say, how could it not be big? And what I would say is: because most of the stuff that's being written is still blank-sheet, de novo code. And I think the ugly truth is, I don't care what model you have, the long-horizon ability for any of these models to actually build enterprise-grade software is still shit. And that's the actual lived experience. Not for me, but when I call on our customers, half-a-trillion-dollar banks, hundred-billion-dollar insurance companies, none of these guys are like, "Wow, it just works out of the box." It doesn't work. So most of it is still hand-tuned. So until I can honestly tell you that we can point a model at this with the right guard rails, which I can't today, what I would say is: it's a small market that will become large as these models become better.
But we are in a world where we have 50 years of accumulated tech debt as a world. And I suspect when you enumerate the number of lines that represents, it's hundreds of trillions of lines of pretty marginal, mediocre-to-bad code. On top of that, we have all these legacy languages. I'll tell you, one of our customers has to go and get 60-year-old pensioners to come into the office to interpret COBOL. No, I'm not joking. This is a
>> snowball for trend.
>> This is a hundred-billion-dollar-a-year revenue company, and that's how they solve these problems. It's not that Opus just solves it. So I would just keep in mind that most of the tech debt in the world, 99% of it, is still poorly addressed by these models. We are untying this Gordian knot. It's going to take decades to do it right. So all the breathlessness about all this other stuff, I really think it's not where the money is. It's not the big-time stuff. And you can tell me, "Oh yeah, it's going to be the future," and I would say, tell this business that's a hundred billion dollars a year of revenue and 50 million billing relationships that all of a sudden you're going to OpenClaw your way to a solution. That's not to say that you can't have a great chief of staff, and not to say you can't do some useful stuff and trickery and, you know, have a good knowledge base. I'd like that, too. But the core things that your lived experience sits on today are a mess of tech debt that will get very slowly replaced. And that's just the reality of life.
>> And there are competitors that are extremely disruptive. I'll tell you about one. We talked about Bittensor TAO on this program a couple weeks ago when we had the Jensen interview; you brought it up, actually, Chamath. There's a project on subnet 62 called Ridges AI, and what they're doing is a competitor that is not only open source, but anybody can contribute to it. They spent about a million dollars in TAO rewards, and in under 45 days they hit 80% of what Claude 4 is. The way that works is they give rewards to people who make that coding product, which is like Codex or Claude Code, better, and they can do this anonymously. That flywheel is racing right now with participation, in the same way Bitcoin's is. So you're going to see a lot of open source and these crypto/open-source combinations, and anybody who's not investigated this, I highly recommend you investigate this.
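The reward flywheel described here can be sketched as a toy payout rule: contributors submit agent versions, each gets a benchmark score, and a fixed reward pool is split in proportion to each submission's gain over the current best. This is an illustrative assumption, not the actual Ridges/TAO scoring or payout mechanism, which isn't specified in the conversation; all names and numbers are hypothetical.

```python
# Toy sketch of the flywheel above: anonymous contributors submit agent
# improvements, each is scored on a fixed benchmark, and a reward pool
# is split pro rata by score gain over the current best checkpoint.
# Illustrative only; not the real Ridges/TAO mechanism.

def distribute_rewards(submissions, baseline, pool):
    """submissions: {contributor: benchmark score}; pool: total reward."""
    # Only submissions that beat the current best earn anything.
    gains = {who: score - baseline
             for who, score in submissions.items() if score > baseline}
    total_gain = sum(gains.values())
    if total_gain == 0:
        return {}
    return {who: pool * gain / total_gain for who, gain in gains.items()}

payouts = distribute_rewards(
    {"anon_1": 0.72, "anon_2": 0.68, "anon_3": 0.80},  # benchmark pass rates
    baseline=0.70,   # current best agent's score
    pool=10_000,     # reward pool for this round
)
# anon_2 scored below the baseline and earns nothing; the others split
# the pool in proportion to how much they improved on the best agent.
```

The key design property, as described in the segment, is that anyone can participate anonymously and is paid only for measurable improvement, which is what makes the flywheel self-reinforcing.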
>> I do think you're right about one specific thing. I would put zero, literally probability zero, on any important company worth anything more than a dollar outsourcing their production code to an open-source project. That'll never happen. However, what will happen is this: when you look at the cost of training this 10-trillion-parameter model on Blackwell, and when you look in the future, let's just say in six or nine months, at a 15- or 20-trillion-parameter model getting trained on Vera Rubin, I think, Jason, you are right. And just to be clear, I have zero, I have no investments in this at all.
>> To be so super clear.
>> I'm just observing, because another project, other than Bittensor, that someone brought up to me is Venice. The concept of open-source training and orchestration is a hugely disruptive idea, the complete orthogonal attack vector to this idea that you have to raise tens and tens of billions of dollars to train your models, because if the capital markets run out of 10 and 20 billion dollar checks to give people, the only solution is to be totally distributed. I tend to agree with you, Jason, that there is going to be, at some point, a very successful open-source project for pre-training. Absolutely. But will there ever be an open-source way where a real company that has any skin in the game says, here guys, re-engineer my codebase as an open-source project? Never going to happen.
>> Yeah, I think the coding tools will. And if you look at the history of open source, Brad, you actually, I think, had a lot of bets in this space: Linux, Kubernetes, Apache, Postgres, Terraform. These open-source projects are deep inside of enterprises. Deep. And sitting here 15, 20 years ago, the same argument was made: nobody will ever adopt these inside the enterprise; you've got to go with Oracle, whatever. And fair enough, many people do. But I think this $29 Ridges subscription, to do this versus 200, is starting to take hold inside of startups. And that's where I always look, at the tip of the spear. Startups love to, you know, use open-source products. I think this could be the next big thing. But listen, I invest in things that have a 90% chance of going to zero. So do your own research. No crying in the casino.
>> Can I just make a final few points, just quickly? Number one: with respect to this market for code, or code tokens, whatever you want to call it, it might be 5% today, meaning 5% of the code is AI-generated versus human-generated. I think it's going to 95%. I mean, I'd bet any amount of money on that; the only question is when, probably over the next few years. So that's point number one. Point number two: it's possible that if you're the early leader in coding as an AI model company, let's say you have 50 to 60% of market share, you have the most developers using it. Therefore you have the most access to code bases. You might get the most training tokens. There is a potential flywheel there, where you can see the early market leader consolidating its lead because it's generating the most code tokens and it's getting access to the most existing code. Now, I'm not saying for sure that's going to happen. It's possible that the other guys catch up, but I think there is a possibility of a flywheel there, and strong, I guess you'd call it, data scale effects, things like that. So I do believe that the market for coding tokens could be monopolized. Third, Anthropic's revenue run rate, based on what I can tell and what's been publicly released, is the fastest-growing revenue run rate at scale that I think we've ever seen. Uh, we...
>> Perfect segue; it's the next story. Okay, maybe pull up the tweets. But this thing is ramping at a rate we've never seen before.
>> We can get into that in a second, but just one last final point: I think it's pretty clear that where we go from here is agents, and coding gives you a huge step up on agents, because, you know, one of the main things agents need to do is write code to be able to complete tasks.
>> Correct.
>> And so if it is the case that coding is this huge market that's going to be dominated by one or two companies, and that then leads to another huge market, which is agents, my point is just that I think all these companies need to behave in a very clean way and not engage in tactics that later the government might say, you know what, that was anti-competitive. Everyone should just, I think, play fair. Do not engage in discrimination against other people's products. Engage in fair pricing. I'm not accusing anyone of breaking any of the rules, but what I'm saying is that eventually the government is going to look at this market with the benefit of 20/20 hindsight, and I think everyone should just basically, you know, keep it...
>> Keep your nose clean.
>> Keep it tight. Keep it tight.
>> Keep it tight. Tight is right. That, I think, is an excellent point. Let's talk about
the revenue ramp of Anthropic. This is just unprecedented. Anthropic's revenue run rate has topped 30 billion, with a B. Early 2023, they turned on revenue; they started charging for API access. End of 2024, they're at a billion-dollar run rate. February 2025, they launched Claude Code; that was the starter pistol. Mid-2025, a $4 billion run rate. End of 2025, a $9 billion run rate. Just a couple of months later, in April, a $30 billion run rate. Yes, that's right: triple. And the way they did this is that enterprise customers are a major part of the spend. Dario announced a couple of months ago that there are over a thousand enterprises paying over $1 million annually. This is truly mind-boggling when you think about it, because those are the most coveted customers in the world. These are the big fish. When people are running enterprise software, they dream of these: Slack dreamed of getting these million-dollar customers; Salesforce dreams of getting these million-dollar customers. Brad, you're an investor. I guess Sam famously, on BG2, asked you to sell your OpenAI stock back to him. You didn't; you demurred. But you're an investor in both. How shocking is it to you to place both of those bets and then see one of them come from so far behind? You know, ChatGPT has 900 million users. I don't know if they've passed a billion officially yet, but they are the verb, right? They're the Uber. They're the Xerox. They're the Polaroid of AI. But they didn't go after the enterprise. Dario made that bet, and it worked. He was the co-founder of OpenAI. He left, and according to the New Yorker story that came out from Ronan Farrow this week, he basically left because of his disgust in working with Sam Altman. Your thoughts?
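The "run rate" figures being quoted here annualize a recent month's revenue. A minimal sketch of that arithmetic, assuming straight 12x annualization (the companies' exact methodology isn't public, and the monthly figures below are back-derived from the quoted run rates, not disclosed numbers):

```python
# "Run rate" = latest monthly revenue annualized. This assumes a simple
# 12x multiple; trailing averages or contracted ARR would give different
# figures. Monthly inputs are illustrative, implied by the quoted rates.

def annualized_run_rate(monthly_revenue_b):
    """Annualize one month of revenue (in $B)."""
    return monthly_revenue_b * 12

end_2025_rr = annualized_run_rate(0.75)   # $0.75B/month -> $9B run rate
april_rr    = annualized_run_rate(2.5)    # $2.5B/month  -> $30B run rate
multiple    = april_rr / end_2025_rr      # roughly the "triple" quoted
```

This is also why run rate and recognized revenue diverge: a single strong month gets multiplied by 12, so a fast-ramping company's run rate can far exceed what it has actually billed to date.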
>> Well, you know, before we go down the OpenAI rabbit hole, let's just really contextualize what's going on here. I have this additional chart; you showed one. They added 4 billion of annualized revenue in January, 7 billion in February, 10 or 11 billion in March. Just to put that in perspective, that's Databricks plus Palantir combined that they added in a single month, right? So we started the year with everybody wringing their hands, including, you know, Gurley and others, saying we're in a big bubble, asking whether the AI revenues would show up to justify all of this investment, and bam, you have the largest revenue explosion in the history of technology. The company's plans were to end the year at about a $30 billion exit run rate. They got there by the end of March, right? And I suspect it's continuing in April. So you have to ask what's going on, and what's the big so-what. The first thing for me is that model and product capability just hit this threshold we talked about earlier, near-AGI, whatever the hell you want to call it, and everybody, like Altimeter, said, damn, this is so good, I have to have it. This is no longer about my IT budget; this is about labor augmentation and labor replacement. And by the way, Cowork is growing even faster than Claude Code did at the same stage of development.
So what it showed is that we have a near-infinite TAM. It turns out that the TAM for intelligence is radically different than anything we've seen before. And I think the best example of this, right, is millions of self-interested parties: consumers, enterprises, a thousand of them now over a million dollars. It's not that there was some great go-to-market at Anthropic, that all of a sudden, you know, they snuck up and blew everybody away. No, it was companies demanding the product. They're getting throttled on the product. Why? Because it's so good. It makes them better at their business. We are all self-interested actors, and when millions of those people are all making the same decision, that's a huge tell. And the tell here is that the TAM is as big as Dario and Sam and others have been saying. We knew intelligence was going to scale on the exponential. The question was whether revenue would scale on the exponential, and that's what we're seeing. And remember, they're doing this with only one and a half to two gigawatts of compute, right? These guys are massively compute-constrained. They're each going to be adding 3 gigawatts of compute this year, and that will unlock... they would be growing even faster but for that. And then, Jason, to your point about the open-source models: we all want to be a part of this solution. I've talked to a lot of big companies; 65 to 70% of their token consumption is open-source models, right, these cheap Chinese and other tokens. So these revenue ramps are happening while the world is already using open source. This is not frontier-only; this is frontier plus open source. We're going to see massive token optimization over the course of the year. But what happens, on this Jevons paradox, is that the unit cost of intelligence is plummeting. Not just the cost of tokens; the unit cost of intelligence is plummeting, because the capabilities of these models are so much better. I look at what it does for Altimeter day in and day out. I talked to a major company yesterday. They're on a run rate to do a hundred million dollars of token consumption this year on about $5 billion in opex. They think that we're now nearing peak employment in their company, but that their token... their intelligence consumption, okay, let's not call it token consumption, right, because tokens may go up a lot, but their intelligence consumption is going to go up, you know, a lot. So I would leave you with this: we're early, to Chamath's point. We have low penetration of the global 2000. We have low penetration of the use cases, and low penetration within the use cases they're already using. And the models are only getting better. So I think when you look out toward the end of the year, I would not be shocked if you see Anthropic exiting this year at 80 to 100 billion in revenue. And by the way, doing it at the same time that OpenAI, who is also on the wave, will be releasing an incredible model imminently. They're going to be on that wave, and you're going to see an inflection in their revenues as well.
>> Okay, Chamath, question one has been answered. The question of, hey, does this stuff actually have utility, went from a question mark to an exclamation point. Of course it's got utility. People are getting value from it, and it might be variable; some people get more value than others. Number two, the revenue ramp was a big question. Now that's turned into an exclamation point. The final piece of the puzzle, which you've brought up many times, is: can this be profitable? These companies are burning through a large amount of cash. So what is your take on when these companies can get out of the J curve? We talked about this, I think, three episodes ago. I estimated we're going to be looking at $400 to $500 billion in investment into these data centers at a minimum, and then they have to climb out of that to get to profitability. So what are your thoughts on these becoming profitable companies?
>> Do you remember the investor who published this list, Jason, where he put all of the terms you talk about when one of the terms you can't talk about is profit? It's a list where it's like: if you can't talk about free cash flow, you talk about EBITDA. When you can't talk about EBITDA, you talk about...
>> Margin.
>> When you can't talk about that, you talk about revenue. And then when you can't talk about revenue, you talk about gross revenue...
>> Bookings.
>> So you can kind of figure out, I think, where we are in any part of any cycle by just indexing into what everybody talks about. I think where we are is between gross revenue and net revenue. That's where the discussion is.
>> Okay.
>> There was another article, I think today, maybe in The Information, that tried to categorize and distinguish that Anthropic presents gross and OpenAI presents net. They're different. We don't know what the various take rates are. So they're saying there's a difference; if it's not true, there's been no clarity provided by these companies. So, at a minimum, you have this confusion, where there's the breathless talk, and then there are people who don't even know the difference between actual recognized revenue and run-rate revenue and how to multiply it. I mean, so we're definitely there, okay? We can quibble about the details, but we are not at the place where people are like, "Oh, here's your steady-state, you know, free cash flow margin, and here's what your EBITDA does." We're years from that.
>> They're going to have token-maxing EBITDA, like EBITDA at WeWork.
>> The thing that we need to understand is how gross-margin negative this revenue growth is.
>> We don't know that, at least we don't as outsiders.
>> Brad might know.
>> Brad may know. I will tell you, think about this: what are their big cost inputs? The number one cost input is the cost of compute. Cost of compute.
>> Right. I just told you they only have a gigawatt and a half of compute, and they have that gigawatt and a half whether they have a billion in revenue or 80 billion in revenue. So you might actually expect to see these companies' gross margins exploding higher, like the fastest increase in gross margins I've probably seen out of any technology company.
>> So this is not gross-margin negative, you're saying?
>> No, definitely not gross-margin negative. And what I would tell you...
>> So then they must be hugely profitable.
>> Well, you may see what I call accidental profitability. They may not be able to spend this revenue fast enough, Chamath, on compute. And remember, it's only 2,500 people. Google crossed this revenue threshold when they had 120,000 people. These guys have 2,500 people. So the only thing you can really spend money on, right, is compute, and they can't stand up the compute fast enough.
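The operating-leverage argument here (a largely fixed compute fleet, with revenue scaling over it) can be sketched with hypothetical numbers; the fleet cost and variable-cost ratio below are assumptions for illustration, not Anthropic's actual economics:

```python
# Sketch of the point above: if compute is mostly a fixed cost (the same
# GPU fleet serves $1B or $80B of revenue), gross margin swings from
# deeply negative to strongly positive as revenue scales.
# All numbers are illustrative assumptions, not real company figures.

def gross_margin(revenue_b, fixed_compute_cost_b, variable_cost_per_dollar=0.10):
    """Gross margin given fixed fleet cost plus a small variable cost."""
    cost = fixed_compute_cost_b + variable_cost_per_dollar * revenue_b
    return (revenue_b - cost) / revenue_b

FLEET_COST = 5.0  # hypothetical $5B/yr for a ~1.5 GW fleet

low  = gross_margin(1.0,  FLEET_COST)   # at $1B revenue: deeply negative
high = gross_margin(80.0, FLEET_COST)   # at $80B revenue: strongly positive
```

The same fixed cost that makes margins look catastrophic at low revenue produces the "fastest increase in gross margins" effect once revenue outruns the fleet.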
>> But none of this foots to me, then, to be honest, because if you were on the threshold of 90%-plus gross margin...
>> I'm not saying it's there. I'm not saying it's 90% plus. I'm just saying it's gone from meaningfully negative 18 months ago to, you know, very, very positive. I've seen 50% rumored out there.
>> That's what you're saying: the trend is there.
>> Let me just say this. I think if you're an incumbent, you want the cost of compute to go down. I think if you're not an incumbent... so specifically, who do I mean? Meta, Google, and SpaceX. I think those three, well, sorry, Meta and Google have a fortress balance sheet, and I think by the end of June, SpaceX will also have a fortress balance sheet. What they will want to do is make this a compute problem, because they will control the conditions on the field. You already see this today.
>> Yeah.
>> Meta's models today, the general review is that they're okay, but the one thing people say is that they're incredibly performant. The model quality is okay, but the performance is great, which speaks to Meta's huge advantage: they have a massive compute infrastructure. So if you're not OpenAI and Anthropic, you'll want to make this a capital problem, because then you can win it. If you're Anthropic and OpenAI, you want this thing to be as efficient as possible. I think where we are is very much the early innings, and we're bumbling around talking about gross margins and, you know, revenues. We are not at profitability. And what is true for Facebook, and what was true for Google, was that irrespective of when they got to a billion, who cares, they were profitable by year three, and they never looked back. I was there. I remember. It was glorious.
>> The cost of building, you know, AI, I totally stipulate, is radically higher than the cost of building retrieval at Google, right? It's just a fundamentally more expensive problem. But I will tell you, there's a lot of FUD out there about negative gross margins. I mean, Jason, you started the segment by saying they're burning through large amounts of cash. I think people are going to be shocked at how low the burn levels are at these companies.
>> Anthropic or OpenAI?
>> Yes. And I would say at OpenAI as well. If they do $50 billion this year, again, just look at the number of people they have and the revenue per person. The burn is pretty low, and the inference cost is plummeting. Inference cost is down by 90%
year-over-year. And so, finally, I want to respond to this point about gross versus net, this tweet that Chamath was referencing. Okay, so there's a certain percentage, a smallish percentage, of Anthropic's revenue that they distribute through the hyperscalers, and like a lot of arrangements, whether it's Snowflake or Databricks or others, you pay a commission on that. I will just tell you that you're talking single-digit percentages of the total revenue of these companies. So the gross-versus-net thing isn't what's being reported; the apples-to-apples is pretty easy, and if you want to be conservative on it, take down Anthropic's revenue by, you know, five to 10%, which, again, I think it's better to gross up OpenAI's revenue, but any way you do it, I just think it's a distraction from what's really going on here. Happy to...
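The conservative adjustment described here, haircutting a gross revenue figure by an estimated commission take rate to compare it against a net figure, is simple arithmetic; the $30B and the 5-10% range come from the discussion, and the rest is illustrative:

```python
# Apples-to-apples comparison sketch: if one company reports gross
# revenue (before hyperscaler/reseller commissions) and another reports
# net, haircut the gross figure by the estimated take rate.
# The 5-10% range is from the discussion; the math is illustrative.

def net_revenue(gross_revenue_b, take_rate):
    """Net revenue after commissions, in $B."""
    return gross_revenue_b * (1 - take_rate)

conservative = net_revenue(30.0, 0.10)  # $30B gross at a 10% commission
optimistic   = net_revenue(30.0, 0.05)  # $30B gross at a 5% commission
```

Either way, the adjustment moves the headline figure by a couple of billion at most, which is the speaker's point: it doesn't change the shape of the ramp.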
>> Sacks, do you have any thoughts on this massive revenue ramp?
>> Yeah. I mean, I want to go back to a point that Brad made, because I think it was really important, and I want to underline it. Consider where we were at the beginning of the year. What everybody was saying is that AI was a big bubble, and the evidence they would
point to was the fact that hundreds of
billions of dollars was going into capex
that needed to be spent on these data
centers and there was no evidence of
significant revenue to justify that
spend. Where was the ROI? By the way, as
an aside, the same doomers who were
saying that AI was in a bubble were also
the ones who were saying that AI was so
powerful it's going to put us all out of
work and it's going to, you know, take
over from humanity. I mean, in other
words, they couldn't decide if AI was
too powerful or not powerful enough. But
putting aside that contradiction, they
clearly were making this case that AI
was this big bubble and that there'd be
no payoff or justification for this
massive capex that's being spent. And I
think we're starting to see here there
is justification for it. Uh we're seeing
it just in this one vertical of AI which
is coding. We're again seeing the
fastest revenue growth in history. It's
utterly unprecedented. And this is just
one category or vertical of AI. We know
that agents are coming next and the
enterprise adoption of that is going to
be absolutely massive. So, I guess what
I'm saying is that this is early proof
for I think the thing that makes Silicon
Valley special, which is we're willing
to basically bet on things that just
intuitively on a gut level we know are
the next big thing. We're not that
spreadsheet-driven. Actually, Silicon Valley believes that if you build it, they will come, and it is willing to finance that build-out. And that's basically what's been happening. Again: just the top four hyperscalers, $350 billion of expected capex this year, on its way, I think Jensen said, to $1 trillion by 2030.
So, Silicon Valley, whether it's big
companies, whether it's founders,
they're always willing to bet on this
next big thing. They're not like Wall
Street. They don't need, you know,
specialist to tell them where to go.
They know where the technology is going
and they make their bets based on that.
And I think there is going to be a big payoff for this. The thing that's going to make our economy, and the United States in general, remain extremely dynamic and in the lead is that we are willing to make those kinds of bets. And I think it's going to pay off big time.
>> Yeah, clearly. Hey, Brad, you didn't answer my question about the vibes over at OpenAI versus Claude. OpenAI is, I wouldn't say reeling, but there's a lot of hand-wringing going on, a lot of employees leaving, a lot of people wondering whether their consumer-first strategy is the winning strategy. They shut down Sora, you know, unwinding the Disney deal, and they're really trying to get the company focused. And, I mean, listen, the New Yorker story was a bit of a rehash, so I don't think we have to go into the blow-by-blow, because we covered it here three years ago. But the truth is, a lot of the great co-founders of OpenAI and a lot of the great contributors are now at Anthropic and other large language model companies. And in the secondary market, OpenAI is trading lower than the last valuation, while Anthropic is trading significantly above the $380 billion. So maybe talk a little bit about this competition: this Microsoft versus Apple, this Google versus Facebook.
>> Well, let's start with immense credit where credit is due. Anthropic was literally counted out of the game last year.
>> Yeah.
>> Right? And here they come over the last 12 months, and they've kicked OpenAI's ass over the last 90 days, right? And what did Anthropic do? Anthropic made choices: no multimodal, no video, no hardware, no chips, no building data centers. They said, "We're just going to focus on coding and Cowork. We think that is the path to AGI and ASI." They executed their butts off. They took the lead. 2,500 people, tight, pulling on the oar in the same direction. But I think you would be seriously foolish to count out OpenAI, right? And I think we're at peak OpenAI FUD. And I'll tell you, it starts with great researchers and great models. And I think when you see the Spud model they're about ready to release, I think it's going to be an excellent model; it shows that they're firmly on the wave. If you look at what's going on with Codex, it's an incredible ramp, the fastest-ramping model with 5.4. I think 5.5, or Spud, whatever we're going to call it, is going to be an even faster ramp.
>> Have you seen Spud? Have you used it? Have you gotten a preview?
>> People are using Spud, right? So it is being previewed, and so...
>> So you're talking to people who've used it. What are they telling you?
>> They're telling us that it's an incredible model, on par with Mythos, right, and that it's a very usable model in terms of how it's packaged. I will say, back to David's point, and this is the most important point I think anybody can take away here: this is not zero-sum. The TAM of intelligence is dramatically larger than any TAM we've ever seen in our investing careers over the last two decades. Right? And if you're on the wave, which OpenAI is, you are going to be selling into the world's biggest TAM, and they are going to build a very big company. I'm a buyer of the shares today, notwithstanding all of the vibes that you describe. I think these companies are firmly on the wave. They are jarred. They are sitting there saying, "What did we do wrong, and how do we get our mojo back?" They want to compete. It is embarrassing to people on the research team and the product team over there. So I'm not saying there's not a real awakening occurring there, but I think that's what the case is. And by the way, to Chamath's point, do not count out Meta, right? I think Meta is absolutely in this game. Google is absolutely in this game. Elon is absolutely in this game.
>> They've got some stuff dropping shortly that's going to be very impressive.
>> And if you're on team America, the fact is that we have five frontier models competing against each other, and David made sure they weren't throttled by excessive government regulation. We had Mythos come out; it's a self-imposed safe harbor, you know, to harden our systems. It wasn't a call for moratoriums or getting the government involved. We have the type of competition that's causing us to accelerate our lead against the rest of the world. We can't take our eye off the prize. We've got to stop adversarial distillation, and we need to make sure that we're distributing our products around the world. But I view this as really good for team America.
>> Well said. And here is your Polymarket "IPOs before 2027" market. Obviously SpaceX at 95%, uh, Cerebras at 94%, and, hey, number five on this list: a 51% chance that Anthropic goes out before the end of the year, a 44% chance that OpenAI comes out before then. All right, here is the closing market cap for Anthropic on Polymarket, only $158,000 in volume. So, Chamath, when you put in 400K, you're going to really tilt this market.
78% chance that it's above 600 billion,
19% chance that it doesn't go out. So,
it's looking like this will be a decent
investment for you. Brad, what valuation
did you get into Anthropic at?
>> We first invested in I believe it was
the
uh 30 or $150 billion round.
>> So, this will be a 7x, 5x for Altimeter LPs. Congratulations.
>> I mean, no, listen. Again, there are lots of
people who were there before us and who
are on the board and who are going to do
much better than that.
>> What'd you put in? 50?
>> What'd you put in?
>> No, we've got billions in both
companies. Uh
>> billions in both companies. Oh my lord.
>> I think there's this existential thing going on in venture today. David could talk about it as well. I mean, people are extraordinarily nervous. You look at the IGV software index, down 30% year to date, down 5% today, all software stocks plummeting, right? Venture capitalists are terrified to invest money in anything other than these frontier models and things like SpaceX or military modernization. Finding something that's out of harm's way of AI, right, where you can count on the terminal value, to Chamath's insights over the last few weeks, is very difficult to do. That's why you see this crowding.
So, we've taken a barbell approach,
right? We've got a lot in what we think
are the most important companies that
are on the frontier, and then we're betting on really small teams that we think have very defensible businesses in a world of, uh, you know, AGI. But it's
>> what happens to all these enterprise
software companies? Do they become PE
takeouts? Do they get consolidated? Um, or do they just have to adopt these AI technologies and solve this problem of, hey, the frontier model is just going to solve for whatever these niche software companies do?
>> I think the market's probably being a
little too pessimistic with respect to
at least some of these software
companies. I mean, obviously, there's
going to be big differences in the quality of the moats of these companies.
And so, look, software is going to be a
lot cheaper and easier to generate, but
I'm not sure that was the competitive
advantage of a lot of these companies.
So, there's probably a little bit of the
baby being thrown out with the bathwater
right now, and there probably are some
value buys in enterprise software. I
think the interesting question here and
we've been talking about this for a
couple of years in the pod is just where
you see the AI value capture being in
terms of layer of the stack. Remember
where we started it was really just the
chip layer of the stack was where all
the value capture was. It was basically
Nvidia was the first company to be worth
multiple trillions of dollars because of
AI. And for a while it looked like
that's where all the value capture was
going to be because OpenAI for example
was losing so much money and Anthropic
wasn't on the radar as much. Now we're seeing, wait a second, um, you know, it's not just the chip companies; the hyperscalers are now benefiting, and now we're seeing at the model layer it looks like Anthropic and OpenAI are all going to be huge beneficiaries. I
think the next question is at the
application layer of the stack. Okay.
Well, now does all that value capture
just get eaten by the model companies or
are there applications that get
turbocharged? I guess you could say that Palantir is already one of them, right?
It's an application company that's been
turbocharged by these model
capabilities. Who else will be a big
beneficiary? Is it again, is it all
going to be at the model layer or will
you see an explosion of value at the application layer? I'm hoping, obviously, that there will be beneficiaries at all layers of the stack. But to me,
that's a really interesting question
right now.
>> Yeah. What happens to Salesforce,
HubSpot, you know, Oracle, right down
the line? David, uh, Chamath, your thoughts here, uh, on the layers and where the value is captured?
>> It's too early to tell.
>> Too early to tell, right? And energy we kind of put into sort of the data center layer as well, but that's obviously been a clear winner. Little housekeeping here. Liquidity (put a little Tiffany in here, uh, producer Nick D) is sold out. There's a wait list of hundreds of people, but it is what it is, folks. If you snooze, you lose, and top-tier speakers are coming. Uh, it's going to be great. We'll get an update from... But I think, Brad, you're going to be joining us again, yes? For Liquidity.
>> I have an update.
>> That's probably not your headliner, though.
>> I'm probably not your headliner.
>> No, but you always score so high. Every event you've spoken at, you've been either number one or two; I don't think you've ever dropped to three. Go ahead, Chamath. Make your announcement here.
>> Nat sent me an article from Wikipedia about penile lengths when you guys were talking about
>> breaking news.
>> Showing me showing me that I'm in the
large category. Top 5%. She highlighted
it.
>> Top 5%. Okay. Is that with Nano Banana or without?
>> She just texted: dummy, it's clogged. My apologies. Clogged.
>> Oh.
>> All right. This is why Chamath isn't afraid of the cyber hack: nothing's going to come out that's more embarrassing than what he says himself on the box.
>> He's like Bezos. When Bezos got hacked, he's like, "Guys, I got hacked."
>> SO, I saw the agenda for this thing.
It's incredible. Congrats to you guys. I
mean, like the uh like just the fun of
being in Napa, all the poker, all the
the dining experience. This is five star
all the It looks really
>> Six-star. It's at that level because Chamath was, dare I say, belligerent in his demands. He said, "This has to be six-star or I will not show up," J Cal. I said, "Okay, boss, get to work." And, uh, Chamath, what have you got? No mids; this is all elite. And for the
hundreds of people who are on the wait
list, I am sorry, but we have a capacity
issue. We'll try to get you in for next
year. But Chamath, give us some updates here. Any updates you want to share? Because you are running programming for Liquidity 2026 up in Yountville.
>> Look, it's going really well.
Really excited to hear all of these
great folks speak. I think the next two
will release today: Brad Gerstner and Thomas Laffont of Coatue.
>> Of Coatue. That's a great get.
>> We also have I think three people
confirmed for their best ideas pitch.
Really interesting folks. They each run
between one and six or seven billion
>> awesome
>> superstar compounders early in their
career.
>> This is a new zone chamat.
>> It's great. So right now we have Bill Ackman, we have Andrej Karpathy, we have Dan Loeb, we have Thomas Laffont, we have Brad Gerstner, we have Sarah Friar, and more to come. We will announce more.
>> There might be one or two surprises. Jay
Cal
>> and a couple and a couple of surprises.
>> Yeah, we we don't announce all the
speakers. Jay Cal's got a couple of
surprises coming. And if you didn't get
in to liquidity, apologies. You're on
the wait list. We are going to be
hosting the fifth annual All-In Summit in Los Angeles, September 13th to the 15th. Sacks, you going to come to that?
>> Allin.com/events.
>> Sacks, you should come to that.
>> I've been advised that I can attend for business. I can be in the state for business reasons.
>> Okay, there you go. Then we'll see you
at liquidity and the summit. Correct.
That's big news. Now we just got a bunch of Sacks stans who are racing. Uh, and now we're going to get Sacks at... this is what happens every year behind the scenes.
>> Sacks at the last minute says, "Oh, I have four speakers and I have 72 people who need tickets," and then the whole team has to do a fire drill 48 hours before the event. Okay, here we go, guys.
guys. We're going to go to the third
rail here. We got to catch up on the
Iran war. Here's the latest: a two-week ceasefire started just two days ago as of the taping of this episode. VP JD Vance, friend of the pod, and special consultants Witkoff and friend of the pod Jared Kushner are headed to Islamabad, the capital of Pakistan, for talks this very weekend. So while you're
listening to this episode, they are going to be working on the peace deal. Easter Sunday, Trump posted a Truth stating, "Open the strait, you crazy bastards, or you're going to be living in hell. Just watch." Praise be to Allah. On Tuesday morning, Trump posted
uh, another threat on social media: "A whole civilization will die tonight. Never to be brought back again. I don't want that to happen, but it probably will." The tweets were obviously discussed, uh, a lot over the last week. He gave them an 8:00 p.m. deadline. At 6:30 p.m., POTUS announced on Truth Social that he had agreed, President Trump had agreed, to a two-week ceasefire if Iran opens the strait. He also said, "Hey, listen. We got the strait. Maybe there'll be a toll booth, but we'll take the majority of the toll and we'll split it with Iran." Here's the quote: "We received a 10-point proposal from Iran, and we believe it is a workable basis on which to negotiate." And apparently Netanyahu took the ceasefire to mean "level Lebanon," dropping 160 bombs in 10 minutes
yesterday. Sacks, uh, you were out last week. Everybody wants to know your position on the war. I'll hand it off to you. What are your thoughts on the two-week ceasefire and everything that's occurred up until this point?
>> Well, look, I have to preface what I'm
about to say, which is I'm not part of
the foreign policy team at the White
House. And the last time I commented on
the war on this show, it somehow made
international headlines that Trump
advisor says XYZ.
And I'm not a Trump adviser on this
issue. I think that'd be a fair headline
to write if it was a technology issue,
but this is not. So whatever I say is
just my personal opinion, but then the
media is going to somehow portray it or attribute it to the White House or try and create an issue out of it. So, I feel like I'm
limited in what I can say except that to
say that I think it's terrific that we
have the ceasefire. I think it's great
that there's going to be this meeting in
Islamabad to hammer it out. And I think
what the president's accomplished so far
with the ceasefire is it's a great thing
because what happens with these wars is
they take on a life of their own,
meaning they tend to go up the
escalation ladder, right? And there's a
lot of podcasts that are discussing the
so-called escalation trap and supposedly
there are stages to this based on
historical patterns. And so I think it's
actually very hard to pull out of these
things and I give the president
tremendous credit for negotiating the
ceasefire that we've achieved so far and
then sending the team to hopefully work
this out.
>> Brad, actually my first trip to the Middle East was with you, uh, maybe four years ago. Thank you for taking me. What is your take on where we're at here? I think we just wrapped up week six of this and we're going into week seven.
>> First, on March 4th, I tweeted the Trump doctrine in Iran: massively destroy all military capabilities, kill the people building lethal weapons to use against us, and get out. Reserve the right to do it again if needed. Zero effort to build a Madisonian democracy; Iran's going to have to build what comes next. And think about what the market has said: if you look back at last year on tariffs, Jason, the top-to-bottom drawdown was about 15%; on the NASDAQ, intraday, it was down 22%.
Okay, the drawdown in this period over Iran was only about 5 to 7% on the S&P and NASDAQ, right? So, the market has said, listen, we trust Trump at his word. He said he's not going to get into an entangled war here. I think he terrifies the hell out of people with his tweets about, you know, destroying civilization and all this other stuff. But even though they don't like to hear it, people have resolved for themselves that when he says he's going to get out, he will in fact get out. Of course, there was a lot of hand-wringing, but if you look at the markets today, we basically bounced all the way back to where we were pre-Iran on both the S&P and the NASDAQ. If in fact we land the plane, if JD lands the plane,
and by the way, on Lebanon, yes, they
were bombing yesterday, but Netanyahu
has now said that you're going to have
direct government talks between Israel
and Lebanon. So, if we land the plane on these two things, I think it's off to the races in the market. By the
way, while everybody's focused on Iran,
stay tuned. I think we're getting close
to a deal on Ukraine, Russia, right?
Venezuela is, you know, kind of going
seemingly very well. I think there's
also going to be news on Cuba. You could
envision a world. There's risk to the downside. Certainly, I will stipulate,
but you also have to pay attention to
the risk to the upside. If you land the
plane on those things heading into
America 250 July 4th, the market could
really take off.
>> All right. Well, let's uh maybe uplevel
this a little bit and talk about why
we're in this war to begin with. And
that's the big discussion amongst both
sides of the aisle. On Tuesday, the New York Times dropped an inside-the-room piece on how President Trump made the decision, according to this report, if it's true.
I know some people don't uh subscribe to
the New York Times anymore or think it's
fake news, but how Trump decided to
basically follow Netanyahu into this
war. On February 11th, Netanyahu met
with Trump at the White House where he
gave him a four-part pitch on attacking
Iran. JD Vance, according to the story,
if it's true, disclaimer, disclaimer,
warned Trump that the war could cause
regional chaos and break apart Trump's
MAGA 2.0, the Trump 2.0 coalition we
talked about here, the big tent. And
that's turned out actually to be true.
There's been a bunch of hand-wringing from Megyn Kelly, Tucker Carlson, right on down the line. Rubio was anti-regime change, but he was largely ambivalent, according to this story, about the bombing campaign. Susie Wiles,
chief of staff, said she had concerns
about gas prices before the midterms.
Pretty good uh advice there. And General
Dan Kaine, chairman of the Joint Chiefs
of Staff, said this of Netanyahu's
pitch. Quote, "Sir, this is, in my
experience, standard operating procedure
for the Israelis. They oversell and
their plans are not always
well-developed. They know they need us
and that's why they're hard-selling." If you put this together with Rubio's walked-back comments at the start of the war (this is a quote from Rubio): "We knew there was going to be an Israeli action. We knew that would precipitate an attack against American forces, and that's why we did it." I had Josh Shapiro
on the All-In interview show, and, um, he talked a lot about this. There is a big underpinning here, Chamath, that United States foreign policy is being driven by Netanyahu. Every Jewish American I've talked to feels Netanyahu is not doing Jewish Americans or the Jewish diaspora any favors with his approach to these wars. What are your thoughts on why we got into this and how we get out of it?
>> I mean, the person that decides is the
president of the United States. Some foreign leader isn't getting to call the shots in the United States. I think very practically
speaking,
the markets are effectively
pricing in that this was a small blip
for whatever people think. That's just
what the best prediction market that we
have is telling us. I think that's
important to acknowledge that we're
probably in the endgame here. And the
second thing to acknowledge is, if I were Israel, I would really be concerned that unless I help find an offramp quickly, the risk that Israel loses America as a predictably steadfast ally goes up. And I think that that's far more problematic for Israel than it is for the United States.
>> So all of that kind of tells me that we
will find an offramp. A because I think
economically it makes sense and then B
geopolitically I think Israel will want
to make sure that this doesn't burn
a long-standing relationship.
>> Yeah, that seems to me to be the major issue here: Americans basically do not want to be in this war. Americans do not want our foreign policy being influenced to the extent they believe it is. I'm not inserting my own belief here; Americans believe we are being dragged into this by Israel, and that Israel, or Netanyahu specifically, has far too much influence. And then people believe the anti-Semitism that's occurring here follows from that. Josh Shapiro gave me a lot of pushback on this, but all the Jewish Americans I talk to say Netanyahu, with his actions in Gaza, Lebanon, and Iran, has gone too far, and that it's causing the anti-Semitism we're experiencing today. So you can make your own decisions about that. Any final thoughts here, Brad, on American foreign policy being influenced too much by Israel?
>> No, it's the discussion. I mean, listen, um, kind of like Sacks said earlier, I think that we will ultimately be judged by the outcomes, right? Everybody is an armchair pundit today on, you know, uh, the approach that we're taking in these two different places. I think we could be on the verge of a massive transformation of the Gulf States. You went there with me, Jason. Saudis, Qataris, Kuwaitis, Emiratis: I've talked to a lot of them this week. I think they're very hopeful and optimistic. I think you could bring Iran into the fold. But listen, I'm an optimist on all of this stuff. I just want to remind people that doing nothing in Iran had tremendous risks. Doing nothing in Venezuela had tremendous risks. So, it's not as though this was, uh, you know, something that I think wasn't well calculated. But we have to let the cards be played and then let history be the judge. There's, uh, risk in both directions, but I'm going to remain optimistic.
>> All right. You, uh, said in the Gaza situation we should have a wide berth for criticism of Israel and Netanyahu. What are your thoughts on this belief here in the United States, now in this discussion, that Israel has far too much influence over United States foreign policy?
Well, I noticed in my feed today that
Naftali Bennett, who's a major Israeli
politician who was a former prime
minister, tweeted polling that showed
that Israel was becoming very unpopular
in the US, and he was expressing concern about that and the need to basically address it or fix it. So, I think you're starting to
see Israeli politicians raising that as
an issue. And I think that's probably a
good thing. Yeah, there it is. And it's
really cool actually how X now just
automatically translates things from
foreign languages, in this case, Hebrew,
and it puts it in your feed. So, yeah.
So, here's Naftali Bennett, former prime
minister, saying, "This is a serious
situation. There's a lot of work ahead
of us to fix everything." Now, obviously, this is not Netanyahu; this is one of his, um, political opponents. But
yeah, I mean this is something for
Israel to consider and think about and I
think that they would improve their
popularity uh if they got behind the
ceasefire and I have no indication that
they won't but that would certainly be a
good place to start. I have to say just
as an aside, this auto translate feature
has done more for understanding across
borders than anything I've ever seen.
And it is the most impressive tech
feature I I've seen released in years.
Putting AI and large language models aside, for people who don't know what's happening: because Grok is really good at auto-translation, they've taken the pockets of the best of what's happening in Japan, in Israel, in France, and they're surfacing it, auto-translated. Then, when you reply as an American to somebody in Japan, they see it auto-translated as well, which has led to people who don't speak the same language engaging on X in a very nuanced, fun, interesting way. And that, as a truth mechanism, is just absolutely extraordinary. I
think this is going to have such a
profound effect. Maybe Elon and the X team should get, like, a Nobel Peace Prize for this. I think it's going to
change. I mean, I hate to be hyperbolic,
but have you been using this feature,
Chamat? Has it been coming up in your
feed? And which language is up in your
feed right now?
>> English. Okay. So, you're not part of
the translation thing. Brad, has this
hit your feed yet? And and which regions
are you?
>> Definitely. Definitely see it on the Middle East stuff. Um, and, uh, you know, I've seen it on Chinese, I've seen it on Russian, on Japanese.
>> Super helpful.
>> Let me tell you, based Japanese is a whole other level of beast.
>> Whoa. Man, based Japanese makes, like, Fuentes and Alex Jones seem tame. They're like, look at this group of people (insert whatever group of immigrants you like), and they're like, this is unacceptable behavior. This is not Japanese culture. These people need to get the hell out of Japan. It is
wild, folks. And if you don't have an X
account, you are missing out. Go to
X.com and sign up for this reason alone. Because think about the velocity: like, journalists are not even taking the
time to translate and cover what's going
on in those areas. And this is happening
automatically in real time.
>> So you start thinking about what happened in Ukraine. If you had people in Russia and Ukraine doing this and having conversations with each other, it would be wild.
>> You're such a good hype man. The problem is you hype buttered bread the same way you hype a nuclear reactor. And so it's hard to really tell, you know, what you're really hyping, because your level of excitement, the intonation, is exactly the same.
>> Yo, man, there's nothing better than a slice of great toast. I mean, in a way, this is like sliced bread. It's very simple, but it is so powerful in the experience.
>> This has been
>> It is true. X is better today than it's
ever been. And remember, they have 70%
fewer employees than they had the day
Elon walked into the building. And so if
there were ever a debate
>> about this, like, and I remember
everybody saying, "Oh, it's going to tip
over. Oh, it's going to be a crappy
experience."
>> The fact of the matter is, here we are a few years later, 70% fewer employees,
and every other company in Silicon
Valley is looking at that. I think for a
lot of these tech companies, we've hit
peak employment. We're going to create a
tremendous number of new jobs, but for
the existing jobs, these companies are
all realizing they can do more with
less.
>> Nikita Bier just tweeted that they're about to go ham on these bot accounts that auto-reply.
>> Yes.
>> Those those literally ruined my feed.
>> That's why I went to subscriber mode in
my replies and it's it's worked out
great.
>> Yeah. No, shout out to him and, um, to Chris Sacca, who was in tears at what happened to Twitter.
>> It's gonna be okay, Chris. Sorry. No
more tears.
You only let subscribers respond to your
tweets. I
>> I do 50/50. Sometimes I'll just let it
rip and get chaos. And then other times
I have 2,000 paid subscribers. I give
all the money to charity, like 30 grand
a year. And it's just wonderful to get
to know the same 2,000 people out of my
million followers. It's kind of like
having this little subset. So sometimes I'm like, I don't have time to deal with 100 or 200 or 300 replies.
>> You have a million. That's incredible. I
mean, it's just
>> I mean, you have two million. I think
Sax must have a million, right? You have
a million, right, Sacks? Only
>> Brad, how many do you have now? You're getting popular.
>> You built a couple.
>> Got a couple hundred.
>> What's your... Oh, your alt cap, @altcap.
>> I'm at 1.4 million. What are you at, Jacob? Have I surpassed you?
>> I think you have. I'm like 1.1.
>> What would it cost me to get my real name, Jason?
>> Uh, I know a guy. Find out.
>> You're 1.1. Yeah, I made it to 1.4. I
don't know how that happened exactly.
>> Just having the number one podcast in the world. Uh, another amazing episode of the number one pod. And Chamath has two million, but that's only because he has just incredible moments of, uh, engaging with his haters. Oh my god, the replies that Chamath sometimes drops are so great. I love it when Chamath goes...
>> I light them up. I light them up.
>> He lights them up. Then you had somebody who was like, "Oh my god, I was in the casino and you told me to bet black, so I bet black and I lost my money, so you're responsible." And then you paid for the kids' college. He has two young girls, and so I funded their college accounts.
>> I thought that was hilarious. Just as...
>> Obviously I'm very happy for him and his two daughters. I'm even more happy at how much it'll anger all these other goofball dorks living in their mom's basement.
>> Yes.
>> Who literally have no take; they take no responsibility for their lives.
And uh they should enjoy those Hot
Pockets. By the way, for those folks in
their mom's basement, the Hot Pockets
and the Fish Sticks are ready and you
get one more hour of Xbox from mom.
>> All right, listen. We missed you, Freeberg, but this is the best episode in two years. Uh...
>> Freeberg at the end of the show.
>> And we will see you all at the liquidity
summit except for the 400 people on the
wait list who aren't going to get in.
>> We got an email from the guys at Athena because we were just...
>> Oh my god, they're going to hire like 500 new Athena assistants.
>> Yes, they had a thousand people after last week when we mentioned how much we love Athena.
>> Go to Athena.com.
>> But that's amazing. Those are like 500 hardworking men and women who are working
>> in the Philippines.
>> Sacks, they have great jobs.
>> Sacks, I'm going to get you a couple Athena assistants as a birthday present. That's what I'm going to get you.
>> You're going to love this, Sacks. Athena assistants are the best. Congratulations to my friends over there. All right,
everybody. We'll see you next time. Love
you boys on
>> tonight favorite podcast.
>> Let your winners ride.
>> Rain Man, David Sacks.
>> And we open source it to the fans, and they've just gone crazy with it.
>> Love you, besties.
>> Queen of Quinoa.
>> Besties are gone.
That is my dog taking a notice in your driveway.
>> Oh man, my appetiter will meet.
>> We should all just get a room and just
have one big huge orgy cuz they're all
just useless. It's like this like sexual
tension that we just need to release
somehow.
>> That's going to be good. We need to get
merch.
I'm going all in.
In this episode of the All-In Podcast, the besties are joined by guest Brad Gerstner to discuss Anthropic's decision to withhold its 'Mythos' model due to cybersecurity risks. They analyze the unprecedented revenue growth of Anthropic, reaching a $30 billion run rate, and debate the sustainability of AI margins. The group also covers geopolitical developments including a ceasefire in the Iran conflict, the influence of Netanyahu on US foreign policy, and the impact of X's new auto-translation feature on global discourse.