Anthropic’s $30B Ramp, Mythos Doomsday, OpenClaw Ankled, Iran War Ceasefire, Israel's Influence

Transcript

0:00

How many PRs you think are going to get

0:01

pushed to the core structural internet

0:03

in 100 days? What's the over/under

0:05

number? Cuz I'll give you a number.

0:06

>> You're going to say zero. My my answer

0:08

to that is

0:09

>> I'll say like 10,000. But it's going to

0:11

be immediately

0:11

>> if it prevents your browser history from

0:13

being released to everybody in the

0:15

world, Chamath, that may be something

0:17

that you're willing to, you know, let

0:18

100 days pass on.

0:19

>> I think you got Chamath's attention when

0:20

you said browser history.

0:21

>> What about the dick pics?

0:25

>> Chamath is he's going to release them

0:27

himself.

0:30

We'll let your winners ride.

0:38

>> We open source it to the fans and

0:40

they've just gone crazy with it. Love

0:41

you.

0:46

>> All right, everybody. Welcome back to

0:47

the number one podcast in the world.

0:48

David Freeberg is out this week. But in

0:51

his place, the one, the only,

0:54

our fifth bestie, Brad Gerstner. I

0:56

mean, why don't you ever give me puts a

0:58

little namaste in your payday anymore?

1:00

You used to be

1:02

>> I'm going to bring back the greatest

1:03

moderator, but now it's just kind of You

1:06

know what? These guys beat me up. They

1:08

beat me up and they just beat the the

1:10

joy out of me doing this program.

1:13

>> It's because you're a Roana apologist

1:15

now.

1:16

>> No, I We'll get into it. Okay. Save it

1:18

for the Roana apologist. just

1:21

because I said like, "Hey, they've

1:23

stopped maxing and they've

1:25

started doing like some logical things."

1:28

Uh, yeah. Okay, here we go.

1:30

>> It's great to be here. Great to be here.

1:31

>> Good to have you. Good to have you here.

1:33

And of course, uh, we have David Sacks is

1:37

back. Everybody wants to hear from David

1:38

Sacks. We missed you last week, bestie.

1:40

>> We didn't beat the joy out of you. We

1:42

just try to beat some of the hot air.

1:44

Turn

1:45

>> any any fluff that you can put on the

1:47

show that just involves you talking and

1:50

saying nothing is

1:52

>> that's the stuff we got.

1:53

>> Turn up. Yeah. Turn up. Okay. Yeah,

1:55

we'll cut it right out. Um we'll cut it

1:58

out and we'll just put a promo in for

1:59

the syndicate.com. Thank you. Also with

2:01

us, Chamath is here.

2:04

>> How's your maxing going since

2:06

last week? Did you have a

2:08

maxing full weekend? Did you have a good

2:10

full weekend of just smoking cigars in

2:12

the back deck and not ruminating about

2:14

all the chaos you've caused in the last

2:16

20 years?

2:17

>> I think I've done generally more good

2:20

>> than than not.

2:22

>> Oh, you have. But there's been some

2:24

chaotic moments. Don't think about it.

2:26

>> You can't, bro. You can't have ups

2:28

without downs, man. It's like, what are

2:30

you there to do? Just like placate

2:31

everybody and be a loser? Are you there

2:33

to be a winner? Yes, you're in the

2:35

arena, but have you stopped going to

2:37

therapy after realizing ruminating?

2:40

>> What's up with this uh sudden interest

2:42

in maxing? Are you like the

2:44

clavvicular for maxing?

2:47

>> No, the world finally caught up with me.

2:49

That's it. What do you I mean, I've been

2:50

maxing this whole time. They just

2:52

didn't have a name for it, guys.

2:54

>> Wow. Okay. Eli's videos are really good.

2:56

I watched two more this week.

2:58

What take us through what's so appealing

3:01

about not ruminating, smoking a cigar,

3:03

and just living your life?

3:05

>> Because what he says actually works at

3:07

every level of society and every sort of

3:10

thing that you may want to achieve. Even

3:12

if you're trying to like climb the

3:14

rungs,

3:17

you very quickly learn that the more you

3:19

want something, the less you're going to

3:20

get it. And I think that's like his real

3:23

message is let go, live life, and just

3:27

try stuff or don't try stuff. And I

3:29

think that that detachment is really

3:31

healthy for people. I like it. I like it

3:34

a lot.

3:35

>> Who's the guy who says this? I actually

3:37

didn't know.

3:37

>> Elisha Long. Well, Eli, I think, is how

3:40

he goes by.

3:41

>> But he's fantastic. He has a YouTube

3:43

channel.

3:44

>> Marc Andreessen found him

3:46

>> and he's like, "This is this guy is the

3:48

new guy.

3:49

>> Modernday philosopher. He gives you a

3:51

road map for how to live your life,

3:52

right? A new age sage.

3:54

>> What's the name of the guy? The

3:55

character's name from Dune.

3:57

>> I was into girls books. I was dating

4:00

girls.

4:01

>> He's the Lisan al-Gaib of the modern

4:03

internet.

4:04

>> This is why we need Freeberg here is to

4:06

explain these deep holes. All right,

4:09

listen. We got a lot to get to. Don't

4:10

The basic point is build something and

4:12

don't ruminate. Okay, ruminating is just

4:14

not worth it. Just everybody go for it.

4:16

>> No, just do stuff. Stop blathering in

4:18

your own head. Just do stuff.

4:19

Absolutely. All right. Listen, speaking

4:21

of doing so, Anthropic is withholding

4:23

its newest model, Mythos. I'm using the

4:26

Greek uh pronunciation, its newest

4:28

model, Mythos, uh saying it is far too

4:31

dangerous for any of us to have access

4:33

to it. According to the company, the

4:35

model autonomously found thousands of

4:37

vulnerabilities, including bugs in every

4:39

major operating system and web browser.

4:42

This uh little study they did included

4:45

20 year old exploits that had been

4:47

missed by security audits for decades.

4:49

Uh some examples, they found a 27-year-old

4:51

vulnerability in OpenBSD used in

4:54

firewalls and critical infrastructure.

4:55

They found a 16-year-old bug in FFmpeg

4:59

that was missed by automated tools after

5:01

5 million scans. The Linux kernel, all

5:05

kinds of uh bugs they found. They

5:07

released a hype video hyping up why they

5:11

were not going to share this model.

5:13

Here's Dario. Come on the program

5:15

anytime, brother.

5:16

>> But as a side effect of being good at

5:17

code, it's also good at cyber.

5:19

>> The model that we're experimenting with

5:22

is by and large as good as a

5:25

professional human at identifying bugs.

5:28

It's good for us because we can find

5:30

more vulnerabilities sooner and we can

5:32

fix them.

5:33

>> It has the ability to chain together

5:35

vulnerabilities. So what this means is

5:37

you find two vulnerabilities, either of

5:39

which doesn't really get you very much

5:40

independently, but this model is able to

5:43

create exploits out of three, four,

5:45

sometimes five vulnerabilities that in

5:47

sequence give you some kind of very

5:49

sophisticated end outcome.
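The chaining idea described above can be sketched abstractly: findings that are individually low-value can compose into a high-impact path when one's output satisfies the next one's prerequisite. A minimal illustration (the finding IDs, capability labels, and search logic are all hypothetical, not Anthropic's actual tooling):

```python
from itertools import permutations

# Hypothetical findings: each grants a capability given a prerequisite.
# No single finding reaches "code-exec" on its own; an ordered chain can.
findings = [
    {"id": "CVE-A", "needs": "network", "grants": "info-leak"},
    {"id": "CVE-B", "needs": "info-leak", "grants": "auth-bypass"},
    {"id": "CVE-C", "needs": "auth-bypass", "grants": "code-exec"},
]

def find_chain(findings, start, goal):
    """Search orderings of findings for one whose prerequisites
    line up so the chain reaches the goal capability."""
    for order in permutations(findings):
        have = start
        chain = []
        for f in order:
            if f["needs"] == have:
                chain.append(f["id"])
                have = f["grants"]
        if have == goal:
            return chain
    return None

print(find_chain(findings, "network", "code-exec"))
# ['CVE-A', 'CVE-B', 'CVE-C']
```

The point of the toy model is only that the value of the chain is not visible from any single finding, which is what makes automated chaining a step change.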

5:50

>> All right, Brad, uh, by the way, that

5:51

set they're using there, that's the same

5:53

room those guys play Dungeons and

5:54

Dragons in every Sunday. Brad, you're

5:58

Brad, you're an investor in this

6:00

company. Is this virtue signaling or is

6:03

it reality? Is this a good move by them

6:06

to not release this model and be

6:08

thoughtful, give it to a handful of

6:09

people and just find all the bugs it can

6:12

before releasing it to the public? And

6:14

we've got a lot more issues to discuss.

6:16

>> I I actually think they deserve a ton of

6:18

credit here and let me walk you through

6:20

why, right? They the company could have

6:22

just released Mythos, broken a lot of

6:24

core things on the internet. Often times

6:26

in Silicon Valley, we say move fast and

6:28

break things. In this case, it means

6:30

just releasing the model to move further

6:31

ahead of your competition. But here the

6:33

company realized it would wreak havoc.

6:35

They ran their own vulnerability

6:37

testing. They saw that it would allow

6:39

offensive hacking and people to expose

6:41

browsers and browser history, expose

6:43

credit cards, you know, on the internet.

6:46

So, you know what I like about this is

6:48

they didn't need government to hold

6:50

their hand on this. We have plenty of

6:52

government regulations. They know it's

6:54

in the best long-term interest of the

6:56

company and the industry, you know. So,

6:58

they set up Project Glass Wing. It's an

7:00

AI-driven, you know, kind of cyber

7:02

coalition. Apple, Microsoft, Google,

7:05

Amazon, JP Morgan, 40 of the most

7:07

important companies. And their goal is

7:09

very simple. Let's spend a 100 days use

7:11

advanced AI to find and to fix and to

7:14

harden these software vulnerabilities

7:17

before hackers exploit them. Now, what I

7:20

think this represents, Jason, is a

7:22

threshold that we're crossing. Mythos

7:25

and Spud, which is going to be out from

7:27

OpenAI any day now, which is the first

7:30

Blackwell trained model at OpenAI. They

7:33

represent the beginning of what I would

7:35

call AGI models. These are models with

7:38

massive step function improvements and

7:40

intelligence. Um, and they're just too

7:43

smart to be released immediately,

7:45

you know. And by the way, there was

7:47

nothing that said that every time you

7:49

you finish a model, you got to

7:51

immediately release it GA. So they set

7:53

up this idea of sandboxing, building

7:55

defensive alliances,

7:57

you know, in order to move away from

7:59

that regime. I I think it shows, and

8:01

Sacks and I have talked about this a

8:03

lot, so I'm interested to hear what he

8:04

thinks. It shows you can trust the

8:07

industry and market forces in

8:10

coordination with the government. They

8:11

were talking to the government about

8:12

this. But they're not relying on some

8:15

top- down regulation in order to do

8:18

this. They laid out a blueprint that

8:19

seems to me very pragmatic that now that

8:22

we're at this threshold, we're going to

8:24

sandbox these things. I think that OpenAI

8:26

will end up doing the same thing. I

8:28

think Google will end up doing the same

8:30

thing. It's an aggressive way to keep

8:32

the RA, you know, the pressure on and

8:34

and win the race at AI while making the

8:37

tradeoffs to protect safety. So, you

8:39

know, I think you're always going to

8:40

have to make these trade-offs. I think

8:42

in this case, it was a great move by

8:44

Dario and team and I think they deserve

8:46

a lot of credit. Sacks, when you look at

8:48

this, we had Emil Michael on the program

8:50

a couple weeks ago. It might have been

8:51

four or five weeks ago, and we had a

8:53

very thoughtful discussion about, hey,

8:55

if the government is going to have these

8:57

tools, you know, and Anthropic wants to

8:59

withhold them and, you know, what is the

9:02

proper relationship there, you have to

9:04

think that the government, and I know

9:06

you don't speak for all parts of the

9:08

government. If you were just going to

9:10

run through the game theory, they must

9:11

have gone to the government and said,

9:12

"Listen, this thing is so powerful, it

9:14

can put together two or three hacks,

9:16

create a novel attack vector, and this

9:19

is incredibly dangerous. What if China

9:21

has it? And if this thing is as powerful

9:23

as Dario says it is, then this is an

9:26

offensive weapon as well for us to take

9:28

out, let's just pick, you know, uh, a

9:31

pressing issue, the North Korea's

9:34

ballistic missile program. This is

9:36

equivalent the way it's being described

9:38

as the Manhattan Project, perhaps. So

9:41

what are the chances two-part question

9:43

for you Sacks that China already has this

9:45

and is using it and do you think Dario

9:48

is doing the right thing by regulating

9:50

themselves?

9:51

>> I think Anthropic has proven that it's

9:54

very good at two things. One is product

9:57

releases. The second is scaring people.

10:00

And we've seen a pattern in their

10:02

previous releases of at the same time

10:05

they roll out a new model or new model

10:07

card, something like that. They also

10:09

roll out some study showing really the

10:12

worst possible implication of where the

10:15

technology could lead. We saw this last

10:17

year about a year ago. They rolled out

10:19

this blackmail study where supposedly

10:22

the new model could blackmail users.

10:25

There's been a whole bunch of these

10:26

things. Actually, I went back to Grok

10:28

and I just asked, "Hey, give me examples

10:30

where Anthropic has basically used scare

10:32

tactics and it's it's a pattern." Okay,

10:35

it's a pattern.

10:36

>> Okay,

10:37

>> these guys, I'm not saying it's not

10:40

sincere, but they have a proven pattern

10:42

of using fear as a way to market their

10:45

new products. And if you think back to,

10:48

again, my favorite example is this

10:50

blackmail study where they prompted the

10:53

model over 200 times to get the result

10:55

they wanted. And that result was

10:58

clearly reverse engineered and it got

11:00

them the headlines they wanted. And I

11:02

would say the proof that it's reverse

11:04

engineered is we're now a year later.

11:06

There's a bunch of open- source models

11:08

out there that have the same level of

11:10

capability that that anthropic model

11:13

had. And have you seen any examples of

11:15

blackmail in the wild? I don't think so.

11:18

So in other words, if that study were

11:22

true in the sense of being a likely

11:23

outcome of that model, I think you would

11:25

see examples in the wild of that

11:27

behavior. And we haven't seen any of

11:29

that in the past year. Now, let's talk

11:31

about this specific example with cyber

11:33

hacking.

11:34

>> I actually think that this one is more

11:37

on the legitimate side. I mean, look,

11:39

the reason why I bring this up is

11:41

anytime Anthropic is scaring people, you

11:42

have to ask, is this a tactic? Is this

11:45

part of their Chicken Little routine? or

11:47

is it real? You know, are they crying

11:49

wolf or not? I actually would give them

11:50

credit in this case and say this is more

11:53

on the the real side. It just makes

11:55

sense, right? So that as the coding

11:58

models become more and more capable,

11:59

they're more capable of finding bugs.

12:01

That means they're more capable of

12:02

finding vulnerabilities. And like one of

12:04

their engineers said, that means they're

12:05

more capable of stringing together

12:06

multiple vulnerabilities and creating an

12:08

exploit. And so I do think that over say

12:11

the next 6 months we're going to have

12:13

this call it one-time period of catching

12:16

up where AI-driven cyber is going to be

12:20

able to detect a whole range of bugs

12:22

that maybe have been dormant over the

12:24

past 20 years across a wide range of

12:27

systems. And so I do think that there is

12:30

real risk here. And I do think therefore

12:33

that having this pre-release period

12:35

makes a lot of sense where they're

12:36

giving the capability to all these

12:39

software companies that have existing

12:40

code bases to use the tool to detect the

12:43

vulnerabilities for themselves so they

12:45

can patch them before these capabilities

12:47

are widely available. And by the way, it

12:49

won't just be anthropic that makes these

12:52

capabilities available. We know that

12:54

like let's say the Chinese open source

12:55

models like Kimi K2, it's about 6

12:58

months behind. So we have a window here

13:00

of maybe 6 months where we're still in

13:03

this pre-release period where I think

13:06

companies that have large code bases can

13:09

get advanced access to this model and uh

13:12

I guess open AI is going to release a

13:14

similar thing in the next few weeks. I

13:16

do think that every company or IT

13:18

department or CISO that is managing code

13:23

bases should take this seriously and use

13:26

the next few months to detect any again

13:29

like dormant bugs or vulnerabilities and

13:32

and roll out patches. If everybody does

13:34

their job and reacts the right way, then

13:36

I do not think it will be the doomsday

13:39

scenario that Anthropic is sort of

13:41

portraying. But it's one of these things

13:43

where the fear might end up being a good

13:45

thing in order to drive people to in

13:49

order to drive the correct behavior. So

13:51

>> sure,

13:51

>> I ultimately think this is going to work

13:52

out fine, but you do need everyone to

13:55

kind of pay attention, use the

13:56

capabilities,

13:58

>> fix the bugs, then we're going to get

13:59

into a big arms race between AI being

14:02

used for cyber offense and AI being used

14:04

for cyber defense, but it'll be a more

14:06

normal sort of period. Chamath, we

14:09

have uh Dario and uh you know a number

14:12

of the participants here are taking this

14:14

super seriously. They're making a big

14:16

statement. Sacks's very nuanced uh I

14:19

think take there. What's your take on

14:21

how do these companies have it both

14:23

ways? Hey, this is shouldn't be

14:26

regulated. This should be regulated. If

14:28

this is in fact a cataclysmic, oh my

14:31

god, they're going to hack everything.

14:33

What if the Chinese have this right now?

14:35

That would speak to more government

14:37

either coordination, regulation, or some

14:39

kind of relationship between the CIA,

14:42

the FBI for domestic stuff, and these

14:45

companies because there it is a non-zero

14:48

chance that the Chinese have an equal

14:50

capability here. We're assuming they're

14:52

behind, but who knows what they're doing

14:54

behind closed doors. So, what's your

14:55

take on this? Is it uh The Boy Who Cried

14:58

Wolf, or is this the real deal? Now,

15:00

>> I think it's mostly theater.

15:03

>> Okay.

15:04

In February of 2019,

15:07

when Dario was still at OpenAI,

15:10

they did the same thing with GPT-2.

15:14

That was a 1.5 billion parameter model,

15:17

which sounds like a total fart in the

15:19

wind in 2026. But at that time, this 1.5

15:24

billion parameter model was supposed to

15:26

be the end of days. And it was supposed

15:28

to unleash this torrent of spam and

15:30

misinformation. And that was the big

15:32

bugaboo at the time. And so what

15:34

happened? They went through this

15:35

methodical roll out over six or nine

15:37

months. They started releasing the

15:39

smaller parameter models and then they

15:41

scaled up to the big 1.5 billion

15:42

parameter model. And at the end of it,

15:44

it was a huge nothing burger.

15:47

If you actually think that Mythos is

15:49

capable of doing what it says it can do,

15:52

two things are true. One is a very

15:55

sophisticated hacker can probably do

15:57

those things right now with Opus.

16:00

And two, if these exploits

16:04

are this easy to find,

16:07

whether you use Opus or whether you use

16:09

Mythos, the reality is you'd have to

16:10

shut down the internet for about 5 years

16:12

to patch them all. So when you see like

16:15

a large multi-trillion-dollar gang,

16:20

it's a bit of theater. Why? What do you

16:22

think they can actually accomplish in 2

16:24

months? Do you actually think that if

16:26

there's these vulnerabilities, it's all

16:28

going to get fixed? Let's give them six

16:30

months. Let's give them nine months. But

16:33

the reality is that capitalism moves

16:35

forward, the funding needs moves

16:37

forward, and the need for these guys to

16:40

build adoption moves forward. And that's

16:43

going to supersede

16:45

what this is. So I do think that Sax is

16:47

right that they have figured out a very

16:50

clever go-to-market muscle here and a

16:54

go-to-market motion that activates

16:57

hyper attention and hyper usage and so I

17:00

give them tremendous credit and I'll

17:02

maintain what I've maintained before.

17:04

Anthropic is shooting the lights out

17:06

right now. This is like Steph Curry

17:08

going bananas from every everywhere on

17:10

the court. These guys are chucking

17:12

threes.

17:13

>> It's all in that. Okay. So huge kudos to

17:16

Anthropic,

17:18

but we've seen it before. We saw it when

17:21

these folks were the principal

17:23

architects at OpenAI who are now seeing

17:25

the same playbook here. I think we'll

17:27

look back and I think what we'll say are

17:30

these two things. One is if we're really

17:31

going to patch all these security holes,

17:33

we need to shut down the internet

17:35

for some number of years, honestly,

17:37

literally years. And the second is an

17:40

advanced hacker can probably do this

17:41

today with Opus if they really wanted

17:44

to.

17:46

>> Okay. Hey Brad, I gota I'll get you in

17:47

here for the for the last word. I I'm

17:49

going to go with Yeah, maybe they did uh

17:52

cry wolf before, but based on what I see

17:55

with these models advancing and using

17:57

them and I'm using a lot of the open

17:58

source ones right now from China. I

18:00

think that this is like code red kind of

18:03

moment. This is DEFCON. Like we should

18:05

be taking this deadly seriously and I

18:07

think these companies got to coordinate

18:08

with the CIA and this is uh equally a

18:11

defensive as offensive opportunity. Do

18:14

you think this

18:15

>> you're asking for the nationalization of

18:16

AI now?

18:17

>> No, I actually I I I don't think it

18:20

should be nationalized. Um although I

18:21

did see people sort of insinuating that.

18:24

I think these companies need to build a

18:26

group Brad that work and coordinate with

18:28

the CIA. I assume that they're already

18:30

doing this. I'm assuming you Emil

18:32

Michael and uh you know Trump and

18:35

everybody have these people in a room

18:36

and that they've given the defcon and

18:39

said hey how can our government use this

18:41

to stop bad actors and this is already

18:44

being coordinated with the CIA and the

18:45

FBI. I am 100% certain of that that

18:48

Dario went to them and said look what we

18:50

found. This is the real deal. I'll give

18:52

you the last word on this Brad since

18:53

you're an investor in both companies and

18:55

you know them quite well. The Frontier

18:56

Model Forum, which was put

18:58

together in '23, um, is cooperating on

19:01

anti- and adversarial distillation stuff

19:04

as we speak right they don't want to

19:06

make it easy on you know so Google and

19:08

and OpenAI and Anthropic, they're

19:11

coordinating on this stuff you know

19:13

there are times where I've pushed back

19:14

on anthropic because I thought it was

19:16

you know perhaps regulatory capture or

19:18

something else this is very different in

19:20

my mind right he could have easily Dario

19:22

could have easily come out and said oh

19:24

my god we passed a threshold we need to

19:25

have a government moratorium. Remember,

19:28

even our friend Elon called for a

19:29

six-month moratorium in 2023 because of

19:32

civilization risk. This guy didn't do

19:34

that. Instead, he said, "Okay, what what

19:37

should we do? I'm going to get 40 of the

19:38

leading companies together. We're going

19:40

to spend a 100 days sandboxing,

19:42

hardening the systems, and then we're

19:43

we're we're going to keep pushing

19:44

forward."

19:45

>> What do you honestly think is going to

19:46

get accomplished in a 100 days? How many

19:48

PRs you think are going to get pushed to

19:50

the core structural internet in 100

19:52

days? What's the over/under number? Cuz

19:54

I'll give you a number. You're gonna say

19:55

zero. My my answer to that is

19:57

>> I'll say like 10,000, but it's going to

19:59

be immediate.

20:00

>> But if it prevents your browser history

20:02

from being released to everybody in the

20:03

world, Chamath, that may be something

20:05

that you're willing to, you know, let a

20:07

100 days pass on.

20:08

>> I think you got Chamath's attention when

20:09

you said browser history.

20:10

>> What about the dick pics?

20:14

>> As Chamath is, he's going to release them

20:16

himself right now. Chamath's like, "Hey,

20:18

Chinese hackers, here are my dick pics.

20:20

Please put them out."

20:21

>> Oh my god. we have to be out there

20:24

complimenting when they're doing the

20:25

right things or relying on the market

20:26

rather than running to the nanny state

20:28

and saying do more of this. So this to

20:30

me was just an example of of a of a good

20:32

balance. I'm sure we're going to have

20:34

plenty of debates about this in the

20:35

future. But you know this is one I would

20:37

like to see more of.

20:38

>> This is why to use your word Jake I

20:40

tried to have a more nuanced take is

20:42

because we have no choice but to take

20:44

this seriously. Whether it's total

20:46

theater, whether it's fear-mongering,

20:49

and they do have a pattern around this,

20:51

we can't take the risk, right? And it

20:53

does logically make sense that as these

20:56

models become more and more capable at

20:58

coding, they're going to get better at

20:59

cyber. And there's going to be that one

21:02

time period where you're moving from

21:04

pre-AI to post-AI, and you need a patch

21:06

for that. So, my guess is we're going to

21:08

see a lot of patches over the next few

21:10

months. I think that that will resolve

21:12

the problem.

21:14

I think this is a case where I'm going

21:16

to give them the benefit of the doubt. I

21:18

I think that, you know, I've criticized

21:20

him in the past. I think that blackmail

21:22

study was embarrassing to the level of

21:25

being a hoax, but I think in this case,

21:28

I'm going to give him credit and say

21:29

that I think that it's legit.

21:31

>> So, it's not the anthropic hoax. This

21:33

could be legit. I, you know, looking at

21:36

>> we have no choice but to treat it that

21:37

way.

21:38

>> Of course. Yeah. I mean, even if two

21:41

things could be true at the same time,

21:42

Sacks, they could have used this tactic

21:44

before. It could be performative, like

21:46

the video with the dramatic music in the

21:48

background. It does have a little bit of

21:50

drama to it, and the way they presented

21:53

it is very dramatic, but it does make

21:55

logical sense that the one company that

21:59

made the bet on code bigger than anybody

22:02

else would be the one who would discover

22:04

this quickest. And you know in a 100

22:05

days that's a pretty good um that's a

22:08

pretty big advantage versus the hackers.

22:10

But let me make one more point there

22:11

Chamath

22:12

>> the most important thing that people

22:14

haven't talked about here is the amount

22:17

of code being pushed right now because

22:18

of these tools is 10x 100x in most

22:21

organizations. So we need to have this

22:24

type of security embedded in these new

22:26

coding tools to do it in real time.

22:29

That's the opportunity. There should be

22:31

real time correcting of this. If this

22:33

was real, they picked the wrong

22:34

companies. Meaning, there are energy

22:38

companies, folks that control nuclear

22:41

reactors. There are airplane companies

22:44

that are flying hundreds of thousands of

22:45

people in essentially manufactured

22:48

missiles of like streaming gas going at

22:52

500 miles an hour. None of those

22:54

companies were the ones that were

22:55

included in this. And so I think if you

22:58

really thought that this was end of

23:00

days,

23:02

at a minimum we can agree maybe we

23:04

should have expanded the circle a touch.

23:07

Well, maybe those are customers of the

23:09

ones they're including here. Anyway, uh

23:11

this is a really important story. We'll

23:12

obviously track it in the coming weeks

23:14

to see what turns out to be reality. And

23:17

uh Dario, do come on the program at

23:19

some point. Hey uh Brad, will you get

23:20

Dario to come on the program? I've

23:21

invited him like three times. I got his

23:23

phone number. He's ghosted me. I don't

23:24

know why.

23:24

>> Wait, he he's ignored you? I get

23:27

>> I literally got an introduction from the

23:29

number like one of the number one

23:30

venture capitalists in the world. He's

23:31

on the cap table very early. He just

23:33

won't respond. I don't know why.

23:35

>> I would tell you Dario's podcast with

23:38

Dwarkesh, who I think is an excellent

23:39

podcaster. I've listened to that three

23:41

or four times taken notes every time. It

23:44

is a really exceptional piece really

23:46

exceptional piece of work by them.

23:48

>> All right, let's keep moving. We got a

23:50

lot on the docket.

23:50

>> You may once again be tarred with your

23:52

affiliation with us.

23:54

>> Poor you. I mean, I don't care.

23:56

Literally, I I've got friends on both

23:58

sides of the aisle. I have friends

24:00

>> of course you do.

24:01

>> Even JCAL.

24:02

>> Even JCAL has friends everywhere.

24:04

>> Let me ask Brad a question here just

24:06

while we're on the topic of anthropic.

24:07

There was a really interesting story or

24:10

tweet I guess you could say by the

24:11

founder of OpenClaw

24:13

that

24:14

>> Peter.

24:14

>> Peter. Yeah. What's his name? Peter

24:16

Steinber.

24:17

>> Steinberger. Steinberger.

24:18

>> Steinberger. Yeah.

24:19

renowned coder, created OpenClaw, which is

24:22

kind of the thing that launched this

24:24

whole agent era. I guess you

24:25

could say, in any event, he said that

24:28

Anthropic was cutting off his access to

24:31

Claw. Is that the next

24:33

topic

24:34

>> this is on the docket it's a little bit

24:36

nuanced

24:37

everybody using OpenClaw would take

24:39

their $200 a month subscription to

24:41

anthropic which was essentially like a

24:43

people were using more tokens and it's

24:45

an average. The people from OpenClaw, it

24:48

is very verbose and those people are

24:51

100x the usage of the average

24:53

subscriber. So he said you can't use

24:56

your 200, you have to use the API. You

24:58

move from the $200 plan to the API, add

25:00

a zero to your token use. Or more.
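The "add a zero" math is easy to sketch. With hypothetical numbers (the per-million-token API rate and the monthly usage figures below are assumptions for illustration, not Anthropic's actual pricing):

```python
# Hypothetical numbers for illustration only (not real rates).
flat_monthly = 200.0        # $200/month flat subscription
api_rate_per_mtok = 15.0    # assumed blended API price, $ per 1M tokens

def api_cost(tokens_millions: float) -> float:
    """Metered cost for the same usage billed through the API."""
    return tokens_millions * api_rate_per_mtok

# The transcript's heavy users: $2,000-$20,000 of token value on a $200 plan.
for mtok in (135, 1350):
    print(f"{mtok}M tokens -> ${api_cost(mtok):,.0f} vs ${flat_monthly:,.0f} flat")
```

At those assumed rates, a power user burning roughly 135M tokens a month costs about ten times the flat price when metered, which is the subsidy the flat plan was eating.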

25:03

And so they essentially ankled

25:06

OpenClaw, and then 10 days later or less

25:11

they released or announced their new

25:12

agent technology which is according to

25:14

them a safer better version of OpenClaw.

25:17

So, hey, all fair in love and war and

25:20

they have basically shot a huge cannon

25:23

across the bow of OpenClaw.

25:25

>> Wait, can you just explain that exactly?

25:27

So, so I think you're right that they

25:29

systematically copied feature by feature

25:32

>> of open claw, incorporated that into

25:34

Claw, and then the coup de grâce was

25:36

basically cutting off OpenClaw's

25:38

>> oxygen. Can you just explain exactly

25:40

what they did? Okay, very simply, when

25:43

you buy a subscription to these

25:44

services, they have blended your usage

25:48

across many users. So there's, you know,

25:50

nine out of 10 users use less than the

25:52

tokens they're paying for and the top

25:54

10% use much more. When OpenClaw became

25:58

a phenomenon, the number one open source

26:00

project in history on GitHub with all of

26:02

this usage, people went crazy. And you

26:05

heard me talking about how crazy I went

26:06

for it. Those people with the $200

26:08

subscriptions were using $2,000 to $20,000

26:11

worth of tokens. So they said you can no

26:14

longer take your, you

26:16

know, professional or

26:18

enterprise subscription at $200

26:20

>> and plug that into your OpenClaw. You

26:22

now have to go to the API and pay per

26:24

usage. So no more like

26:27

>> unlimited. If you use Anthropic's own

26:29

agent harness, are you part of the

26:31

bundled flat rate? You can assume that

26:34

that's what they'll do, which if you

26:35

were thinking on an antitrust level might

26:38

be token dumping or price dumping. I'm

26:40

not saying, like, I'm ratting them out.

26:42

>> No, it's like bundling, isn't it?

26:43

>> Well, price dumping or bundling. When

26:45

you price something under the market

26:47

price in antitrust, that would be price

26:49

dumping, right? And if you were to

26:51

bundle, it would be like the bundling

26:53

issue.

26:54

>> Critically important. You can use

26:56

OpenClaw via the Claude API, and every company

26:58

has a right to set the price for its

27:00

products. It's just that,

27:02

under their current regime, they

27:04

were selling dollars for 10 cents via

27:06

OpenClaw because these were such power

27:08

users and now they're just saying we

27:10

have to price this rationally, but we're

27:11

happy to have you guys use the API. So,

27:13

>> okay. Okay. But Brad, when you use the

27:16

OpenClaw competitor that Anthropic now

27:18

offers,

27:19

>> correct?

27:20

>> Are they subsidizing that? Are you

27:22

paying?

27:22

>> We don't know yet because it's in closed

27:24

beta. So in other words, what I'm saying

27:25

is if they charge for API usage, right,

27:29

their own first party agent harness or

27:32

system, then that would be apples to

27:33

apples. But if

27:35

>> if they end up charging the bundled flat

27:38

rate, let's say, for their stuff, but

27:41

then charge the metered rate for third

27:43

party stuff, you could make a bundling

27:45

argument.

27:46

>> Sure. Sure. And you could say it's

27:47

anti-competitive assuming that Anthropic

27:50

has dominant market share in coding,

27:52

which I think most people would say they

27:53

do at this point.
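The flat-rate versus metered economics being debated above can be made concrete with a small sketch. The per-token API price and the usage levels below are illustrative assumptions, not Anthropic's actual rates; only the $200 plan price and the 100x power-user multiple come from the discussion.

```python
# Illustrative sketch of flat-rate vs. metered token pricing.
# The API price and usage figures are assumptions for illustration.

FLAT_PLAN_PRICE = 200.0      # $/month subscription (from the discussion)
METERED_PRICE_PER_M = 10.0   # assumed $ per million tokens via the API

def metered_cost(tokens_millions: float) -> float:
    """What the same usage would cost if billed per token through the API."""
    return tokens_millions * METERED_PRICE_PER_M

def provider_margin_on_flat(tokens_millions: float) -> float:
    """Provider's gain (or loss) when a flat-plan user consumes this much."""
    return FLAT_PLAN_PRICE - metered_cost(tokens_millions)

# A median user vs. a verbose agent-harness power user at 100x usage.
median_usage = 15.0               # assumed million tokens/month
power_usage = 100 * median_usage  # the "100x the average subscriber" case

print(metered_cost(median_usage))            # 150.0 -> flat plan is profitable
print(provider_margin_on_flat(power_usage))  # -14800.0 -> "selling dollars for 10 cents"
```

On these assumed numbers, the provider earns $50 on a median flat-plan user and loses about $14,800 a month on a power user, which is the asymmetry that makes pushing heavy users to metered API billing rational.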

27:54

>> And assuming that it's the same product,

27:56

I mean, the reason most enterprises will

27:58

probably use the Anthropic uh version of

28:03

this agentic product is because it meets

28:05

all of your security parameters, right?

28:07

So, Altimeter runs, you know, a lot of

28:10

stuff on Anthropic. They're already

28:12

integrated within our data

28:13

warehouse, our data lake, things of that

28:15

nature. So just letting OpenClaw loose

28:18

on the, uh, Altimeter, you know, data set

28:20

would not be wise and so it's a

28:22

different fundamental product.

28:24

>> No I get that and I think that anthropic

28:26

has a huge advantage let's say cloning

28:28

OpenClaw and just building it into

28:30

Claude. I'm not denying that. To me, that

28:33

would be the reason why they don't need

28:34

to do price discrimination is because

28:36

there's already a very good reason

28:38

>> to use the let's call it the bundled

28:40

offering on a featured basis. But the

28:42

question I'm specifically asking is

28:44

whether they're giving themselves a

28:46

price advantage because

28:48

>> I think Brad is giving the most

28:50

generous interpretation. You're taking a

28:52

more cynical one. I'm with you, Sax.

28:53

I'm 100% on the cynical side. Open Claw

28:56

is so powerful. It's got so much

28:58

momentum that not only is anthropic

29:02

trying to ankle it. I believe when Sam

29:04

Altman bought it, it was, uh, and he

29:06

didn't buy OpenClaw itself, he hired

29:08

acqui-hired Peter. I believe it was to

29:10

subvert the open-source project to get

29:12

Peter's next set of genius ideas inside

29:15

of OpenAI as opposed to letting them go

29:17

there. People are going to say I'm a

29:19

conspiracy theorist, but this is the

29:21

number one focus and let me just give

29:23

you a list of who is trying to kill

29:25

OpenClaw/compete with them. Obviously,

29:28

you have Anthropic, but also Perplexity

29:31

Computer launched. It's awesome. I've

29:33

been using it. Anthropic has these Claude

29:36

managed agents. They dropped that on

29:38

Wednesday, April 8th. uh yesterday uh

29:40

today's Thursday when we tape you you

29:41

guys listen on Fridays and then you have

29:44

Hermes agent that was released on

29:47

February 25th that's also open source

29:49

and very good so that's in the open

29:50

source camp. Alibaba is coming out with

29:52

one that's going to be based on their

29:54

Qwen model. Then you have Elon, who said

29:57

he's got something called Grok computer

30:00

coming out of Macrohard, which is a play

30:01

on words for Microsoft in addition to

30:04

that Amazon and Apple are preparing uh

30:06

new releases of their uh maxing

30:09

assistants Alexa and Siri that will be

30:11

less in this new version and

30:14

then nothing out of, uh, Satya and Microsoft

30:16

yet. So the number one goal I believe in

30:20

the large language model frontier model

30:23

space is to kill this open source

30:25

product.

30:26

>> No, I mean come on like why they're

30:29

building multi-functioning agents that

30:31

can move from answering questions to

30:33

actually doing something for you. Like,

30:35

you got to do that because that's what

30:37

consumers and enterprises want. It

30:39

doesn't mean that it's about killing

30:40

OpenClaw. This is just an obvious thing

30:43

to do, right?

30:44

>> but this is a giant movement to stop it

30:47

because this is the equivalent of having

30:49

an open-source Android like player in

30:51

the market and that could be incredibly

30:53

disruptive these I believe open source

30:55

is going to win the day on the large

30:56

language models and take 90% of the

30:58

token usage and I think the entire

31:00

frontier model space could be undercut

31:02

by open source and I think they realize

31:04

that SLMs the the smaller language

31:06

models that are verticalized now that

31:09

will run on you know, desktops and

31:11

laptops and is even starting to run on

31:13

the top ones. That is their biggest

31:15

competitive threat and I hope it

31:16

happens. All due respect to your

31:18

investments, Brad, I think this

31:20

technology and the interface is uh you

31:22

know, he placed bets, but I I think it's

31:24

imperative that the agent level, which

31:26

is essentially your entire life, you

31:29

don't give that to Anthropic. You don't

31:31

give that to OpenAI. That's your entire

31:33

business, your entire life. It is

31:34

foolish for you, Brad, to give your

31:36

entire business and all the knowledge

31:38

you have to anthropic through that.

31:40

unless you're just doing it to boost

31:41

your um your your investment in those

31:43

companies. But I would be very concerned

31:45

if I was you with putting all of your

31:48

knowledge that you've earned over a

31:49

lifetime into any of these large

31:51

language models.

31:52

>> All right, Jake, let me ask you. Can I

31:54

ask a question? Thank you for that

31:55

impassioned monologue. Um actually, I

31:58

want to ask my TED talk.

31:59

>> I Yes, thank you for that TED talk. Um I

32:02

have a yes no question for each of you.

32:06

>> Do you believe that anthropic has

32:07

dominant market share in coding? right

32:10

now? Yes. No,

32:13

>> no.

32:14

>> In in coding,

32:15

>> yes,

32:15

>> They have the lead, but they're

32:16

not dominating.

32:18

>> I think it's a trillion dollar market,

32:19

and these guys have less than 10% of it

32:21

today. So, it's hard to make a case that

32:24

>> What percent of coding tokens do you

32:26

think that anthropic is providing the

32:28

market right now?

32:29

>> Greater than 50%.

32:30

>> Yeah, that's true.

32:31

>> Okay, that's called dominant market

32:32

share.

32:33

>> Uh, I don't know about that.

32:35

>> More than 50% of the market.

32:36

>> You got to look at

32:37

what the TAM is.

32:39

with the TAM,

32:41

>> right? There are a lot of people who

32:42

provide, you know, that are in this

32:46

tiebreaker before we move on to the

32:47

next.

32:47

>> I'm not saying it's a permanent

32:48

condition,

32:49

>> but if you're telling me that today

32:52

>> Anthropic is delivering over half of the

32:55

coding tokens, that's clearly a dominant

32:58

position in the market for coding. It's

32:59

an early market. It could change, but

33:01

>> if I were representing them, David, I

33:03

would say nine months ago, everybody

33:05

called us, uh, you know, out of the game.

33:08

We were being destroyed by open AI in

33:10

three months. Now people are saying we

33:11

have dominant market position. This is

33:13

the fastest changing most competitive

33:16

market in the world. I think it would be

33:17

very hard-pressed to walk into, you know,

33:20

some district court and make the case that

33:21

these guys have somehow already formed a

33:23

monopoly against Amazon, Google,

33:26

Microsoft, Open AI, etc.

33:29

>> Well, I'm not saying it's a it's already

33:30

a permanent monopoly, but I am just

33:33

asking about market share. And I do

33:34

think you guys all agree that

33:36

>> Chamath, go ahead.

33:37

>> They probably have 50 to 60% market

33:40

share because I think Codex is actually

33:42

quite broadly used as well.

33:45

>> But that belies the more important point

33:47

which is AI enabled coding I think is

33:51

still 5% of the broad market. So it's

33:53

kind of a nothing burger. Yes, they're

33:55

leading but they're leading in something

33:57

that isn't that big yet. Now you would

33:59

say how could it not be big? And what I

34:01

would say is because most of the stuff

34:03

that's being written is still clean-sheet,

34:05

de novo code. And I think the ugly

34:09

truth is I don't care what model you

34:11

have, but the long horizon ability for

34:14

any of these models to actually build

34:16

enterprise-grade software is still

34:20

shit. And that's the actual lived

34:24

experience. Not for me, but when I call

34:26

on our customers, half a trillion dollar

34:29

banks, hundred billion dollar insurance

34:31

companies, none of these guys are like,

34:32

"Wow, it just works out of the box." It

34:34

doesn't work. So, most of it is still

34:38

handtuned. So, until I can honestly tell

34:41

you that we can point a model at this

34:44

with the right guard rails, which I

34:46

can't today, what I would say is it's a

34:48

small market that will become large as

34:52

these models become better.

34:54

But we are in the world where we have 50

34:57

years of accumulated tech debt as a

35:00

world. And I suspect when you enumerate

35:02

the number of lines that that

35:04

represents, it's hundreds of trillions

35:06

of lines of just pretty marginal

35:08

mediocre code to bad code. On top of

35:12

that, we have all these legacy

35:13

languages. I'll tell you one of our

35:15

customers, they have to go and get

35:17

60-year-old pensioners to come into the

35:19

office to interpret COBOL. No, I'm not

35:22

joking. This is a

35:22

>> snowball for trend.

35:24

>> This is a hundred billion dollar a year

35:26

revenue company and that's how they

35:29

solve these problems. It's not that Opus just

35:31

solves it. So I would just keep in

35:34

mind that most of the tech debt in the

35:36

world that exists 99% of it is still

35:39

poorly addressed by these models. We are

35:42

untying this Gordian knot. It's going to

35:44

take decades to do it right. So all the

35:46

breathlessness about all this other

35:48

stuff, I really think it's not where the

35:49

money is. It's not the big time stuff.

35:51

And you can tell me, "Oh yeah, it's

35:53

going to be the future." And I would

35:54

say, "Tell this business that's a

35:56

hundred billion dollars a year of

35:58

revenue and 50 million billing

35:59

relationships that all of a sudden

36:01

you're going to OpenClaw your way to a

36:02

solution." It's not to say

36:05

that you can't have a great chief of

36:07

staff, and not to say you can't do some

36:09

useful stuff and trickery and, you know,

36:11

have a good knowledge base. I'd like

36:13

that, too. But the core things that your

36:17

lived experience sits on today is a mess

36:20

of tech debt that will get very slowly

36:22

replaced. And that's just the reality of

36:24

life.

36:25

>> And there are competitors that are

36:27

extremely disruptive. I'll tell you

36:28

about one. We talked about Bittensor

36:30

TAO on this program a couple weeks ago

36:32

when we had the um Jensen interview. You

36:34

brought it up, actually, Chamath. There's

36:35

a there's a project that's subnet 62.

36:38

It's called Ridges AI. And what they're

36:41

doing is a competitor that is not only

36:44

open- source but anybody can contribute

36:46

to it. They spent about a million

36:48

dollars in TAO rewards, and

36:50

they hit 80% of what Claude 4 is,

36:54

and they did that in under 45 days. The

36:56

way that works is they give rewards for

36:58

people who and they can do this

37:01

anonymously make that coding product

37:03

which is like Codex or Claude Code

37:05

better. that flywheel is racing right

37:09

now with participation in the same way

37:10

Bitcoin is. So you're going to see a lot

37:13

of open-source and these crypto

37:15

open-source combinations and uh anybody

37:18

who's not investigated this, I highly

37:21

recommend you investigate this.

37:23

>> I do think you're right about one

37:24

specific thing. I would put zero,

37:26

literally the probability zero of any

37:29

important company worth anything more

37:31

than a dollar outsourcing

37:34

their production code to an open source

37:36

project. That'll never happen. However,

37:37

what will happen though is when you look

37:40

at the cost of training this 10 trillion

37:43

parameter model on Blackwell and when

37:47

you look in the future let's just say in

37:50

six or nine months that a 15 or 20

37:52

trillion parameter model is going to get

37:54

trained on Vera Rubin I think Jason

37:56

where you are right I have zero and just

37:59

to be clear I have no investments in

38:01

this at all I'm

38:03

>> to be so super clear

38:04

>> I'm just observing because another

38:06

project other than Bit Tensor that

38:07

someone brought up to me is Venice. The

38:09

concept of opensource training and

38:12

orchestration

38:14

is a hugely disruptive idea which is the

38:17

complete orthogonal attack vector to

38:20

this idea that you have to raise tens

38:22

and tens of billions of dollars to train

38:24

your models because if the capital

38:26

markets run out of 10 and 20 billion

38:29

dollar checks to give people the only

38:32

solution is to be totally distributed. I

38:34

tend to agree with you Jason that there

38:35

is going to be at some point a very

38:38

successful open source project for

38:40

pre-training.

38:42

Absolutely. Will there never ever be an

38:45

open-source way where a real company

38:47

that has any skin in the game says here

38:49

guys re-engineer my codebase as an open

38:51

source project. Never going to happen.

38:53

>> Yeah, I I think the coding tools will.

38:55

And if you look at the history of open

38:56

source, Brad, you actually I think had a

38:58

lot of bets in this space. Linux,

39:00

Kubernetes, Apache, Postgress, like

39:03

Terraform, like these open source

39:05

projects are deep inside of enterprises.

39:07

Deep. And we're sitting here 15, 20

39:10

years ago, the same argument was made.

39:12

Nobody will ever adopt these inside the

39:13

enterprise. You got to go with Oracle,

39:15

whatever. And fair enough, many people

39:17

do. But I think this $29 Ridges,

39:22

um, subscription to do this versus $200.

39:24

It's starting to take hold inside of

39:27

startups. And that's where I always look

39:29

at the tip of the spear. Startups love

39:30

to, you know, use open source products.

39:33

I think this could be the next big

39:34

thing. But listen, I I I invest in

39:37

things that have a 90% chance of going

39:39

to zero. So do your own research. No

39:41

crying in the casino.

39:43

>> Can I just make a a final few points? So

39:46

just just quickly so number one is with

39:49

respect to this market for code or code

39:51

tokens whatever you want to call it

39:53

>> it might be 5% today meaning 5% of the

39:56

code is AI-generated versus human

39:56

generated. I think it's going to 95%. I

40:00

mean I bet any amount of money on that

40:02

the only question is when probably over

40:04

the next few years so that's point

40:06

number one point number two is it's

40:08

possible that if you're the early leader

40:11

in coding as a AI model company let's

40:15

say you have 50 to 60% of market share.

40:17

You have the most developers using it.

40:19

Therefore, you have the most access to

40:21

code bases. You might get the most

40:24

training tokens. There is a potential

40:26

flywheel there where you can see the

40:29

early market leader consolidating its

40:30

lead because it's generating the most

40:32

code tokens and it's getting access to

40:34

the most existing code. Now, I'm not

40:36

saying for sure that's going to happen.

40:38

is possible that the other guys catch

40:39

up, but I think there is a possibility

40:42

of a flywheel there and strong, I guess

40:45

you'd call it data scale effects, things

40:47

like that. So, I do believe that the

40:49

market for coding tokens could be

40:51

monopolized. Third, Anthropic's revenue

40:55

run rate, based on what I can tell

40:56

and what's been publicly released, is

40:58

the fastest growing revenue run rate at

41:01

scale that I think we've ever seen. Uh,

41:03

we

41:04

>> perfect segue. It's the next story.

41:05

Okay, maybe

41:06

>> pull up the the tweets. But this thing

41:09

is ramping at a rate we've never seen

41:10

before.

41:12

>> We can get into that in a second. But

41:13

just one last final point

41:15

>> is I think it's pretty clear that where

41:17

we go from here is agents and coding

41:21

gives you a huge step up on agents

41:23

because you know one of the main things

41:25

that agents need to do is is write code

41:27

to be able to enable them to complete

41:29

tasks.

41:30

>> Correct. And so if it is the case that

41:34

coding is this huge market that's going

41:36

to be dominated by one or two companies

41:40

and then that leads to another huge

41:42

market which is agents. My point is just

41:44

I think all these companies need to

41:46

behave in a very clean way

41:49

>> and not engage in tactics that later the

41:52

government might say you know what that

41:53

was anti-competitive. Everyone should

41:55

just I think play fair. Do not engage in

41:57

discrimination against other people's

41:59

products. engage in fair pricing. I'm

42:02

not accusing anyone of breaking any of

42:03

the rules, but what I'm saying is that

42:05

eventually the government's going to

42:07

look at this market with the benefit of

42:09

2020 hindsight and I think everyone

42:11

should just basically, you know, keep it

42:14

>> keep your nose clean.

42:15

>> Keep it tight. Keep it tight.

42:17

>> Keep it tight. Tight is right. I think

42:19

is an excellent point. Let's talk about

42:21

the revenue ramp of Anthropic. This is

42:24

just unprecedented. Anthropic's revenue

42:27

run rate has topped 30 billion with a B.

42:31

Early 2023, they turned on revenue. They

42:33

started charging for API access. End of

42:35

2024, they're at a billion dollar run

42:37

rate. February 2025, they launched Claude

42:40

Code. That was the starter pistol. Mid

42:42

2025, $4 billion run rate. End of 2025,

42:46

$9 billion run rate. Just a couple of

42:49

months later in April, $30 billion run

42:52

rate. Yes, that's right. Triple. Uh and

42:54

the way they did this is enterprise uh

42:57

customers are a major part of the spend.
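Since "run rate" here means annualized monthly revenue, the milestones just quoted imply the monthly figures and growth rate below. The run-rate numbers come from the discussion; the roughly four-month gap between the $9 billion and $30 billion marks is an assumption made only for the growth calculation.

```python
# Convert the quoted run rates into implied monthly revenue and growth.
# Run-rate figures ($B, annualized) are from the discussion.

run_rates = {
    "end of 2024": 1,
    "mid 2025": 4,
    "end of 2025": 9,
    "April": 30,
}

def implied_monthly(run_rate_billions: float) -> float:
    """A run rate is the latest month's revenue times 12."""
    return run_rate_billions / 12

for label, rr in run_rates.items():
    print(f"{label}: ~${implied_monthly(rr):.2f}B/month")

# Compound monthly growth implied by going from a $9B to a $30B
# run rate over an assumed ~4 months:
months = 4
cmgr = (30 / 9) ** (1 / months) - 1
print(f"implied compound monthly growth: {cmgr:.0%}")  # roughly 35%
```

So a $30 billion run rate means a $2.5 billion month, and getting there from $9 billion in about four months implies compounding around 35% month over month.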

43:00

Dario announced a couple of months ago

43:01

that there's over a thousand enterprises

43:04

paying over 1 million annually. This is

43:07

truly mindboggling when you think about

43:09

it because those are the most coveted

43:12

customers in the world. These are the

43:13

big fish that you just uh when people

43:16

are running enterprise software, they

43:18

they dream Slack dreamed of getting

43:20

these million-dollar customers. Uh

43:21

Salesforce dreams of getting these

43:23

million-dollar customers. Brad, you're

43:24

an investor. I guess uh Sam famously on

43:27

BG2 asked you to sell your uh OpenAI

43:30

stock back to him. You didn't. You

43:32

demurred, but you're an investor in both.

43:36

How shocking is it to you to place both

43:39

of those bets and then see one of them

43:42

come from so far behind? You know, Chat

43:44

GPT has 900 million users. I don't know

43:46

if they've they've passed a billion

43:48

officially yet, but they are the Verb,

43:50

right? They're the Uber. They're the

43:52

Xerox. They're the Polaroid of AI, but

43:56

they didn't go after the enterprise.

43:58

Dario made that bet, and it worked. He

44:00

was a co-founder of OpenAI. He left

44:03

and according to the New Yorker story

44:04

that came out from Ronan Farrow this

44:06

week, he basically left because of

44:08

his disgust at working with Sam Altman.

44:13

Your thoughts?

44:14

>> Well, you know, before we go down the

44:15

OpenAI rabbit hole, let's just really

44:17

contextualize like what's going on here.

44:20

You know, I have this

44:21

additional chart. You showed one, you

44:23

know, they added 4 billion of revenue in

44:25

January, 7 billion in February, 11

44:28

billion of annualized run rates, um, or

44:31

10 or 11 billion in March, just to put

44:33

in perspective, that's Databricks plus

44:35

Palantir combined that they added in a

44:38

single month, right? So we started with

44:41

everybody at the start of the year

44:42

wringing their hands, including, you know,

44:45

Gurley and others saying we're in a big

44:46

bubble asking whether the AI revenues

44:49

would show up to justify all of this

44:51

investment and bam you have the largest

44:53

revenue explosion in the history of

44:55

technology. So the company's plans were

44:58

to end the year at about a $30 billion

45:01

exit run rate. They got there by the

45:04

end of March right and I suspect that

45:06

it's continuing in April. So you have to

45:08

ask what's going on and what's the big

45:10

so what the first thing for me is that

45:13

model and product capability just hit

45:15

this threshold we talked about earlier

45:17

near AGI whatever the hell you want to

45:19

call it, and everybody, like Altimeter, said,

45:22

damn this is so good I have to have it

45:24

this is no longer about my IT budget

45:26

this is about labor augmentation and

45:28

labor replacement. And by the way, Cowork

45:31

is growing even faster than Claude Code at

45:35

the same stage of development

45:38

So what it showed is we have a near

45:40

infinite TAM. It turns out that the TAM

45:42

for intelligence is radically different

45:44

than anything that we've seen before.

45:47

And I think the best example of this,

45:49

right? This is millions of

45:51

self-interested parties, consumers,

45:54

enterprises, a thousand now paying over a

45:57

million dollars. Right? It's not that

45:59

there was some great go to market and

46:01

anthropic that all of a sudden, you

46:02

know, they snuck up and blew everybody

46:04

away. No, it was companies demanding the

46:06

product. They're getting throttled on

46:08

the product. Why? Because it's so good.

46:10

It makes them better at their business.

46:12

We are all self-interested actors. And

46:14

when millions of those people are all

46:16

making the same decision, there's a huge

46:18

tell. And the tell here is that the TAM

46:21

is as big as Daario and Sam and others

46:24

have been saying. We knew intelligence

46:26

was going to scale on the exponential.

46:28

The question was whether revenue will

46:30

scale on the exponential, and that's

46:32

what we're seeing. And remember, they're

46:33

doing this with only 1 1/2 to 2 gigawatts

46:37

of compute, right? These guys are

46:39

massively compute constrained. They're

46:41

each going to be adding 3 GW of compute

46:43

this year. And so that will unlock they

46:46

would be growing even faster. But for

46:49

that, and then Jason, to your point

46:50

about the open source models that we all

46:53

want to be a part of this solution, I've

46:55

talked to a lot of big companies, 65 to

46:57

70% of their token consumption is

47:00

open-source models, right? These are

47:02

cheap Chinese and other tokens. So these

47:05

revenue ramps are happening while the

47:07

world is already using open source. This

47:10

is not frontier only. This is Frontier

47:12

plus open source. We're going to see

47:14

massive token optimization over the

47:16

course of the year. But what happens on

47:18

this Jevons paradox is the unit

47:21

cost, right, of intelligence is

47:23

plummeting. Not the cost of tokens. The

47:26

unit cost of intelligence is plummeting

47:28

because the capabilities of these models

47:30

is so much better. I look at what it

47:32

does for Altimeter day in and day out. I

47:34

talked to a major uh company yesterday.

47:37

They're on a run rate to do a hundred

47:39

million of token consumption this year

47:41

on about $5 billion in opex. They think

47:44

that we're now nearing peak employment

47:46

in their company, but that their token

47:48

their intelligence consumption, okay,

47:50

let's not call it token consumption,

47:52

right? because tokens may go up a lot,

47:54

but their intelligence consumption is

47:56

going to go up, you know, a lot. So, I

47:59

would leave you with this. We're early

48:01

to Chamath's point. We have low

48:04

penetration of the global 2000. We have

48:06

low penetration of the use cases. We

48:09

have low penetration of of within the

48:12

use cases that they're already using.

48:14

And the models are only getting better.

48:15

So I think when you look out toward the

48:17

end of the year, I would not be shocked

48:20

if you see Anthropic exiting this year

48:23

at 80 to 100 billion in revenue. And by

48:26

the way, doing it at the same time that

48:28

OpenAI, who is also on the wave, they'll

48:30

be releasing an incredible model in the

48:32

next imminently. They're going to be on

48:35

that wave and you're going to see an

48:36

inflection in their revenues as well.

48:38

>> Okay, Chimath, question one has been

48:41

answered. The question of hey, does this

48:43

stuff actually have utility? that went

48:45

from a question mark to an exclamation

48:46

point. Of course, it's got utility.

48:48

People are getting value from it. And it

48:49

might be variable. Some people get more

48:51

value than others. Number two, the

48:52

revenue ramp was a big question. Now,

48:54

that's turned into an exclamation point.

48:56

The final piece of the puzzle that

48:58

you've brought up many times is can this

48:59

be profitable? And these companies are

49:02

burning through a large amount of cash.

49:05

So, what is your take on when these

49:07

companies can get out of the J curve? We

49:09

talked about this, I think, three

49:10

episodes ago. I estimated like we're

49:12

going to be looking at $400 to $500 billion in

49:15

investment into these data centers at a

49:17

minimum and then they have to climb out

49:20

of that to get to profitability. So what

49:22

are your thoughts on these becoming

49:25

profitable companies? Do you remember

49:27

the

49:29

investor that published this list Jason

49:31

where he put all of the terms you talk

49:35

about when one of the terms you can't

49:36

talk about is profit. It's a list where

49:38

it's like if you can't talk about free

49:41

cash flow, you talk about EBITDA. When

49:43

you can't talk about EBITDA, you talk

49:44

about

49:46

>> margin.

49:47

When you can't talk about that, you talk

49:49

about revenue. And then when you can't

49:50

talk about revenue, you talk about gross

49:53

revenue

49:53

>> bookings.

49:54

>> So you can kind of figure out,

49:58

I think, where we are in any part of any

50:01

cycle by just indexing into what does

50:04

everybody talk about. I think where we

50:07

are is we are between gross revenue and

50:11

net revenue. That's where the discussion

50:12

is.

50:13

>> Okay.

50:14

>> There was another article I think today

50:16

in I think maybe it was the information

50:18

that tried to categorize and distinguish

50:21

that anthropic presents gross, open AI

50:24

presents net. They're different. We

50:27

don't know what the various take rates

50:29

are. So they're saying that there's a

50:32

difference. If it's not true, there's

50:33

been no clarity provided by these

50:35

companies. So, at a minimum, you have

50:37

this confusion where there's the

50:39

breathless talk. Then there's people

50:40

that don't even know the difference

50:41

between actual recognized revenue and

50:44

run rate revenue and how to multi. I

50:46

mean, so we're definitely there, okay?

50:48

We can quibble about the details, but we

50:50

are not at the place where people are

50:51

like, "Oh, here's your steadystate, you

50:53

know, free cash flow margin, and here's

50:55

what your EBITDA does." We're

50:56

years from that. They're gonna

50:58

have token-maxing EBITDA, like EBITDA at

51:02

WeWork.

51:02

>> The thing that we need to understand is

51:04

how gross margin negative is this

51:06

revenue growth.

51:07

>> We don't know that and at least we don't

51:10

as outsiders.

51:11

>> Brad might know.

51:12

>> Brad may know. I will tell you,

51:15

think about this. What are their big

51:16

cost inputs? The number one cost input

51:19

is the cost of compute. Cost of compute.

51:21

>> Right? I just told you they only have a

51:23

gigawatt and a half of compute. and they

51:25

have that gigawatt and a half of compute

51:26

whether they have a billion in revenue

51:29

or whether they have 80 billion in

51:30

revenue. So you might actually expect to

51:33

see these companies their gross margins

51:35

are exploding higher like the fastest

51:37

increase in gross margins I've probably

51:39

seen out of any technology company. So

51:41

this is not gross margin negative you're

51:42

saying?

51:43

>> No definitely not gross margin negative.

51:45

And what I would tell you

51:46

>> so that they must be hugely profitable

51:48

then

51:48

>> well you may see accidental why I call

51:51

it accidental profitability. They may

51:53

not be able to spend this revenue fast

51:55

enough chamath on compute. And remember

51:58

it's only 2500 people. Google crossed

52:01

this revenue threshold when they had

52:03

120,000 people. These guys have 2500

52:07

people. So the only thing you can really

52:08

spend money on, right, is compute. And

52:10

they can't stand up the compute fast

52:12

enough.
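The headcount comparison being made above can be sketched with the figures cited in the conversation (120,000 people vs. 2,500 people). The shared revenue threshold isn't specified, so it's left as a hypothetical variable; the ratio doesn't depend on it.

```python
# Sketch of the revenue-per-employee comparison. The headcounts are the ones
# cited in the conversation; R is a hypothetical shared revenue threshold.

def revenue_per_employee(revenue: float, headcount: int) -> float:
    return revenue / headcount

R = 10e9  # hypothetical revenue level in dollars; the ratio is independent of R

google_then = revenue_per_employee(R, 120_000)
anthropic_now = revenue_per_employee(R, 2_500)

print(anthropic_now / google_then)  # 48x: simply 120,000 / 2,500
```

Whatever the actual threshold, at equal revenue a 2,500-person company is generating roughly 48 times more revenue per head than a 120,000-person one.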

52:13

>> But none of this foots to me then to be

52:15

honest because if you were on a

52:17

threshold of 90% plus gross margin,

52:20

>> I'm not saying it's there. I'm not

52:22

saying it's 90% plus. I'm just saying

52:24

it's gone from meaningfully negative 18

52:27

months ago to, you know, very very

52:29

positive. I've seen rumored out there

52:31

50% is what you're saying. The trend is

52:34

there.

52:34

>> Let me just say this.

52:37

I think if you're an incumbent, you want

52:40

the cost of compute to go down. I think

52:42

if you're not an incumbent, so

52:44

specifically, who do I mean? Meta,

52:47

Google, and SpaceX.

52:51

I think those three people who have all

52:53

three of them, well, sorry, Meta and

52:55

Google have a fortress balance sheet. I

52:57

think by the end of June, SpaceX will

52:59

also have a fortress balance sheet. What

53:02

they will want to do is they will want

53:03

to make this a compute problem because

53:05

they will control the the conditions on

53:07

the field. You already see this today.

53:09

>> Yeah.

53:10

>> Meta's models today, what people's

53:12

general reviews are it's okay, but the

53:15

one thing that people say is it's

53:16

incredibly performant. The model quality

53:19

is okay, but the performance is great,

53:21

which speaks to Meta's huge advantage.

53:23

They have a massive compute

53:24

infrastructure. So if you're if you're

53:25

not OpenAI and Anthropic,

53:28

they'll want to make this a capital

53:29

problem because then they can win it. If

53:31

you're Anthropic and OpenAI, you want

53:33

this thing to be as efficient as

53:35

possible.

53:36

I think where we are is very much in the

53:38

early innings. And we're bumbling around

53:40

talking about gross margins and you know

53:42

revenues. We are not at profitability.

53:44

And what is true for Facebook and what

53:47

was true for Google was irrespective of

53:50

where they got to a billion. Who

53:52

cares? They were profitable by year

53:54

three and they never looked back. I was

53:57

there. I remember it was glorious.

53:59

>> The the cost the cost of building uh you

54:03

know AI

54:05

totally stipulate is radically higher

54:07

than the cost of building retrieval at

54:08

Google, right? Like it's just a

54:10

fundamentally more expensive problem.

54:12

But I will tell you that there's a lot

54:14

of FUD out there about negative gross

54:16

margins. I mean Jason, you started the

54:17

segment by saying they're burning

54:19

through large amounts of cash. I think

54:21

people are going to be shocked at the

54:22

burn how low the burn levels are at

54:24

these companies.

54:25

>> Anthropic or OpenAI?

54:27

>> Yes. And I would say at OpenAI as

54:28

well. Like, you know,

54:30

if they do $50 billion this year again

54:33

just look at the number of people they

54:34

have: revenue per person. The burn is pretty low

54:37

and the inference cost is plummeting.

54:38

Inference cost is down by 90%

54:40

year-over-year. And so just finally I

54:43

want to respond to this point about

54:45

gross versus net, this tweet that

54:48

Chamath was referencing. Okay, so

54:50

there's a certain percentage, a

54:52

smallish percentage of Anthropic's

54:54

revenue, right, that they distribute

54:55

through the hyperscalers and like a lot

54:57

of arrangements, whether it's Snowflake

54:59

or Databricks or others, you pay a

55:00

commission, right, on that. I will

55:04

just tell you that you're talking

55:05

singledigit percentage of total revenue

55:07

of these companies. So the gross versus

55:09

net thing isn't what's being reported.

55:11

Like, the apples-to-apples is pretty

55:13

easy and if you want to be conservative

55:14

on it take down Anthropic's revenue by

55:17

you know five to 10% which you know

55:19

again I don't I think it's better to

55:21

gross up OpenAI's revenue but any way

55:22

you do it I just think it's a

55:24

distraction from what's really what's

55:26

really going on here happy to
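The gross-versus-net normalization described above, taking 5 to 10% off the gross number for channel commissions, is simple arithmetic. A hedged sketch, where the haircut range is the one mentioned in the discussion and the gross figure is made up:

```python
# Illustrative only: normalizing a gross (billings) figure to net of channel
# commissions. The 5-10% haircut range comes from the discussion above; the
# $10B gross figure is hypothetical.

def net_revenue(gross: float, haircut: float) -> float:
    """Apply a commission haircut to a gross revenue figure."""
    return gross * (1 - haircut)

gross = 10_000  # $M, hypothetical

low = net_revenue(gross, 0.10)   # conservative end of the band
high = net_revenue(gross, 0.05)  # aggressive end of the band

print(low, high)  # 9000.0 9500.0
```

The point being made is that a single-digit-percentage haircut doesn't change the order of magnitude of the comparison, so gross-vs-net presentation differences are second-order relative to the growth itself.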

55:27

>> Sacks, you have any thoughts on this uh

55:29

massive revenue ramp

55:31

>> yeah I mean I want to go back to a point

55:33

that Brad made because I think it was

55:35

just really important and I want to just

55:37

underline it consider where we were at

55:39

the beginning of the year and what

55:41

everybody was saying is that AI was a

55:43

big bubble and the evidence they would

55:46

point to was the fact that hundreds of

55:48

billions of dollars was going into capex

55:51

that needed to be spent on these data

55:53

centers and there was no evidence of

55:55

significant revenue to justify that

55:57

spend. Where was the ROI? By the way, as

56:00

an aside, the same doomers who were

56:02

saying that AI was in a bubble were also

56:05

the ones who were saying that AI was so

56:06

powerful it's going to put us all out of

56:08

work and it's going to, you know, take

56:10

over from humanity. I mean, in other

56:12

words, they couldn't decide if AI was

56:14

too powerful or not powerful enough. But

56:17

putting aside that contradiction, they

56:19

clearly were making this case that AI

56:22

was this big bubble and that there'd be

56:23

no payoff or justification for this

56:27

massive capex that's being spent. And I

56:29

think we're starting to see here there

56:31

is justification for it. Uh we're seeing

56:33

it just in this one vertical of AI which

56:36

is coding. We're again seeing the

56:38

fastest revenue growth in history. It's

56:40

utterly unprecedented. And this is just

56:42

one category or vertical of AI. We know

56:47

that agents are coming next and the

56:49

enterprise adoption of that is going to

56:51

be absolutely massive. So, I guess what

56:54

I'm saying is that this is early proof

56:56

for I think the thing that makes Silicon

57:00

Valley special, which is we're willing

57:02

to basically bet on things that just

57:05

intuitively on a gut level we know are

57:08

the next big thing. We're not that

57:09

spreadsheet driven. Actually, Silicon

57:11

Valley believes that if you build it,

57:13

they will come and is willing to finance

57:15

that build out. And that's basically

57:17

what's been happening. Again, just the

57:19

top four hyperscalers, $350 billion of

57:22

expected capex this year on its way, I

57:24

think Jensen said 1 trillion by 2030.

57:27

So, Silicon Valley, whether it's big

57:29

companies, whether it's founders,

57:30

they're always willing to bet on this

57:32

next big thing. They're not like Wall

57:34

Street. They don't need, you know,

57:35

specialist to tell them where to go.

57:38

They know where the technology is going

57:40

and they make their bets based on that.

57:42

And I think that there is going to be a

57:44

big payoff for this. And I think it's

57:47

the thing that's going to make our

57:48

economy and the United States in general

57:51

remain extremely dynamic and in the lead

57:53

on this thing is that we are willing to

57:55

make those kinds of bets. And I think

57:57

it's going to pay off big time.

57:59

>> Yeah, clearly. Hey, um Brad, you didn't

58:02

answer my question about the vibes over

58:04

at OpenAI versus Claude. OpenAI is, um, I

58:09

wouldn't say reeling but there's a lot

58:11

of hand-wringing going on, a lot of

58:13

employees leaving a lot of people who

58:15

are wondering like is our strategy the

58:18

winning strategy of like consumer first

58:20

they shut down Sora you know unwinding

58:23

the Disney deal and really trying to get

58:26

the company focused and it's kind of

58:27

like I mean listen the New Yorker story

58:29

was a bit of a rehash so I don't think

58:30

we have to go into the blow-by-blow

58:32

because we covered it here three years ago

58:35

but the truth is a lot of the great

58:38

founders, co-founders of OpenAI and a

58:40

lot of the great contributors are now at

58:43

Anthropic and other large language

58:45

models. And in the secondary market,

58:48

OpenAI is trading lower than the last

58:50

valuation. And Anthropic is trading

58:53

significantly above the $380 billion. So

58:57

maybe talk a little bit about this

58:58

competition, this Microsoft versus

59:01

Apple, this Google versus Facebook.

59:03

Well, let's let's start with immense

59:06

credit where credit is due. Anthropic

59:08

was literally counted out of the game

59:09

last year. Yeah,

59:10

>> right? And here they come over the last

59:12

12 months and they've kicked

59:14

OpenAI's ass over the last 90 days,

59:17

right? And what did Anthropic do?

59:19

Anthropic made choices. No multimodal,

59:22

no video, no hardware, no chips, no

59:24

building data centers. They said, "We're

59:26

just going to focus on coding and

59:27

co-work. We think that is the path to

59:29

AGI and ASI." They executed

59:32

their butts off. They took the lead.

59:34

2,500 people, tight, pulling on the oar in

59:38

the same direction. But I think you

59:40

would be seriously foolish to count out

59:43

OpenAI, right? And I think we're

59:45

at peak OpenAI FUD. And I'll tell

59:47

you, it starts with great researchers

59:50

and great models. And I think when you

59:51

see the Spud model, they're about ready

59:53

to release. I think it's going to be an

59:55

excellent model. Shows that they're

59:57

firmly on the wave. Um, if you look at

60:00

what's going on with Codex, incredible

60:02

ramp on Codex, fastest ramping model

60:05

with 5.4, I think 5.5 or Spud, whatever

60:08

we're going to call, it's going to be an

60:09

even faster ramp.

60:10

>> Have you seen Spud? Have you used it?

60:12

Have you gotten a preview?

60:13

>> People are using Spud, right? So, it it

60:16

is being previewed and so

60:17

>> So, you're talking to people who've used

60:19

it and what are they telling you?

60:21

>> They're telling us that it's an

60:22

incredible model on par with Mythos,

60:24

right? and that it's a a very usable

60:27

model in terms of um how it's packaged.

60:30

I will say that back to David's point

60:34

now this is the most important point I

60:35

think anybody can take away here.

60:38

This is not zero sum. The TAM of

60:42

intelligence is dramatically larger than

60:44

any TAM we've ever seen in our investing

60:47

careers over the last two decades.

60:48

Right? And if you're on the wave, which

60:50

OpenAI is, you are going to be selling

60:53

into the world's biggest TAM, they are

60:55

going to build a very big company. I'm a

60:58

buyer of the shares today.

60:59

Notwithstanding all of the vibes that

61:01

you describe, I think these companies

61:03

are firmly on the wave. They are jarred.

61:07

They are sitting there saying, "What did

61:08

we do wrong? And how do we get our mojo

61:10

back?" They want to compete. It is

61:12

embarrassing to people on the research

61:14

team and the product team over there.

61:15

So, I'm not saying there's not a real

61:18

awakening occurring there, but I think

61:20

that's what the case is. And by the way,

61:22

to Chamath's point, do not count out

61:24

Meta, right? I think Meta is absolutely

61:26

in this game. Google is absolutely in

61:28

this game. Elon is absolutely in this

61:30

game. And if you're

61:31

>> got some stuff dropping shortly that's

61:33

going to be very impressive.

61:34

>> If you're on team America, the fact that

61:36

we have five frontier models competing

61:39

against each other and David made sure

61:41

they weren't throttled by excessive

61:43

government regulation. We have Mythos

61:45

come out. It's a self-imposed safe

61:48

harbor, you know, to harden our system.

61:50

It wasn't a call for moratoriums or

61:52

getting the government involved. We have

61:54

the type of competition that's causing

61:55

us to accelerate our lead against the

61:58

rest of the world. We can't take our eye

62:00

off the prize. We got to stop

62:01

adversarial distillation and we need to

62:03

make sure that we're distributing our

62:05

products around the world. But I view

62:07

this as really good for team America.

62:09

>> Well said. And here is your Polymarket

62:12

IPOs before 2027. Obviously SpaceX at

62:16

95%, uh, Cerebras at 94%, and uh, hey, number

62:21

five on this list 51% chance that

62:24

Anthropic goes out before the end of the

62:26

year. 44% chance that OpenAI comes out

62:29

before then. All right here is the

62:32

closing market cap for Anthropic on

62:36

Polymarket, only $158,000 in volume. So,

62:39

Chamath, when you put in 400K, you're

62:42

going to really tilt this market.

62:44

78% chance that it's above 600 billion,

62:48

19% chance that it doesn't go out. So,

62:51

it's looking like this will be a decent

62:53

investment for you. Brad, what valuation

62:55

did you get into Anthropic at?

62:57

>> We first invested in I believe it was

62:59

the

63:01

uh $130 or $150 billion round.

63:04

>> So, this will be a 7x, 5x for Altimeter

63:07

LPs. Congratulations. I mean, no,

63:09

listen. I I I again, there are lots of

63:11

people who were there before us and who

63:13

are on the board and who are going to do

63:14

much better than that. What'd you put in?

63:16

50.

63:17

>> What'd you put in?

63:18

>> No, we've got billions in both

63:19

companies. Uh

63:20

>> billions in both companies. Oh my lord.

63:24

>> I think there's this existential thing

63:26

going on in venture today. David could

63:29

talk about it as well. I mean, people

63:31

are extraordinarily nervous

63:33

about it. You look at the IGV stock index

63:36

down 30% year to date, down 5% today, all

63:40

software stocks plummeting right venture

63:43

capitalists are terrified to invest

63:46

money in anything other than these

63:48

frontier models and things like SpaceX

63:51

or military modernization finding

63:53

something that's out of harm's way of AI

63:56

right where you can count on the

63:57

terminal value, per Chamath's insights over

64:00

the last few weeks is very difficult to

64:02

do. That's why you see this crowding.

64:03

So, we've taken a barbell approach,

64:05

right? We've got a lot in what we think

64:07

are the most important companies that

64:09

are on the frontier and then we're

64:10

betting on really small teams that

64:12

we think have very defensible businesses

64:15

in a world of uh you know, AGI. But it's

64:18

>> what happens to all these enterprise

64:19

software companies? Do they become PE

64:21

takeouts? Do they get consolidated? um

64:24

or do they just have to adopt these AI

64:27

technologies and and and solve this

64:30

problem of hey the frontier model is

64:32

just going to solve for whatever these

64:34

niche software companies do.

64:36

>> I think the market's probably being a

64:38

little too pessimistic with respect to

64:40

at least some of these software

64:41

companies. I mean, obviously, there's

64:42

going to be big differences in the

64:45

quality of the moats of these companies.

64:48

And so, look, software is going to be a

64:51

lot cheaper and easier to generate, but

64:53

I'm not sure that was the competitive

64:55

advantage of a lot of these companies.

64:57

So, there's probably a little bit of the

64:59

baby being thrown out with the bathwater

65:00

right now, and there probably are some

65:02

value buys in enterprise software. I

65:05

think the interesting question here and

65:06

we've been talking about this for a

65:08

couple of years in the pod is just where

65:10

you see the AI value capture being in

65:13

terms of layer of the stack. Remember

65:15

where we started it was really just the

65:17

chip layer of the stack was where all

65:19

the value capture was. It was basically

65:20

Nvidia was the first company to be worth

65:23

multiple trillions of dollars because of

65:25

AI. And for a while it looked like

65:27

that's where all the value capture was

65:30

going to be because OpenAI for example

65:31

was losing so much money and Anthropic

65:33

wasn't on the radar as much. Now we're

65:36

seeing wait a second um you know it's

65:38

not just the chip companies it's also

65:40

the hyperscalers are now benefiting and

65:42

now we're seeing at the model layer it

65:44

looks like Anthropic and OpenAI, they're

65:47

all going to be huge beneficiaries. I

65:49

think the next question is at the

65:50

application layer of the stack. Okay.

65:52

Well, now does all that value capture

65:54

just get eaten by the model companies or

65:56

are there applications that get

65:58

turbocharged? I guess you could say that

66:00

Palantir is already one of them, right?

66:02

It's an application company that's been

66:04

turbocharged by these model

66:06

capabilities. Who else will be a big

66:08

beneficiary? Is it again, is it all

66:10

going to be at the model layer or will

66:12

you see an explosion of value at the

66:14

application layer? I'm hoping obviously

66:16

that it'll be at all layers of the

66:18

stack, with beneficiaries at each. But to me,

66:21

that's a really interesting question

66:22

right now.

66:23

>> Yeah. What happens to Salesforce,

66:24

HubSpot, you know, Oracle, right down

66:26

the line? David, uh, Chamath, your

66:28

thoughts here, uh, on the the layers

66:31

here and where the value is captured.

66:34

>> It's too early to tell.

66:35

>> Too early to tell, right? And energy we

66:37

kind of put into sort of data center as

66:40

well, but that's obviously been a clear

66:41

winner. Little housekeeping here.

66:43

Liquidity, put a little Tiffany in here.

66:45

uh, per producer Nick, it is sold out. There's

66:49

a wait list of hundreds of people, but

66:51

it is what it is, folks. If you snooze,

66:53

you lose and top tier speakers are

66:56

coming. Uh it's going to be great. We'll

66:58

get an update. But I think, Brad,

67:00

you're going to be joining us again.

67:01

Yes. For liquidity.

67:02

>> I have an update.

67:03

>> That's probably not your headliner,

67:05

though. I'm probably not your headliner.

67:06

>> No, but you always score so high. Every

67:08

event you've spoken at, you've been

67:09

either number one, two, I don't think

67:11

you've ever dropped to three. Go ahead,

67:14

Chamath. Make your announcement here.

67:17

>> Nat sent me an article from Wikipedia

67:19

about penile length when you guys are

67:21

talking about

67:21

>> breaking news.

67:22

>> Showing me showing me that I'm in the

67:24

large category. Top 5%. She highlighted

67:27

it.

67:27

>> Top 5%. Okay. And that's with Is that

67:30

with Nano Banana or without? Is that

67:34

>> She just texted dummy. It's clogged. My

67:35

apologies. Clogged.

67:36

>> Oh.

67:38

>> All right. This is why Chamath isn't

67:40

afraid of the cyber is because nothing's

67:42

going to come out that's more

67:42

embarrassing than what he says himself

67:44

on the box.

67:44

>> He's like Bezos. When Bezos got hacked,

67:46

he's like, "Guys, I got hacked."

67:49

>> SO, I saw the agenda for this thing.

67:51

It's incredible. Congrats to you guys. I

67:52

mean, like the uh like just the fun of

67:55

being in Napa, all the poker, all the

67:57

the dining experience. This is five star

68:00

all the It looks really

68:01

>> six-star. It's a man level because

68:03

Chamath

68:04

>> was, I dare I say, belligerent in his

68:08

demands. He said, "This has to be

68:10

six-star or I will not show up, Jay

68:12

Cal." I said, "Okay, boss, get to work."

68:15

And uh, Chamath, what do you got? No

68:17

mids. This is all elite. And for the

68:20

hundreds of people who are on the wait

68:21

list, I am sorry, but we have a capacity

68:22

issue. We'll try to get you in for next

68:24

year. But Chim, give us some updates

68:26

here. You have any updates that you want

68:27

to share? because you are running

68:28

programming for Liquidity 2026 up in

68:31

Yon.

68:32

>> Look, it's going really well.

68:35

Really excited to hear all of these

68:36

great folks speak. I think the next two

68:38

will release today. Brad Gerstner and

68:42

Thomas Laffont of Coatue

68:44

>> Of Coatue. That's a great get.

68:46

>> We also have I think three people

68:47

confirmed for their best ideas pitch.

68:49

Really interesting folks. They each run

68:52

between one and six or seven billion

68:55

>> awesome

68:56

>> superstar compounders early in their

68:58

career.

68:59

>> This is a new zone chamat.

69:00

>> It's great. So right now we have Bill

69:02

Ackman, we have Andrej Karpathy, we have

69:05

Dan Loeb, we have Thomas Laffont, we have

69:07

Brad Gerstner, we have Sarah Frier and

69:10

more to come. We will announce more.

69:12

>> There might be one or two surprises. Jay

69:13

Cal

69:14

>> and a couple and a couple of surprises.

69:17

>> Yeah, we we don't announce all the

69:18

speakers. Jay Cal's got a couple of

69:20

surprises coming. And if you didn't get

69:22

in to liquidity, apologies. You're on

69:25

the wait list. We are going to be

69:26

hosting the fifth

69:30

annual all-in summit in Los Angeles

69:33

September 13th to the 15th. Sacks, you

69:36

going to come to that?

69:37

>> Allin.com/events.

69:40

>> Sacks, you should come to that.

69:41

>> I've been advised that I can attend

69:43

for business. I can be in the state for

69:44

business reasons.

69:46

>> Okay, there you go. Then we'll see you

69:47

at liquidity and the summit. Correct.

69:49

That's that's big news. Now we just got

69:51

a bunch of Sacks stans who are racing.

69:53

Uh and now we're going to get Sacks at

69:55

This is what happens every year behind

69:57

the scenes.

69:58

>> Sachs at the last minute says, "Oh, I

70:00

have four speakers and I have 72 people

70:02

who need tickets and then the whole team

70:04

has to like do a fire drill 48 hours

70:07

before the event." Okay, here we go,

70:08

guys. We're going to go to the third

70:09

rail here. We got to catch up on the

70:11

Iran war. Here's the latest: a two-week

70:14

ceasefire started just two

70:17

days ago at the taping of this. VP JD

70:20

Vance, friend of the pod, and some

70:23

special consultants, Witkoff, and friend of

70:25

the pod Jared Kushner are headed to

70:28

Islamabad, the capital of Pakistan for

70:31

talks this very weekend. So while you're

70:33

listening to this, they are going

70:34

to be working on the peace deal. Easter

70:36

Sunday, Trump posted a truth stating,

70:39

"Open the Strait, you crazy

70:42

bastards, or you're going to be living

70:43

in hell. Just watch." Praise be to

70:45

Allah. On Tuesday morning, Trump posted

70:48

uh another threat on social media. A

70:50

whole civilization will die tonight.

70:53

Never to be brought back again. I don't

70:55

want that to happen, but it probably

70:56

will. The tweets were obviously discussed

70:59

a lot over the last week. He gave him an

71:01

8:00 p.m. deadline.

71:03

At 6:30 p.m. POTUS announced on Truth

71:05

Social that he had agreed. President

71:08

Trump had agreed to a two-week ceasefire

71:11

if Iran opens the Strait. He also

71:13

said, "Hey, listen. We got the Strait.

71:15

Maybe there'll be a toll booth, but

71:16

we'll take the majority of the toll and

71:18

we'll split it with Iran." Here's the

71:20

quote. We received a 10-point proposal

71:22

from Iran, and we believe it's a

71:24

workable basis on which

71:28

to negotiate. And apparently Netanyahu

71:31

took the ceasefire to mean "level Lebanon,"

71:33

dropping 160 bombs in 10 minutes

71:36

yesterday. Sacks, uh, you were out last

71:38

week. Everybody wants to know your

71:39

position on the war. I'll hand it off to

71:42

you. What are your thoughts on how on

71:44

the two ceasefire and everything that's

71:45

occurred up until this point?

71:47

>> Well, look, I have to preface what I'm

71:49

about to say, which is I'm not part of

71:52

the foreign policy team at the White

71:54

House. And the last time I commented on

71:57

the war on this show, it somehow made

71:59

international headlines that Trump

72:01

advisor says XYZ.

72:05

And I'm not a Trump adviser on this

72:07

issue. I think that'd be a fair headline

72:09

to write if it was a technology issue,

72:11

but this is not. So whatever I say is

72:14

just my personal opinion, but then the

72:16

media is going to somehow portray it or

72:17

attribute it

72:18

>> to the White House or try and create an

72:20

issue out of it. So, I feel like I'm

72:22

limited in what I can say except that to

72:25

say that I think it's terrific that we

72:28

have the ceasefire. I think it's great

72:30

that there's going to be this meeting in

72:33

Islamabad to hammer it out. And I think

72:37

what the president's accomplished so far

72:38

with the ceasefire is it's a great thing

72:41

because what happens with these wars is

72:43

they take on a life of their own,

72:45

meaning they tend to go up the

72:47

escalation ladder, right? And there's a

72:48

lot of podcasts that are discussing the

72:50

so-called escalation trap and supposedly

72:52

there are stages to this based on

72:54

historical patterns. And so I think it's

72:56

actually very hard to pull out of these

72:58

things and I give the president

72:59

tremendous credit for negotiating the

73:02

ceasefire that we've achieved so far and

73:04

then sending the team to hopefully work

73:06

this out.

73:07

>> Brad, actually my first trip to the

73:08

Middle East was when you and I uh maybe

73:10

four years ago when Thank you for taking

73:12

me. What is your take on where we're at

73:14

here? I think we're just wrapped up week

73:16

six of this and we're going into week

73:17

seven.

73:17

>> First, on March 4th, I tweeted the Trump

73:20

doctrine on Iran: massively destroy all

73:23

military capabilities. Kill the

73:25

people building lethal weapons to use

73:27

against us and get out. Reserve the

73:29

right to do it again if needed. Zero

73:31

efforts to build Misonian democracy.

73:33

Iran's going to have to build what comes

73:35

next. And I think what the market has

73:38

said right if you look back at last year

73:40

on tariffs, Jason, the top-to-bottom

73:42

drawdown was about 15% on the NASDAQ;

73:46

intraday it was down 22%.

73:48

Okay, the draw down in this period over

73:51

Iran was only down about 5 to 7% on S&P

73:55

and NASDAQ, right? So, the market has

73:58

said, listen, we trust Trump at his

74:01

word. He said he's not going to get

74:02

into an entangled war here. I think he

74:05

terrifies the hell out of people with

74:06

his tweets about, you know, destroying

74:08

civilization and all this other stuff.

74:10

But I think people, even though they

74:12

don't like to hear it, they've resolved

74:14

for themselves that when he says he's

74:16

going to get out, he will in fact get

74:17

out. Of course, there was a lot of

74:18

hand-wringing, but if you look at the markets

74:20

today, we basically bounced all the way

74:22

back from where we were pre-Iran on both

74:26

the S&P and the NASDAQ. If in fact we

74:29

land the plane, if JD lands the plane,

74:31

and by the way, on Lebanon, yes, they

74:33

were bombing yesterday, but Netanyahu

74:34

has now said that you're going to have

74:36

direct government talks between Israel

74:37

and Lebanon. So, if the if we land the

74:41

plane on these two things, I think it's

74:42

off to the races in the market. By the

74:44

way, while everybody's focused on Iran,

74:47

stay tuned. I think we're getting close

74:48

to a deal on Ukraine, Russia, right?

74:51

Venezuela is, you know, kind of going

74:53

seemingly very well. I think there's

74:55

also going to be news on Cuba. You could

74:58

envision a world there's risk to the

74:59

downside. Certainly, I will stipulate,

75:02

but you also have to pay attention to

75:03

the risk to the upside. If you land the

75:05

plane on those things heading into

75:07

America 250 July 4th, the market could

75:10

really take off.

75:11

>> All right. Well, let's uh maybe uplevel

75:13

this a little bit and talk about why

75:15

we're in this war to begin with. And

75:17

that's the big discussion amongst both

75:19

sides of the aisle. On Tuesday, the New

75:20

York Times dropped an inside-the-room

75:23

piece on how President Trump made the

75:26

decision

75:28

according to this report, if it's true.

75:30

I know some people don't uh subscribe to

75:32

the New York Times anymore or think it's

75:33

fake news, but how Trump decided to

75:36

basically follow Netanyahu into this

75:38

war. On February 11th, Netanyahu met

75:40

with Trump at the White House where he

75:41

gave him a four-part pitch on attacking

75:44

Iran. JD Vance, according to the story,

75:46

if it's true, disclaimer, disclaimer,

75:48

warned Trump that the war could cause

75:50

regional chaos and break apart Trump's

75:53

MAGA 2.0, the Trump 2.0 coalition we

75:56

talked about here, the big tent. And

75:57

that's turned out actually to be true.

75:59

There's been a bunch of hand-wringing

76:00

from Megyn Kelly, Tucker Carlson,

76:02

right on down the line. Rubio was

76:04

anti-regime change, but he was largely

76:07

ambivalent, according to this story

76:08

about the bombing campaign. Susie Wiles,

76:12

chief of staff, said she had concerns

76:13

about gas prices before the midterms.

76:15

Pretty good uh advice there. And General

76:18

Dan Kaine, chairman of the Joint Chiefs

76:20

of Staff, said this of Netanyahu's

76:23

pitch. Quote, "Sir, this is, in my

76:24

experience, standard operating procedure

76:26

for the Israelis. They oversell and

76:28

their plans are not always

76:30

well-developed. They know they need us

76:31

and that's why they're hard selling. If

76:33

you put this together with Rubio's

76:35

walked back comments at the start of the

76:38

war, we knew, this is quote from Rubio,

76:41

we knew there was going to be an Israeli

76:44

action. We knew that would precipitate

76:47

an attack against American forces and

76:49

that's why we did it. I had Josh Shapiro

76:52

on the All-In interview show and um uh

76:56

he talked a lot about this. There is a

76:58

big underpinning here, Chamath, that the

77:01

United States foreign policy is being

77:03

driven by Netanyahu. Every Jewish

77:07

American person I've talked to feels

77:08

Netanyahu is not doing Jewish American

77:11

and the Jewish diaspora any

77:13

favors here by his approach to these

77:15

wars. What are your thoughts on why we

77:17

got into this and how we get out of it?

77:22

>> I mean, the person that decides is the

77:24

president of the United States. some

77:26

foreign leader isn't

77:29

getting to call the shots in the United

77:31

States. I think very practically

77:33

speaking,

77:35

the markets are effectively

77:38

pricing in that this was a small blip

77:42

for whatever people think. That's just

77:44

what the best prediction market that we

77:47

have is telling us. I think that's

77:48

important to acknowledge that we're

77:51

probably in the endgame here. And the

77:53

second thing to acknowledge is if I was

77:55

Israel, I would really be concerned

77:59

that unless I help find an offramp

78:02

quickly, the risk that Israel loses

78:05

America as a predictably steadfast ally

78:08

could go down. And I think that that's

78:09

problematic for Israel

78:11

>> far more than is problematic for the

78:12

United States.

78:13

>> So all of that kind of tells me that we

78:16

will find an offramp. A because I think

78:19

economically it makes sense and then B

78:22

geopolitically I think Israel will want

78:23

to make sure that this doesn't burn

78:26

a long-standing relationship.

78:29

>> Yeah, that seems to me to be the

78:32

major issue here is Americans basically

78:34

do not want to be in this war. Americans

78:37

do not want our foreign policy being

78:39

influenced to the extent they believe.

78:42

I'm not putting my belief in here.

78:43

Americans believe we are being dragged

78:46

into this by Israel and that Israel has

78:48

too much or Netanyahu specifically has

78:50

far too much influence. And then people

78:52

connect that to the anti-semitism that's

78:53

occurring here. Josh Shapiro gave me a

78:55

lot of push back on this. Uh but all the

78:58

Jewish Americans I talked to say

78:59

Netanyahu, with his actions in

79:02

Gaza, Lebanon, Iran. Uh he's gone too

79:05

far and it's causing the anti-semitism

79:07

we're experiencing uh today. So you can

79:09

make your own decisions about that. Any

79:11

final thoughts here, Brad, on

79:14

the American foreign policy being

79:16

influenced too much by Israel?

79:18

>> No, it's the discussion. I

79:20

>> I mean, listen, um kind of like Sax said

79:24

earlier, um I think that we will

79:27

ultimately be judged by the outcomes,

79:31

right? And that everybody is an armchair

79:33

pundit today on, you know, uh the the

79:37

the approach that we're taking in these

79:39

two different places. I think we could

79:41

be on the verge of a massive

79:44

transformation of the Gulf States. You

79:46

went there with me, Jason. Saudi,

79:48

Qataris, Kuwaitis, Emiratis. I've talked

79:50

to a lot of them this week. I think

79:52

they're very hopeful and optimistic. I

79:54

think you could bring Iran into the

79:56

fold. But listen, I'm an optimist on all

79:58

of this stuff. I I just want to remind

80:00

people, doing nothing in Iran had

80:04

tremendous risks. Doing nothing in

80:07

Venezuela had tremendous risks. So, it's

80:10

not as though this was uh, you know,

80:13

something that I think wasn't well

80:16

calculated, but I think we have to let

80:19

the cards be played and then let

80:21

history be the judge. But I think

80:22

there's uh a risk in both directions,

80:25

but I'm going to remain optimistic.

80:26

>> All right. You uh said in the Gaza

80:28

situation, we should have a wide berth

80:30

for criticism of Israel and Netanyahu.

80:32

What are your thoughts on this belief

80:35

here in the United States now in this

80:37

discussion that Israel is having far too

80:38

much influence over the United States

80:40

foreign policy?

80:42

Well, I noticed in my feed today that

80:44

Naftali Bennett, who's a major Israeli

80:47

politician who was a former prime

80:49

minister, tweeted polling that showed

80:52

that Israel was becoming very unpopular

80:55

in the US and he was expressing concern

80:57

about that and expressing

81:01

the need to to basically address that or

81:03

fix that. So, I think you're starting to

81:05

see Israeli politicians raising that as

81:08

an issue. And I think that's probably a

81:11

good thing. Yeah, there it is. And it's

81:14

really cool actually how X now just

81:16

automatically translates things from

81:18

foreign languages, in this case, Hebrew,

81:20

and it puts it in your feed. So, yeah.

81:22

So, here's Naftali Bennett, former prime

81:23

minister, saying, "This is a serious

81:24

situation. There's a lot of work ahead

81:26

of us to fix everything." Now,

81:29

obviously, this is not Netanyahu. this

81:31

is one of his um political opponents.

81:33

But

81:34

yeah, I mean this is something for

81:36

Israel to consider and think about and I

81:39

think that they would improve their

81:41

popularity uh if they got behind the

81:43

ceasefire and I have no indication that

81:45

they won't but that would certainly be a

81:48

good place to start. I have to say just

81:50

as an aside, this auto translate feature

81:52

has done more for understanding across

81:56

borders than anything I've ever seen.

81:58

And it is the most impressive tech

82:02

feature I've seen released in years.

82:04

Putting AI and large language models

82:06

aside for people who don't know what's

82:08

happening because of Grok being really

82:11

good at doing auto translate. They've

82:12

taken the pockets of the best of what's

82:16

happening in Japan, what's happening in

82:17

Israel, what's happening in France, and

82:19

they're surfacing it auto translated.

82:21

Then when you reply as an American to

82:23

somebody in Japan, they see it auto-translated

82:26

as well, which has led to people who

82:28

don't speak the same language engaging

82:31

on X in a very nuanced, fun, interesting

82:36

way. And that, as a truth mechanism,

82:40

is just absolutely extraordinary. I

82:41

think this is going to have such a

82:42

profound effect. Maybe Elon and the X

82:45

team should get like a Nobel Peace Prize

82:47

award for this. I think it's going to

82:48

change. I mean, I hate to be hyperbolic,

82:50

but have you been using this feature,

82:52

Chamath? Has it been coming up in your

82:53

feed? And which language is up in your

82:56

feed right now?

82:58

>> English. Okay. So, you're not part of

83:01

the translation thing. Brad, has this

83:02

hit your feed yet? And and which regions

83:04

are you?

83:05

>> Definitely. Definitely see it in on the

83:07

Middle East stuff. Um, and uh, you know,

83:10

I've seen it on Chinese, I've seen it on

83:12

the Russian, Japanese

83:14

>> super helpful.

83:14

>> Let me tell you, based Japanese is a

83:16

whole other level of beast.

83:18

>> Whoa. Man, based Japanese makes like

83:22

Fuentes and Alex Jones seem tame. They're

83:25

like, look at this group of people.

83:27

Insert whatever group of immigrants you

83:29

like. And they're like, this is

83:30

unacceptable behavior. This is not

83:32

Japanese culture. These people need to

83:34

get the hell out of Japan. It is

83:38

wild, folks. And if you don't have an X

83:40

account, you are missing out. Go to

83:42

X.com and sign up for this reason only

83:45

because you think about the velocity.

83:47

Like journalists are not even taking the

83:48

time to translate and cover what's going

83:50

on in those areas. And this is happening

83:52

automatically in real time.

83:55

>> So you start thinking about what

83:56

happened in Ukraine. If you had people

83:58

Russia and Ukraine doing this and having

84:00

conversations with each other, it would

84:02

be wild.

84:03

>> You're like a such a good hype man. The

84:05

problem is you hype buttered bread the

84:07

same way you hype a nuclear reactor. And

84:08

so it's hard to really tell, you know,

84:10

what you're really hyping because your

84:12

level of excitement, the intonation is

84:14

exactly the same.

84:15

>> Yo, man, there's nothing better than a

84:17

slice of great toast. I mean, this is,

84:18

in a way, like sliced

84:21

bread. It's very simple, but it is so

84:23

powerful in the experience.

84:25

>> This has been

84:26

>> It is true. X is better today than it's

84:28

ever been. And remember, they have 70%

84:30

fewer employees than they had the day

84:33

Elon walked into the building. And so if

84:35

there were ever a debate

84:37

>> about this, like, and I remember

84:38

everybody saying, "Oh, it's going to tip

84:40

over. Oh, it's going to be a crappy

84:41

experience."

84:43

>> The fact of the matter is, here we are a

84:45

few years later, 70% fewer employees,

84:48

and every other company in Silicon

84:49

Valley is looking at that. I think for a

84:51

lot of these tech companies, we've hit

84:53

peak employment. We're going to create a

84:55

tremendous number of new jobs, but for

84:58

the existing jobs, these companies are

85:00

all realizing they can do more with

85:01

less.

85:01

>> Nikita Bier just tweeted that they're

85:03

about to go ham on these bot accounts

85:06

that auto reply.

85:07

>> Yes.

85:08

>> Those those literally ruined my feed.

85:10

>> That's why I went to subscriber mode in

85:12

my replies and it's it's worked out

85:13

great.

85:14

>> Yeah. No, shout out to him and um to

85:16

Chris Sacca who was in tears at what

85:18

happened to Twitter. You

85:20

>> It's gonna be okay, Chris. Sorry. No

85:23

more tears.

85:24

You only let subscribers respond to your

85:26

tweets. I

85:26

>> I do 50/50. Sometimes I'll just let it

85:29

rip and get chaos. And then other times

85:31

I have 2,000 paid subscribers. I give

85:33

all the money to charity, like 30 grand

85:34

a year. And it's just wonderful to get

85:36

to know the same 2,000 people out of my

85:38

million followers. It's kind of like

85:40

having this little subset. So sometimes

85:41

I'm like, I don't have time to deal with

85:43

a hundred or 200 or 300 replies.

85:45

>> You have a million. That's incredible. I

85:48

mean, it's just

85:49

>> I mean, you have two million. I think

85:50

Sax must have a million, right? You have

85:52

a million, right, Sax? Only only

85:53

>> Brad. How many you have now? You're

85:54

getting popular.

85:56

>> You built a couple.

85:57

>> Got a couple hundred.

85:59

>> What's your Oh, your alt cap, @altcap.

86:01

>> I'm at 1.4 million. What are you at,

86:03

Jacob? Have I surpassed you?

86:06

>> I think you have. I'm like 1.1.

86:07

>> What would it cost me to get my real name,

86:09

Jason?

86:10

>> Uh, I know a guy. Find out.

86:13

>> You're 1.1. Yeah, I made it to 1.4. I

86:15

don't know how that happened exactly.

86:17

>> Just having the number one podcast in

86:19

the world. Uh, another amazing episode

86:22

of the number one pod. And Chamath has

86:26

two million, but that's only because he

86:28

engages he has just incredible moments

86:32

of uh engaging with his haters. Oh my

86:35

god, the replies that Chamath

86:38

sometimes drops are so great. I love when

86:40

Chamath goes

86:41

>> I light them up. I light them up.

86:43

>> He lights them up. Then you had somebody

86:45

who was like, "Oh my god, I was in the

86:47

casino and you told me to bet black, so

86:48

I bet black and I lost

86:50

my money." And so you're responsible and

86:53

then you paid for the kids' college. He

86:54

has two young girls and so I funded

86:57

their college accounts.

86:58

>> I thought that was hilarious. Just as

87:00

>> obviously I'm very happy for him and his

87:02

two daughters. I'm even more happy at

87:04

how much it'll anger all these other

87:06

goofball dorks living in their mom's

87:08

basement.

87:09

>> Yes.

87:09

>> Who literally have no take. They take

87:11

no responsibility for their lives.

87:14

And uh they should enjoy those Hot

87:16

Pockets. By the way, for those folks in

87:17

their mom's basement, the Hot Pockets

87:19

and the Fish Sticks are ready and you

87:21

get one more hour of Xbox from mom.

87:23

>> All right, listen. We missed you,

87:24

Freeberg, but this is the best episode

87:26

in two years. Uh

87:29

>> Freeird at the end of the show.

87:32

>> And we will see you all at the Liquidity

87:34

Summit, except for the 400 people on the

87:36

wait list who aren't going to get in.

87:37

>> We got an email from the guys at Athena

87:39

because we were just

87:40

>> Oh my god. the they they're they're

87:43

going to hire like 500 new Athena

87:45

assistants.

87:46

>> Yes, they had a thousand people after

87:48

last week when we mentioned how much we

87:50

love Athena.

87:51

>> Go to Athena.com.

87:52

>> But that's amazing. Those are like 500

87:54

hardworking men and women who are like

87:56

working

87:57

>> in the Philippines.

87:59

>> Sax have great jobs.

88:00

>> Sax, I'm going to get you a couple

88:01

Athena assistants as a birthday present.

88:03

That's what I'm going to get.

88:04

>> You're going to love this, Sax. Athena

88:06

assistants are the best. Congratulations

88:08

to my friends over there. All right,

88:10

everybody. We'll see you next time. Love

88:11

you boys on

88:12

>> tonight favorite podcast.

88:19

>> Let your winners ride.

88:21

>> Rain Man, David Sacks

88:26

and

88:26

>> we open source it to the fans and

88:28

they've just gone crazy with it.

88:30

>> Love you. Queen of

88:33

Quinoa.

88:39

>> Besties are gone.

88:41

That is my dog taking out your

88:43

driveways.

88:46

>> Oh man, my appetiter will meet.

88:49

>> We should all just get a room and just

88:50

have one big huge orgy cuz they're all

88:52

just useless. It's like this like sexual

88:54

tension that we just need to release

88:55

somehow.

89:02

>> That's going to be good. We need to get

89:03

merch.

89:12

I'm going all in.

Interactive Summary

In this episode of the All-In Podcast, the besties are joined by guest Brad Gerstner to discuss Anthropic's decision to withhold its 'Mythos' model due to cybersecurity risks. They analyze the unprecedented revenue growth of Anthropic, reaching a $30 billion run rate, and debate the sustainability of AI margins. The group also covers geopolitical developments including a ceasefire in the Iran conflict, the influence of Netanyahu on US foreign policy, and the impact of X's new auto-translation feature on global discourse.
