
$380 Billion Gone? Anthropic Refuses to Bend the Knee


Transcript


0:00

Dude, we are living in the Idiocracy version of the AI timeline. Trump ordered agencies to stop using Anthropic with a six-month ramp-down period, and Hegseth has moved to label it a supply chain risk. This is not normal vendor drama. This isn't normal at all. This is precedent-setting AI power politics.

0:23

Here's the sequence of events; then we'll talk about what the heck they're even arguing about. Apparently, the Department of War and Anthropic, as well as other major AI vendors, have been in discussions about using their LLMs for national defense. It goes south when they hit a stumbling block. So Dario Amodei posts a press release on Anthropic's site stating what they will and won't do, and this escalates quite fast. Trump, through official White House comms as well as Truth Social, says, "Effective immediately, federal agencies stop use. We will be observing a six-month ramp-down of use of Anthropic." Then the Pentagon moves to declare Anthropic a supply chain risk to national security. They also make some bold claims about who can and can't use Anthropic even outside of government-specific business. And then Anthropic issues a statement saying, actually, your claim is toothless and legally you can't do it. This is the inflection point, because it's going to determine who the keys of AI power are handed to for the future.

1:24

And don't get me wrong, I am in Anthropic's corner on this, and I'll explain why, but ultimately I believe AI should be in the hands of the people. I'm a big proponent of running local models, and I think there needs to be more focus on that. That's the reason OpenAI was founded in the first place. But let's park that, because right now it's Anthropic versus the Pentagon.

1:45

This all kicked off when the Pentagon asked for models for, quote, "all legal purposes" in the Department of Defense. As of this point, I haven't heard any dispute from OpenAI, from xAI, or from any of the other major model providers. We don't know if they're complicit with this, if they're in agreement, or if they will be contesting it as well; we just haven't heard anything from them. Now would be the time to speak up. Still haven't heard anything, though. It's 8:00 p.m. on a Friday night in Phoenix, and still nothing.

2:17

Anthropic, in their press release, says: all legal, all lawful purposes, sure, fine, all good with that. But hold up, two exceptions: we don't want you using it for mass domestic surveillance of US citizens, and we also don't want you to fully automate kill-chain decisions.

2:35

And again, I'm a proponent of local models, of roll-your-own; I'm very libertarian. But at least in this circumstance, Anthropic's response seems non-emotional and very well-reasoned, and I think those are two noble things to abstain from. I would have great moral qualms about participating in the mass domestic surveillance of US citizens. That seems like something we should do everything in our power to prevent, something the government itself should prevent, though they haven't done a very good job of preventing it. So it's actually good, for once, to see a corporation saying: we're not going to surveil US citizens. We're not going to do that. That's not within our moral framework.

3:16

Fully autonomous weaponry is a bit of a thornier one, and it could be a whole YouTube video on its own. I won't get into it here, but I see Anthropic's reasoning: they don't want to be involved in these mission-critical kill-chain decisions. I can see a very solid argument for it, because essentially what you're doing is trusting an algorithm to make a decision about a human life. Ethically, I don't know where we as a society should stand on that. I know where I stand, but I don't know where we'll end up. So I am very much in Anthropic's corner on this debate. If you fervently disagree with me, I want to hear why in the comments. Let's have a rational discussion about it. This is something we should all be talking about, to figure out where the line is for our society.

3:59

This degrades pretty quickly into a legal argument. Usually I wouldn't bore you with too much of it, and I'm not going to bore you with the details, but it's a very interesting legal case shaping up. On one hand you have the Pentagon, which has an army of lawyers. At least a fifth of the Pentagon is probably lawyers; that's like one whole side of the building, probably the bottom-right side, where all the lawyers sit in a big long row with each other. They have a lot of lawyers, and those lawyers have worked in Department of Defense-type law for a very long time. They have a lot of experience there. On the other hand, you have probably some of the most cracked lawyers in the world at Anthropic, because Anthropic has an insane amount of money to throw at people. An insane amount of money. They're attracting better lawyers than the Department of Defense can with its salaries. And they also have AI, probably better AI than what's available to peons like you and me, behind the scenes, that they can just set loose on the law books to find technicalities and loopholes and all sorts of craziness like that. The Department of War does not have that.

5:08

I'm not a lawyer, and I really cannot tell who's in the right or wrong here, but I can tell you what they're specifically debating, because that's the interesting point. Pete Hegseth and the Department of War contend that if software is labeled a national security supply chain risk, as they are moving to declare Anthropic, then any company working with the Department of War on anything, even business totally unrelated to the US government, is no longer allowed to use Anthropic. So it's not just a ban while you're working on stuff you make for the Department of War; it's a total ban for your entire company.

5:49

That would be like if you built refrigerators. You have a big refrigerator business; you sell them through Home Depot and Lowe's; you're well known in the space. But the Department of War calls up and says, "We need this particular heat sink that you manufacture for your refrigerators. We need it." You say, "Okay, fine. Great. I want to support the American war fighter. I'm going to quintuple the price because I know you can pay for it with that defense budget, but yeah, I'll sell you as many as you need." The catch is that you use Anthropic everywhere in your business, like most businesses using it or OpenAI now. You're using it all over the place: to aggregate customer feedback, as a support chatbot, and to design these heat sinks. Anthropic's position is that you can no longer use it to design the heat sinks for the US government; that's clear, it's out of the question. But Anthropic's position goes on to say you are still well within your rights to use it to design your refrigerators and for any other business uses outside of what you're doing for the Department of War. Hegseth and the Department of War contend that no, you cannot use it for your entire business, refrigerators, whatever; you can no longer use Anthropic at all. It'll be interesting to see how that one shapes up, but both sides are already assuming full authority on this question. The latest press release from Anthropic says, hey, if you're doing some business with the Department of War, you can still use it outside of that, and if you're concerned, you can call our sales or legal team; they're here to help you. So both of them are doubling down on this, and we'll see who blinks first.

7:15

The capitalist craziness on top of all of this, if this weren't already a nutty enough situation, is maybe the weirdest part of it. Anthropic is a company; it's got to make money. OpenAI is a company; it's got to make money. You've got to wonder if Dario and Sam are on the phone making a blood pact, like, we're not going to allow our LLMs to be used for this thing and that thing. Or if they're just like, hey, business comes first, baby. Maybe Sam Altman's saying, I'm going to totally compromise the morals of my LLM, we'll let them use it for whatever, we do not care as long as they have the money to pay for it. Not saying that Sam Altman said that; I'm saying it as an example. And you know how big the defense budget is. It's massive. If you don't know, go look it up; it dwarfs any other spending in the US. It's preposterous. So you hope that people will take a stand about what they're designing and building and say, you know what, this is my boundary and you have to respect my boundary. And that's exactly what Anthropic is doing. I don't know if OpenAI is doing it, but if you're Dario, if you're the board of directors, if you're an influencer in that space, you've got to be thinking: if OpenAI keeps doing business with them, that could ruin us in terms of profit. We're going to bring in a lot less money if the other AI companies cave and start working with the government. It's a gross angle to think about, but undoubtedly this is going on when there's that much money at stake.

8:41

The administration is largely framing this argument on the basis of patriotism, morality, ideology, and military readiness, all of which are things I agree with. Anthropic is framing it, quite intelligently I might add, around model reliability and constitutional rights. The model reliability point deserves a little asterisk and a clarification: by their own admission, they're saying we do not trust our own models enough to say, yeah, sure, you can use them to determine whether to blow that guy up or not. We don't think our models are good enough, which I think is probably a pretty responsible take if you've used the models.

9:23

It's concerning to me that the Department of War wants full access to these. This is not a tech company. Pete Hegseth, I don't think, has ever been a programmer. I haven't really looked into his history, but he doesn't strike me as the type who spends a lot of time with computers, or books for that matter. And Anthropic has caught a stray bullet on this one; they're going to be made an example of whichever way this goes down. But this is going to set the template for all AI defense projects moving forward, so this is far, far bigger than Anthropic. The precedent that's going to be set is: can private model providers like Anthropic and OpenAI define what you can do with the model, what the government can do with the model? You and I don't get models without safeguards because we're too dumb, you know; they've got to protect us from ourselves.

10:12

This is absolutely insane timing for this to come to a head right before the weekend, because we're going to have to wait until Monday to see whether anyone files anything in court on Monday or later in the week. That's what's next: some court filings, some legal proceedings kicking off. But what I'd be really interested to know is what a company like that refrigerator maker, working a little bit with the Department of War, does. Do they say, "Okay, we're terrified of legal proceedings, so we've got to drop Anthropic; we're not using it at the company anymore come Monday"? Or do they say, "F it, we ball. This model is tops. Opus 4.6 is amazing. We are not going to stop using it for a refrigerator business"? I don't know what's going to happen there. My hunch is the former, because there's a lot of money coming out of the Department of War, and people will tend to go with that, unfortunately. If you've got a source there, let me know; I would love to know how this is actually hitting the market and hitting corporations on Monday. But I know I'm glad I'm not a lawyer at one of these companies, because I am not going to work this weekend, and they are going to work a lot this weekend.

11:19

I'll be tracking all of that. Rest assured, nobody is going to change their terms of service without me catching it and reporting it on the channel. We're going to have all of the coverage of this moving forward. If you haven't already, subscribe to the channel, click the bell to be notified when new videos drop and when new news drops on this, and also sign up for the newsletter. If you want the facts and figures behind this, sign up for the newsletter and get that in your inbox. Thank you for watching.

Summary

The video discusses an unprecedented situation where the Trump administration ordered federal agencies to cease using Anthropic's AI models, declaring the company a supply chain risk. This decision followed Anthropic's refusal to allow its LLMs to be used for mass domestic surveillance of US citizens or for fully automating kill-chain decisions in national defense. While other major AI vendors have remained silent, Anthropic argues its stance is based on moral principles and the inherent unreliability of current AI models for such critical tasks. The Department of War asserts that any company working with them, even on unrelated business, must completely stop using Anthropic. Anthropic, however, counters that the ban should only apply to government-related work. This conflict is rapidly escalating into a major legal battle that will set a critical precedent for whether private AI model providers can dictate how the government uses their technology.
