Anthropic BANNED explained..

Transcript

0:00
One of the biggest questions is whether companies should be allowed to create and enforce their own rules around AI independently of government regulation. We saw a similar moment in Iron Man 2, where Senator Stern asks Tony Stark to turn over his Iron Man suit to the US government, and he responds, "I've successfully privatized world peace. What more do you want?" Similarly, Anthropic has its own powerful technology that the US government is asking for, for military purposes, and they want unfettered access to Claude, which would mean Anthropic walking back the very thing it built its business around: AI safety. Now, Anthropic has its own flaws in its business model, but it is the closest thing to AI safety we have compared to the other frontier AI labs. And the recent power struggle between Secretary of Defense Pete Hegseth and Anthropic CEO Dario Amodei sets a huge precedent for what the relationship between private companies and the US government should look like.

1:00
Welcome to Caleb Bright's Code, where every second counts. Quick shout-out to Zo; more on them later.

1:03
Now, of the 15 executive departments, the Department of Defense is the biggest in human capital, and it is the biggest employer in the US. It's this department that granted $200 million each to four frontier labs in the US back in July 2025, totaling $800 million combined. But Anthropic has been working with the US government outside the Department of Defense since 2024, when, after releasing Claude 3 Sonnet for the first time, it provisioned Claude 3 Haiku and Sonnet for the government through AWS's dedicated cloud, GovCloud. Anthropic also partnered with Palantir in November of that year to provide US intelligence and defense agencies with a system that helps deploy Claude to process information faster. And in August 2025, it also gave all branches of the US government one-year access to help AI adoption across the government. As you can see, Anthropic had already rooted itself deep in the government system by providing Claude through Palantir, through GovCloud with AWS, and more.

2:10
Now, fast forward to today. Even though the other frontier labs got the same $200 million deal from the Department of Defense back in July 2025, Anthropic specifically was given an ultimatum by Pete Hegseth on February 24th to essentially drop the safety guardrails on its model. Not complying would lead to one of three potential outcomes: first, being labeled a supply chain risk, which would cut all military contracts with Anthropic; second, being forced to comply through the Defense Production Act, a draconian measure; or third, termination of the $200 million contract that was granted in the first place. Now, you might have noticed that the size of the contract is only about 1% of Anthropic's entire revenue for the year, so this is already more about principle than about the money per se. While in principle Anthropic and the Department of Defense would both agree that there are lines we shouldn't cross in how AI is used, in cases like mass surveillance and fully autonomous weapons, this is really about who gets the final authority to draw those lines. And depending on the polling, the American people also hold differing views on this very question: whether private companies should have the ability to set their own ethical rules around their products. What finally happened between Anthropic and the Department of Defense is quite crazy.

3:39
And here's a quick word from Zo. OpenClaw showed the world what a personal AI agent could look like, but if you actually try to set one up, it's not that easy to set up properly. Zo takes a different approach: instead of turning your laptop into a server, Zo gives you your own computer in the cloud with AI built in. It's fully isolated, so you're not one bad configuration away from exposing your whole machine. Out of the box, you can integrate with Gmail, Google Drive, Stripe, X, Spotify, and more. You can also build websites, set up automations that run while you sleep, and even code full projects right from chat. My favorite feature is texting Zo on iMessage or Telegram and talking to my agent. Zo is always on and always yours, with no configs. Link in the description below.

4:22
After a two-day standoff between Anthropic and the Department of Defense, the official statement came from Pete Hegseth, designating Anthropic a supply chain risk to national security. This was, of course, after Dario made a statement standing his ground on the matter. Now, one quick clarification: the ban applies only to contracts associated with the Department of War, not to the entire US government. Anthropic was given six months to transition its already deeply embedded Claude models out, with what now looks like OpenAI's models coming in. Anthropic's response to being labeled a supply chain risk was to stand by its decision. Dario noted that the US has never publicly labeled an American company a supply chain risk. He also commented that frontier AI models simply aren't there yet to be used in more sophisticated applications like fully autonomous weapons. And the fact that Anthropic treated AI safety as a core part of its business likely contributed to its models being adopted early by the US government; it's this very thing that seems to have helped it win government contracts in some cases and lose them in others.

5:40
But in all fairness, from the Department of Defense's perspective, the claim is that Anthropic is essentially trying to strong-arm the military, using what they call effective altruism to control how the military should be using its product. And while a reasonable response could have been to simply terminate the contract, short of coming off weak in negotiation, what really won public opinion was Anthropic, not the Department of Defense.

6:07
For now, Sam Altman has announced that OpenAI will provide its models to the Department of Defense, but basically under the same conditions Anthropic set. And you might be wondering why OpenAI got a different outcome than Anthropic despite taking the same stance. OpenAI's agreement with the military is a matter of principle: yes, AI shouldn't be used for such cases. Anthropic's stance, on the other hand, is much more concrete than pure principle, since its safeguards are baked into the model's weights rather than existing only as policy.

6:42
At the end of the day, this entire ordeal between Anthropic and the US government sets a precedent for how government and frontier labs should work together around powerful technology being built outside of much government regulation. And given that we're still very early in finding out just how powerful AI could become, the tension between military use cases and private frontier labs will be something we'll need to keep monitoring going forward.

7:10
What do you think? Was the Department of Defense well within its rights to ban Anthropic, even if only to the extent of the military contracts? And what should the relationship between the government and private AI companies look like in the future?

Summary

The video details a significant conflict between Anthropic, an AI frontier lab prioritizing AI safety, and the US Department of Defense (DoD) regarding the military use of Anthropic's Claude AI model. Despite prior collaborations, the DoD issued an ultimatum to Anthropic to remove its safety guardrails or face consequences, including being designated a supply chain risk. Anthropic refused, emphasizing AI safety as a core principle and questioning the suitability of current AI for advanced military applications. Consequently, the DoD labeled Anthropic a supply chain risk for Department of War contracts, mandating a six-month transition away from Claude. Interestingly, OpenAI later offered its models to the DoD under similar conditions but achieved a different outcome, likely because Anthropic's safety measures are integrated into its models' architecture rather than merely being a policy. This event establishes a crucial precedent for the future relationship between private AI companies and government regulation.