$380 Billion Gone? Anthropic Refuses to Bend the Knee
Dude, we are living in the idiocracy version of the AI timeline. Trump ordered agencies to stop using Anthropic with a six-month rampdown period, and Hegseth has moved to label it a supply chain risk. This is not normal vendor drama. This isn't normal at all. This is precedent-setting AI power politics.
Here's the sequence of events; then we'll talk about what they're even arguing about. Apparently, the Department of War and Anthropic, as well as other major AI vendors, have been in discussions about using their LLMs for national defense. It goes south when they hit a stumbling block. So Dario Amodei posts a press release through Anthropic's site stating what they will and won't do, and this escalates quite fast. Trump, through official White House comms as well as Truth Social, says, "Effective immediately, federal agencies stop use. We will be observing a six-month rampdown of use of Anthropic."
Then the Pentagon moves to declare Anthropic a supply chain risk to national security. They also make some bold claims about who can and can't use Anthropic, even outside of government-specific business. And then Anthropic issues a statement in response saying, actually, your claim is toothless and legally you can't do it. This is the inflection point, because this is going to determine who the keys of AI power are handed to for the future. And don't get me wrong, I am in Anthropic's corner on this, and I'll explain why, but ultimately I believe AI should be in the hands of the people. I'm a big proponent of running local models, and I think there needs to be more focus on that. That's the reason OpenAI was founded in the first place. But let's park that, because right now it's Anthropic versus the Pentagon. This all kicked off when
the Pentagon asked for models for, quote, "all legal purposes" in the Department of Defense. As of this point, I haven't heard any dispute coming from OpenAI, from xAI, or from any of the other major model providers. We don't know if they're going along with this, if they're in agreement, or if they will be contesting it as well, but we haven't heard anything from them. So now would be the time to speak up. Still haven't heard anything, though. It's 8:00 p.m. on a Friday night in Phoenix, and still nothing. Anthropic, in their press release, says all legal, all lawful purposes. Sure, fine, all good with that. Hold up, though: two exceptions. We don't want you using it for mass domestic surveillance of US citizens, and we also don't want you to fully automate kill-chain decisions. And
again, I'm a proponent of local models, of roll-your-own. I'm very libertarian. But at least in this circumstance, Anthropic's response seems non-emotional and very well-reasoned, and I think those are two noble things to abstain from. I would have great moral qualms about participating in the mass surveillance of US citizens domestically. That seems like something we should do everything in our power to prevent; it actually seems like something the government should prevent, and they haven't done a very good job of it. So it's good, for once, to see a corporation saying: we're not going to surveil US citizens. We're not going to do that. That's not within our moral framework. Fully autonomous weaponry is a thornier one, and it could be a whole YouTube video on its own. I won't get into it, but I see Anthropic's reasoning here: they don't want to be involved in these mission-critical kill-chain decisions. And I can see a very solid argument for it, because essentially what you're doing is trusting an algorithm to make a decision about a human life. Ethically, I don't know where we as a society should stand on that. I know where I stand, but I don't know where we'll end up. So I am very much in Anthropic's corner on this debate. If you fervently disagree with me, I want to hear why in the comments. Let's have a rational discussion about it. This is something we should all be talking about, to figure out where the line is for our society. This devolves
pretty quickly into a legal argument. Usually I wouldn't bore you with too much of it, and I'm not going to bore you with the details, but it's a very interesting legal case shaping up. On one hand, you have the Pentagon, which has an army of lawyers. At least one-fifth of the Pentagon is probably lawyers. That's like one whole side of the Pentagon, probably the bottom-right side, where all the lawyers sit in a big long row with each other. They have a lot of lawyers, and those lawyers have worked in Department of Defense-type law for a very long time. They have a lot of experience there, right? On the other hand, you have probably some of the most cracked lawyers in the world at Anthropic, because they have an insane amount of money to throw at people. An insane amount of money. They're attracting better lawyers than the Department of Defense with their salaries. And they also have AI, probably better AI than what's available to peons like you and me, behind the scenes, that they can just set loose on the law books to find technicalities and loopholes and all sorts of craziness like that. The Department of War does not have that. I'm not a lawyer, and I really cannot tell who's in the right or wrong here, but I can tell you what they're specifically debating, because that's the interesting point here. Pete Hegseth
and the Department of War contend that if software is labeled a national security supply chain risk, as they are moving to declare Anthropic, then any company working with the Department of War on anything, even business totally unrelated to the US government, is no longer allowed to use Anthropic at all. So it's not just a ban while you're working on stuff you make for the Department of War; it's a total ban for your entire company. That would be like if you built refrigerators. You have a big refrigerator business; you sell them through Home Depot and Lowe's, and you're well known in the space. But the Department of War calls up and says, "We need this particular heat sink that you manufacture for your refrigerators. We need it." You say, "Okay, fine. Great. I want to support the American war fighter. I'm going to quintuple the price, because I know you can pay for it with that defense budget, but yeah, I'll sell you as many as you need." The catch is that you use Anthropic everywhere in your business, like most businesses now use Anthropic or OpenAI. You're using it all over the place: to aggregate customer feedback, as a support chatbot, and to design these heat sinks. So Anthropic's position is that you can no longer use it to design the heat sinks for the US government. That's clear; it's out of the question. Anthropic's argument goes on to say you are still well within your rights to use it to design your refrigerators and for any other business uses outside of what you're doing for the Department of War. Hegseth and the Department of War contend that no, you cannot use it for your entire business, refrigerators, whatever. You can no longer use Anthropic at all. It'll be interesting to see how that one shapes up, but both of them are already assuming full authority on this question.
The latest press release from Anthropic says, hey, if you're doing some business with the Department of War, you can still use it outside of that, and if you're concerned, you can call our sales or legal team; they're here to help you. So both of them are doubling down on this, and we'll see who blinks first.
The capitalist craziness on top of all of this, if it wasn't already a nutty enough situation, is maybe the weirdest part. Anthropic is a company; it's got to make money, right? OpenAI is a company; it's got to make money. You've got to wonder if Dario and Sam are on the phone making a blood pact, like, we're not going to allow our LLMs to be used for this thing and that thing. Or if they're just like, hey, business comes first, baby. Maybe Sam Altman is saying, I'm going to totally compromise the morals of my LLM; we'll let them use it for whatever; we do not care as long as they have the money to pay for it. Not saying that Sam Altman said that, just an example. And you know how big the defense budget is. It's massive. If you don't know, go look it up. It dwarfs any other spending in the US. It's preposterous. So you hope that people will take a stand about what they're designing and building and say, you know what, this is my boundary and you have to respect it. That's exactly what Anthropic is doing. I don't know if OpenAI is doing it, but if you're Dario, if you're the board of directors, if you're an influencer in that space, you've got to be thinking: if OpenAI keeps doing business with them, that could ruin us in terms of profit. We're going to bring in a lot less money if the other AI companies cave and start working with the government. It's a gross angle to think about, but undoubtedly this is going on when there's that much money at stake. The
administration is largely framing this argument on the basis of patriotism, morality, ideology, and military readiness, all of which are things I agree with. Anthropic is framing it, quite intelligently, I might add, on model reliability and constitutional rights. The model reliability point deserves a little asterisk and clarification. By their own admission, they're saying: we do not trust our own models enough to say, yeah, sure, you can use them to determine whether to blow that guy up or not. We don't think our models are good enough. Which, if you've used the models, is probably a pretty responsible take. It's concerning to me that the Department of War wants full access to these. This is not a tech company. Pete Hegseth, I don't think, has ever been a programmer. I haven't really looked into his history, but he doesn't strike me as the type that spends a lot of time with computers, or books for that matter.
And Anthropic has caught a stray on this one. They're going to be made an example of whichever way this goes down, but it's going to set the template for all AI defense projects moving forward. So this is far, far bigger than Anthropic. The precedent that's going to be set is: can private model providers like Anthropic and OpenAI define what you can do with the model, what the government can do with the model? You and I don't get models without safeguards, because we're too dumb; you know, they've got to protect us from ourselves. This is
absolutely insane timing for this to come to a head right before the weekend, because we're going to have to wait until Monday to see whether anyone files anything in court, on Monday or later in the week. That's what's next: some court filings, some legal proceedings kicking off. What I'd be really interested to know is about a company like that refrigerator company, one working a little bit with the Department of War. Do they say, "Okay, we're terrified of legal proceedings, so we've got to drop Anthropic on Monday; we're not using it at the company anymore"? Or do they say, "F it, we ball. This model is tops. Opus 4.6 is amazing. We are not going to stop using it for our refrigerator business"? I don't know what's going to happen there. My hunch is the former, because there's a lot of money coming out of the Department of War, and people will tend to go with that, unfortunately. If you've got a source there, let me know; I would love to know how this actually hits the market and corporations on Monday. But I know I'm glad I'm not a lawyer at one of these companies, because I am not going to work this weekend and they are going to work a lot this weekend. I'll be tracking all of it. Rest assured, nobody is going to change their terms of service without me catching it and reporting it on the channel. We're going to have all of the coverage of this moving forward. If you haven't already, subscribe to the channel, click the bell to be notified when new videos drop and when new news on this drops, and sign up for the newsletter. If you want the facts and figures behind this, sign up for the newsletter and get that in your inbox. Thank you for watching.