Anthropic BANNED, explained
One of the biggest questions is whether
companies should be allowed to create
and enforce their own rules around AI
independently of government regulation.
We have seen a similar event in Iron Man
2 where Senator Stern asks Tony Stark to
turn over his Iron Man suit to the US
government. And he responds with, "I've
successfully privatized world peace.
What more do you want?" Similarly,
Anthropic has its own powerful
technology that the US government wants
for military purposes, with unfettered
access to Claude, which would mean
Anthropic walking back the very thing it
built its business around: AI safety.
Now, Anthropic has their own flaws in
their business model, but they are the
closest thing to AI safety that we have
compared to other AI frontier labs. And
the recent power struggle between
Secretary of Defense Pete Hegseth and
Anthropic CEO Dario Amodei sets a huge
precedent for what the relationship
between private companies and the US
government should look like.
Welcome to Caleb Bright's Code where
every second counts. Quick shout out to
Zo. More on them later. Now, out of the
15 executive departments, the Department
of Defense has the most human capital
and is the biggest employer in the US.
And it's this department that granted
$200 million each to four frontier labs
in the US back in July 2025, totaling
$800 million
combined. But Anthropic has been working
with the US government outside of the
Department of Defense since 2024, when,
after the release of the Claude 3
family, they provisioned Claude 3 Haiku
and Sonnet for the government through
AWS's dedicated government cloud,
GovCloud. Anthropic also partnered with
Palantir in November that year to
provide US intelligence and defense
agencies with a system to deploy Claude
and process information faster. And in
August 2025, they also
gave one-year access to all branches of
the US government to help AI adoption
across the government. As you can see,
Anthropic had already rooted itself deep
into the government system by providing
Claude through Palantir, GovCloud on
AWS, and more. Now, fast
forward to today. Even though the other
frontier labs got the same $200 million
deal from the Department of Defense back
in July 2025, Anthropic specifically was
given an ultimatum by Pete Hegseth on
February 24th to essentially drop the
safety guardrails on its model. And the
consequences for
not following this would result in three
potential outcomes: first, being labeled
a supply chain risk, which would cut all
military contracts with Anthropic;
second, being compelled under the
Defense Production Act, a draconian
measure; or third, termination of the
$200 million contract that was granted
in the first place. Now, you might
have noticed by now that the size of the
contract is only 1% of Anthropic's
entire revenue for the year. So, this is
already more about the principle than
about the money per se. While in
principle Anthropic and the Department
of Defense would both agree that there
are lines we shouldn't cross in how AI
is used, such as mass surveillance and
fully autonomous weapons, this is more
about who holds the final authority to
draw those lines. And depending on the
polling, the American people also have
differing views on this very question:
whether private companies should have
the ability to set their own ethical
rules around their products. And what
finally happened between Anthropic and
the Department of Defense is quite crazy.
And here's a quick word from Zo.
OpenClaw showed the world what a
personal AI agent could look like, but
if you actually try to set one up, it's
not that easy to do properly. Zo takes a
different approach: rather than turning
your laptop into a server, Zo gives you
your own computer in the cloud with AI
built in. It's fully isolated, so
you're not one bad configuration away
from exposing your whole machine. Out of
the box, you can integrate with Gmail,
Google Drive, Stripe, X, Spotify, and
more. You can also build websites, set
up automations that run while you sleep,
and even code full projects right from
chat. My favorite feature is texting Zo
on iMessage or Telegram and talking to
my agent. Zo is always on, always yours
without configs. Link in the description
below. After a two-day standoff between
Anthropic and the Department of Defense,
Pete Hegseth made the official statement
designating Anthropic a supply chain
risk to national security. This came, of
course, after Dario Amodei made a
statement standing his ground on the
matter. Now, one quick
clarification to be made here is that
the ban on Anthropic only applies to
contracts associated with the Department
of War, not the entire US government.
Anthropic was given six months to
transition its deeply embedded Claude
models out, with what now looks like
OpenAI's models coming in. And
Anthropic's response to being labeled a
supply chain risk was to stand its
ground. Dario made a statement noting
that the US has never before publicly
labeled an American company a supply
chain risk. He also commented that
frontier AI models simply aren't there
yet for more sophisticated applications
like fully autonomous weapons. And the
fact that Anthropic treated AI safety as
a core part of its business likely
contributed to Claude being adopted
early by the US government. It's this
very thing that seems to have helped
them win some government contracts but
lose others. But in all fairness, from
the
Department of Defense's perspective,
they claim that Anthropic is essentially
trying to strong-arm the military, using
what they call effective altruism to
control how the military should be using
its product. And while a reasonable
response could have been to simply
terminate the contract rather than come
off weak in negotiation, what really won
public opinion was Anthropic, not the
Department of Defense. For now, Sam
Altman announced
that OpenAI will be providing their
models to the Department of Defense, but
under basically the same conditions
Anthropic had set. And you might be
wondering why OpenAI got a different
outcome than Anthropic despite taking
the same stance. OpenAI's agreement with
the military rests on the principle
that, yes, AI shouldn't be used for such
cases. Anthropic's stance, on the other
hand, is more concrete than pure
principle, since its safeguards are
baked into the model's weights rather
than stated as policy.
At the end of the day, this entire
ordeal between Anthropic and the US
government sets a precedent for how the
government and frontier labs should work
together around powerful technology
that's being built largely outside of
government regulation. And given that
we're still really early in finding out
just how powerful AI really could
become, the tension between the military
use cases and private frontier labs will
be something that we'll need to continue
monitoring going forward. What do you
think? Was the Department of Defense
well within its rights to ban Anthropic,
even if only to the extent of the
military contracts? And what do you
think the relationship between the
government and private AI companies
should look like in the future?