Doomer video funded by AI Investor lying to you again. (Also: The Sun is hot)
The AI Doomers over at the "AI in Context" YouTube channel just released yet another video to try to scare you about SuperAI. Most of that video is reading a fictional story out of a book that I've talked about several times - I'll link those videos - or repeating the opinions of people in the industry who have a financial interest in AI being considered world-changing. But there are a handful of verifiable claims scattered here and there, and even with only a tiny, tiny part of the video involving the real world at all, they still couldn't help lying to you to try to scare you. And there's very little I hate more than rich and powerful people lying to the public - or paying someone to lie to the public on their behalf - in order to get richer and more powerful. What a bunch of As-----ls.

This is the Internet of Bugs. My name is Carl. I've been a software professional since the late 1980s. I'm not being paid to make this video, and I have no financial incentive to tell you what I'm about to say. I'm just trying to do my part to make the Internet a safer and less buggy place, because I think those of us who are able should feel responsible for trying to make the world better for everyone else.

So here's the claim from the video: "In February 2026, Anthropic asked their most advanced AI model to autonomously find zero-days. These are previously unknown software vulnerabilities. Four zero-days were enough to take down Iran's nuclear program. Their AI, Claude Opus 4.6, found 500 of them. In a post about this, Anthropic wrote that, quote, 'the same capabilities that help defenders find and fix vulnerabilities could help attackers exploit them.' Unquote. Less than a week after that post, a single hacker used that same AI model to hack the Mexican government and steal 195 million taxpayer and voter records."

So the model the video is talking about is Claude version 4.6, which was released in February of 2026. That's the version that Anthropic claims found 500 zero-days, although we can't verify that claim, and AIs are notorious for finding security bugs that don't actually exist. So here's the paper on Claude helping that attacker attack the Mexican government. This paper was published on February 26, 2026, not long after Claude 4.6's release.
But here you can see that the hack actually took place between December 2025 and early January of 2026, months before the "AI in Context" video claimed it had. And the hack didn't involve any zero-days. The bugs exploited in the attack were "weak or default authentication configuration," "exposed admin channels," and "web applications that still had unpatched vulnerabilities from 2023," which means that the people responsible for securing those servers never bothered to install fixes that had been available for more than two years at that point.

What happened here was that Anthropic trained their model Claude on exploitable bugs that were found in 2023, and that model used that training to write scripts for a bad guy so that he could hack neglected and out-of-date servers. And now this YouTube video is trying to convince you that this means a future super AI will be able to use super zero-day hacking skills to pursue goals of its own that are harmful to humanity. This is typical Doomer propaganda.
The reason they want to scare you about AI - and are willing to lie to do it - is that as long as the public can be convinced that super AI will kill everyone, and as long as the public can be convinced that the Chinese can't be stopped from making super AI, then Silicon Valley can do whatever it wants in the service of getting to super killer AI before the Chinese do. I've got a more detailed write-up on this on my Substack over here.

Now, why would 80,000 Hours - the group that produces the videos for the "AI in Context" channel - want Silicon Valley to be able to do whatever it wants? Well, I don't know for sure, but I have a guess. 80,000 Hours' biggest funding source is a group called "Coefficient Giving," which was started by Dustin Moskovitz, who was one of the lead investors in Anthropic and an early backer of OpenAI. So the fewer liabilities that Anthropic and OpenAI have, the more they will be worth, and the more they are worth, the more the investments of the biggest funder of 80,000 Hours are worth.
This is a scam to try to keep the public from worrying about AIs that encourage troubled teens to commit permanent self-harm, send the wrong people to jail because of bad facial recognition, modify pictures of underage girls to make them appear to be nude, and were potentially involved in the selection of an Iranian elementary school to be targeted by two US Tomahawk missiles. Don't let them distract you. And if you're going to be scared of AI, be scared for the people it's harming right now, not for some hypothetical super AI that will probably never happen.

Thanks for watching, and let's be careful up there.