
Doomer video funded by AI Investor lying to you again. (Also: The Sun is hot)


Transcript


[0:00] The AI Doomers over at the "AI in Context" YouTube channel just released yet another video to try to scare you about SuperAI. Most of that video is reading a fictional story out of a book that I've talked about several times - I'll link those videos - or repeating the opinions of people in the industry who have a financial interest in AI being considered world-changing. But there are a handful of verifiable claims scattered here and there, and even with only a tiny, tiny part of the video involving the real world at all, they still couldn't help lying to you to try to scare you. And there's very little I hate more than rich and powerful people lying to the public - or paying someone to lie to the public on their behalf - in order to get richer and more powerful. What a bunch of As-----ls.

[0:44] This is the Internet of Bugs. My name is Carl. I've been a software professional since the late 1980s. I'm not being paid to make this video, and I have no financial incentive to tell you what I'm about to say. I'm just trying to do my part to make the Internet a safer and less buggy place, because I think those of us who are able should take responsibility for trying to make the world better for everyone else.

[1:07] So here's the claim from the video: "In February 2026, Anthropic asked their most advanced AI model to autonomously find zero-days. These are previously unknown software vulnerabilities. Four zero-days were enough to take down Iran's nuclear program. Their AI, Claude Opus 4.6, found 500 of them. In a post about this, Anthropic wrote that, quote, 'the same capabilities that help defenders find and fix vulnerabilities could help attackers exploit them.' Unquote. Less than a week after that post, a single hacker used that same AI model to hack the Mexican government and steal 195 million taxpayer and voter records."

[1:43] So the model the video is talking about is Claude Version 4.6, which was released in February of 2026. That's the version that Anthropic claims found 500 zero-days, although we can't verify that claim. And AIs are notorious for finding security bugs that don't actually exist.

[1:58] So here's the paper on Claude helping that attacker attack the Mexican government. This paper was published on February 26, 2026, not long after Claude 4.6's release. But here you can see that the hack actually took place between December 2025 and early January of 2026, happening months before the "AI in Context" video claimed it had. And the hack didn't involve any zero-days. The bugs exploited in the attack were "weak or default authentication configuration," "exposed admin channels," and "web applications that still had unpatched vulnerabilities from 2023," which means that the people responsible for securing those servers never bothered to install fixes that had been available for more than a year at that point.

[2:40] What happened here was that Anthropic trained their model Claude to know about exploitable bugs that were found in 2023. And that model used the training it got on those bugs to write scripts for a bad guy so that he could hack neglected and out-of-date servers. And now this YouTube video is trying to convince you that that means a future super AI will be able to use super zero-day hacking skills to pursue goals of its own that are harmful to humanity.

[3:07] This is typical Doomer propaganda. The reason they want to scare you about AI, and are willing to lie to do it, is that as long as the public can be convinced that super AI will kill everyone, and as long as the public can be convinced that the Chinese can't be stopped from making super AI, then Silicon Valley can do whatever it wants in the service of getting to super killer AI before the Chinese do. I've got a more detailed write-up on this on my Substack over here.

[3:28] Now, why would 80,000 Hours - the group that produces the videos for the "AI in Context" channel - want Silicon Valley to be able to do whatever it wants? Well, I don't know for sure, but I have a guess. 80,000 Hours' biggest funding source is a group called "Coefficient Giving," which was started by Dustin Moskovitz, who was one of the lead investors in Anthropic and an early backer of OpenAI. So the fewer liabilities that Anthropic and OpenAI have, the more they will be worth; and the more they are worth, the more the investments of the biggest funder of 80,000 Hours are worth.

[4:00] This is a scam to try to keep the public from worrying about AIs that "encourage troubled teens to commit permanent self harm," "send the wrong people to jail because of bad facial recognition," "modify pictures of underage girls to make them appear to be nude," and were potentially involved in the selection of an Iranian elementary school to be targeted by two US Tomahawk missiles. Don't let them distract you. And if you're going to be scared of AI, be scared for the people it's harming right now, and not for some hypothetical super AI that will probably never happen.

[4:26] Thanks for watching, and let's be careful up there.

Summary

Carl, a veteran software professional, debunks claims made by the 'AI in Context' YouTube channel regarding a supposedly dangerous SuperAI. He demonstrates that the channel's narrative—specifically about a 'Claude' model enabling a catastrophic hack on the Mexican government—is based on distortions of reality and fabricated timelines. Carl argues that this fear-mongering serves the financial interests of tech investors and distracts the public from real-world harms currently caused by existing AI technologies.
