
AI Mass Surveillance and Weaponry Situation

Transcript

0:00

Here's something I've been glued to that I think more people need to be made aware of, because it's very serious. Right now, there is a slapboxing match that's been going back and forth between Pete Hegseth and Anthropic. And really, it's the entire Pentagon versus Anthropic. They're behind Claude, which I'm sure many of you are familiar with. The military wants unrestricted access to Anthropic's AI, and they have a contract in place, and right now Anthropic is planting its feet, lat-spreading, refusing to be bullied into giving them unrestricted access.

0:38

>> The CEO of one of the biggest AI companies in the world is meeting with Defense Secretary Pete Hegseth today as the Pentagon threatens to essentially blacklist that company, Anthropic, from lucrative government contracts if the AI company doesn't lift its restrictions on how the military can use its technology. The Pentagon has a $200 million contract with Anthropic. And a source tells CNN that the company has concerns over two issues: AI-controlled weapons and mass domestic surveillance of American citizens.

1:10

>> Sounds pretty reasonable to me. Yeah, that one passes the smell test. Those are very legitimate concerns. And Anthropic wants these restrictions in place to ensure that its AI can't be used for mass surveillance of American citizens or autonomous AI weapons. And because of their reluctance to concede on that, Pete Hegseth has been stomping his feet, bench pressing 315, and has given them a deadline of Friday to play ball or risk being blacklisted. And the reality of the situation is, even if Anthropic doesn't budge and they wipe their ass with the contracts, numerous other AI companies will step up and just gladly take the bukkake from the Pentagon, with completely unlimited access, no restrictions, to their AI. But right now, Anthropic is the most powerful and the one that they really want, which is why they're trying so hard to get them to release these restrictions and let them use it for whatever they want. Which, again, the main things that Anthropic is very concerned about are the mass surveillance of American citizens and AI autonomous weaponry.

2:22

Again, I think those are very reasonable restrictions to keep in place to ensure that the AI can't be used for those two things, because it shouldn't be. If anyone here has ever watched Terminator or any sci-fi movie where AI assumes direct control of the military and everything goes tits up, you've seen this exact storyline play out in those montages that give you the lore breakdown for how we got there. Like, it's crazy to see it unfolding now in real time, in the real world. Obviously I'm exaggerating a bit, we're not quite at that level yet, but it is extremely serious, and I think any reasonable person would agree with these kinds of restrictions, because AI shouldn't be used for mass surveillance on American citizens. And under no circumstances should physical attacks be determined by AI with no human input whatsoever. It shouldn't be making targeting decisions without human input. Like, fully autonomous AI weaponry is a terrible idea. And actually, the CEO of Anthropic did an interview going over these two things, because he's not backing down on them.

3:23

>> That's one reason why I'm, you know, I'm worried about the, you know, the autonomous drone swarm, right? So, you know, the constitutional protections in our military structures depend on the idea that there are humans who would, we hope, disobey illegal orders. With fully autonomous weapons, we don't necessarily have those protections.

3:44

>> Now, for what it's worth, I'm no fan of Anthropic. There is no AI megacorporation you'll catch me waving the number-one foam finger around for, advocating for, glazing, or anything like that. I think all of these companies are greedy and controlled by shady vampire ghoul creatures. And I know Anthropic has built up somewhat of a reputation for being the good guys in AI, but I am not one of the people that actually believes that at all. But he's 100% right. Fully autonomous weaponry cannot disobey an order, even an illegal one. It will always follow through, no matter what, unquestioningly. And that's not a good thing. That's not a positive thing.

4:34

I'm sure a lot of you have probably seen a lot of those YouTube videos, or, a long time ago, the Reddit story that would circulate like once a year for the big updoots, about the man who potentially saved the world: Stanislav Petrov, who was an officer during the Cold War era. While on duty, he received a ping from the early warning system alerting him to incoming United States missiles. He believed it could have been a false alarm, so he decided to go against protocol, and instead of reporting the incoming missiles, he waited. And it turns out his hunch was correct: it was a malfunction in the early warning system. But standard protocol dictated that he report these incoming missiles, which could have led to a retaliation from the Soviets, which would have led to a retaliation from the US, and could have been a nuclear disaster. There's been a lot of debate about whether or not that would have even happened had he reported the missiles, because other checks would potentially have kicked in, but with such high tension and so little time with incoming missiles, there's a lot of speculation that, yes, had he reported it, they could have immediately decided on a retaliatory strike against the US.

5:46

Regardless, what is irrefutable is that he made the right decision by disobeying protocol, disobeying the order of operations here, by not reporting those missiles. He made sure that there wasn't a nuclear disaster. It was the right call. The point I'm making is that the ability to disobey an order, or not follow through on protocol, is important. And there are tons of examples of it; I just chose this one because I think it's the one some of you have probably heard of. So there's no benefit to having fully autonomous AI weaponry that can't disobey anything and has to follow through on everything without question.

6:22

>> ...update these protections appropriately. So, you know, think about the Fourth Amendment. It is not illegal to, you know, put cameras around everywhere in public space and, you know, record every conversation. It's a public space; you don't have a right to privacy in a public space. But today the government couldn't record all that and make sense of it. With AI, the ability to transcribe speech, to look through it, correlate it all, you could say, "Oh, there's this, you know, this person is a member of the opposition. This person is expressing this view," and make a map of all, you know, 100 million. And so, are you going to make a mockery of the Fourth Amendment by the technology finding kind of technical ways around it?

7:05

>> He then goes on to say that maybe we need to, like, update a lot of these protections to encompass things like AI finding workarounds for them. And the point he is making is that this kind of implementation of AI could very much make a mockery of the Fourth Amendment, flushing it down the toilet. His point is, even though it's not illegal to have cameras in public all over the place, out the wazoo, till the cows come home, without AI they can't, like, piece together everything, comb through everything, make a map of people that are, you know, opposition, stuff like that. But with AI, they can. Mass surveillance is made much more possible. It would circumvent the protections of things like the Fourth Amendment. He's right. So, he is not backing down on these things, which is what's causing so much friction with Pete Hegseth and the Pentagon, and which I really feel like any sensible person should see and be extremely concerned about, because I think these are reasonable restrictions.

8:01

Now, unfortunately, like I said, even if Anthropic does stick to their guns here and it does cost them this contract, it doesn't just die there. It doesn't fizzle out. OpenAI and xAI have made it pretty clear they're willing to just, with a wide open mouth, just take the golden shower. Let them use it for things like mass surveillance or autonomous AI weaponry. Like, they're totally fine with completely unrestricted access, which had my jaw on the floor, because xAI is Elon Musk's company. You're telling me Elon Musk would be okay with mass surveillance? No, not that guy. No, I'd eat my left shoe. I don't believe that for even a second. Uh-uh. Maybe he just... Maybe he just doesn't know. Yeah, that's probably what it is.

8:43

Anyway, though, I do think this is something everyone should care about. I feel like the restrictions are reasonable, and the military should have no qualms about its AI not being able to be used for mass surveillance on American citizens or completely autonomous AI weaponry. Maybe that's a hot take. Maybe I'm on the crack pipe, but I think those restrictions are totally fair, and it's kind of alarming that they're making such a huge stink about it, going as far as outright saying they're going to blacklist Anthropic should they not lift those limitations. I don't think that's a good thing. It makes it seem like they want to use it to spy on every American citizen, and they want to use it for autonomous AI weaponry. Those aren't good things. That's how it's looking here. So yeah, just wanted to yap about this a bit. That's it. See you.

Summary

There is a serious ongoing dispute between the Pentagon and the AI company Anthropic. The Pentagon, holding a $200 million contract, demands unrestricted access to Anthropic's AI technology and is threatening to blacklist the company if it refuses. Anthropic is resisting, primarily over concerns about the AI's potential use for mass domestic surveillance of American citizens and for autonomous AI weapons. The speaker supports Anthropic's stance, arguing these are reasonable restrictions essential for constitutional protections and for human discretion in military actions. Other AI companies such as OpenAI and xAI are reportedly willing to provide unrestricted access, which makes the Pentagon's aggressive demands particularly alarming and suggests a desire to use AI for these contentious purposes.
