HomeVideos

AI May DOOM humans After All. I may have been wrong.

Now Playing

AI May DOOM humans After All. I may have been wrong.

Transcript

613 segments

0:00

So there's this new, vibe-coded Reddit-like

0:02

social network for AIs.

0:03

Oddly enough, I heard about this right after my

0:06

video on the inherent insecurity of AI agents

0:08

posted. The social network is called MoltBOOK,

0:12

like Facebook, but for discarded lobster shells.

0:15

I don't #$%^ing ask me. I didn't #$%^ing name

0:18

it. Elon Musk says it's "very early stages of

0:21

the singularity." AI researchers Simon Willison

0:23

called it "the most interesting place on the

0:25

internet right now. Open AI founding member and

0:28

former Tesla head of AI." Called it "the most

0:30

incredible sci-fi take-off adjacent thing I've

0:33

seen recently." And he's the guy who coined the

0:35

term "vibe-coding," so you know he's seen some

0:37

%^&*. And I have to admit, and I hate saying

0:40

this,

0:40

what's happening on MoltBOOK is making a very

0:43

strong case that humanity is actually doomed.

0:45

Not because of the AI at all, though. The

0:47

singularity is not happening. Fast take-off is

0:49

not starting. AI is already not reaching human-level

0:51

intelligence. And last but not

0:53

least, they are not going to kill us all for

0:55

for the sake. I've made videos before about how

0:58

"if anyone builds it everyone dies" is such a

1:00

load of propaganda to distract you from the

1:02

real

1:02

harm AI is doing, and I still stick by that.

1:05

And I've got yet another video in the works on

1:07

how

1:08

bull%^&* the concept of super intelligence is,

1:10

so subscribe if you want to see that.

1:11

Now instead, we might be doomed because Molt

1:14

Book is strong evidence that people,

1:16

even professional people that should know

1:18

better, especially if professional people

1:20

that should #$%^ing know better are so #$%^ing

1:22

stupid, they believe that #$%^ing chatbots

1:24

are #$%^ing self-organizing a #$%^ing society

1:27

and have created their own

1:28

religion for #$%^s sake. And humans just

1:30

might be too #$%^ing stupid to #$%^ing live.

1:33

#$%^^^^^^^^^^^^^^^^^^^^^^^^

1:33

This is the Internet of Bugs. My name is Carl.

1:44

I've been a software professional since the

1:45

late

1:45

1980s, and I've been trying to do my part to

1:47

make the Internet a less buggy and safer place.

1:49

But at the moment, I fear that might be

1:51

completely futile because humanity as a species

1:54

seems like

1:54

it might just be too stupid to deserve to

1:56

survive. Fair warning, you should expect a lot

1:59

of #$%^ing

1:59

bleeping this #$%^ing video, although I will

2:01

try not to let it get too annoying.

2:03

But also, some recommendations for technical

2:06

folks toward the end if you can make it through

2:08

all the #$%^ing bull^&*^ that is tech community's

2:10

response to MoltBOOK. Don't be like them for

2:13

all

2:13

of our sakes. In this video, I'm not going to

2:15

talk about the poor quality of the code or the

2:18

vulnerabilities that have been found so far.

2:20

There are already a lot of articles and videos

2:22

about that,

2:22

and I'll link a bunch of them below. Instead of

2:24

talking about what has gone wrong, or at least

2:26

what we know of so far that's gone wrong, I'm

2:28

going to be talking about why it was inevitable

2:30

that things would go wrong, why this whole

2:32

thing was a dangerous idea, and why the high

2:34

profile AI

2:35

professionals involved with using and promoting

2:37

it should have known better, deserve much of

2:39

the

2:39

blame, and should be shunned the next time they

2:41

promote the next dangerous thing. Deep breath.

2:46

Okay, so before I could talk about MoltBOOK, I

2:48

first have to talk about its namesake Molt

2:50

BOT, which is a vibe-coded AI agent that was

2:53

originally CLAWD Bot that's CLAWD with an A-W

2:56

instead of the A-U and an E on the end. But that

2:59

was too close to Claude, according to Anthropic,

3:01

and so it got renamed to MoltBOT just long

3:03

enough to lend its name to MoltBOOK. But that's

3:06

a dumb

3:06

name and some scammers grabbed some social

3:08

media handle, so now it's OpenClaw, although

3:11

it may

3:11

well have changed names three or more times

3:13

where I get this video out. And that naming

3:15

drama may

3:16

just well be the least idiotic sequence of

3:18

events in this entire story. During this video,

3:21

I'm going

3:22

to be using the MoltBOT name, because that's

3:24

what it was when I started this video, and

3:25

because I

3:26

think it best illustrates the relationship

3:28

between MoltBOT, the chatbot thing, and Molt

3:31

BOOK, the

3:31

Facebook/Reddit thing. So MoltBOT is an AI

3:33

agent chatbot that runs on your machine, runs

3:36

as you,

3:37

and so as far as your machine is concerned, it

3:39

can do anything you can. It has a ton of built-in

3:41

skill files and a bunch of other skills that

3:43

can be downloaded separately.

3:45

Some of these skills help your MoltBOT use

3:46

applications on your machine, like your

3:48

password manager, so it knows all of your

3:50

passwords, has access to all of your accounts.

3:53

Other of those skills help your MoltBOT access

3:55

particular websites as you, like your Gmail,

3:57

your GitHub, others help MoltBOT do other

3:59

things like make phone calls or send text

4:01

messages with

4:01

the Twilio API, and there are skills for your

4:03

social media sites so that your MoltBOT can

4:05

log in and read your feed at your DMs post as

4:07

you and send DMs as you. And if that sounds

4:10

safe to you,

4:10

you haven't been paying any @#$% attention. So those

4:13

are some examples of the bot part. Then there's

4:16

the chat part. MoltBOT can log on to various

4:18

communication services like Discord, Slack,

4:20

WhatsApp, Telegram, Signal, etc. And then Molt

4:23

Bot sits on one or more of these services and

4:25

uses

4:25

it to receive commands from you, & hopefully only

4:28

from you, and to then send the output of those

4:30

commands back to you and hopefully only you.

4:33

This is potentially dangerous, at least

4:36

financially.

4:37

I'm not going to go into detail on that now

4:38

because I made two whole videos about the

4:40

inherent security of AI agents because of a

4:42

vulnerability called prompt injection by

4:44

a convenient coincidence. Those videos happen

4:46

to be released the morning of the day that I

4:48

first

4:48

heard about MoltBOOK. I'll put links below so

4:50

you can watch them if you want more information.

4:52

For purposes of this video, I'll just say that

4:55

the LLM technology at the core of all of these

4:57

chat

4:57

bots makes no distinction between the prompts

4:59

it gets from its user, in this case, hopefully

5:02

you,

5:02

and data associated with the conversation in

5:04

which those prompts appear, which means that

5:06

the core

5:07

of the chat bots can easily end up treating

5:09

something in a conversation as a prompt, even

5:11

though you

5:11

didn't intend for it to be something the chatbot

5:13

was supposed to act on. And that's true for all

5:15

of the text involved in all of the

5:16

conversations that every agent is involved in.

5:18

So the practical

5:19

effect of MoltBOT is to take an LLM, give it

5:21

access to a bunch of different sources of text

5:24

that it

5:24

might mistake for a prompt to act on, and while

5:27

also giving it access to all of the information

5:29

on

5:29

your computer to read or alter as it thinks it

5:31

has been instructed to by you or by mistake,

5:34

while also giving an access to your outbound

5:36

communications channel so it can send email

5:37

posts

5:38

to social media and interact with arbitrary

5:39

websites on your behalf in any way it thinks it

5:41

has been instructed to by you or by mistake.

5:44

What could possibly go wrong? Well, let me tell

5:47

you, because we're just scratched the surface.

5:49

Enter MoltBOOK. MoltBOOK is ostensibly a

5:52

social

5:52

network where all of the activity on the site

5:54

is ostensibly performed exclusively by bots,

5:56

although that's kind of on the honor system

5:58

because there's nothing to prevent a human from

6:00

pretending that they're a bot for these

6:02

purposes. And humans, at least malicious ones,

6:04

might well

6:04

want to pretend to be bots and post on this

6:07

system because it is all at once both a captive

6:10

audience of mostly unsupervised AI agents that

6:12

you can experiment on to see what you can get

6:14

away with and an echo chamber where you can get

6:16

these bots to relay exploits to each other so

6:18

that you can harvest information or cause bots

6:20

to act by way of any prompts that you can

6:23

inject

6:23

and you can do it at scale. Security

6:25

professionals have a name for this. They call

6:28

it a command and

6:28

control infrastructure. It's something hackers

6:31

go through a lot of trouble to create and it's

6:33

so

6:33

valuable to hackers that access to C&Cs (also

6:36

called C2s) can be rented to other hackers by

6:39

the

6:39

hour for lots of money. And this one is just

6:41

sitting there and people are rushing to add

6:43

their own

6:44

personal computers to the network of hackable

6:46

devices. What the actual #$%^. So let's talk

6:50

about how we got here. I get why MoltBOT and Molt

6:52

BOOK and similar toys get created in the first

6:54

place. People are always experimenting and AI

6:57

can be surprising sometimes so people are

6:59

curious

6:59

about what results they might get. I don't

7:01

think there's anything inherently wrong with

7:03

building

7:03

experimental platforms provided you're

7:05

responsible about it. And then there are the

7:06

naive people,

7:07

the ones that don't really understand the

7:08

security implications of any of these tools.

7:10

And a lot of them believe that it can't really

7:12

be that risky if so many people are talking so

7:14

publicly about using it. And because so little

7:16

of the online conversation talks about these

7:19

things

7:19

being dangerous. And as we all know, there are

7:21

always grifters and hype people who will say

7:24

anything to get clicks and attention. There's

7:26

no way to get rid of them, although that would

7:27

be nice.

7:28

These are the same kinds of people that have

7:29

been uncritically amplifying the overheated

7:31

rhetoric coming out of AI and tech companies

7:33

for years. It's basically "take AI company press

7:36

release,

7:36

strip out all the caveats in cautionary

7:38

language and then punch up all the language to

7:40

maximize

7:40

clicks." And the good news, I guess, is that

7:42

those people will probably be replaced by AI

7:44

soon if they haven't already. That pipeline

7:46

tends to promote nearly everything though,

7:48

so it doesn't explain why MoltBOT and Molt

7:50

BOOK got so popular so fast. But then there's

7:52

the real

7:52

problem. We have a number of high profile

7:54

industry luminaries whose names people know

7:56

who have been talking about their own use of

7:58

this thing and have been talking about MoltBOT

8:00

and MoltBOOK and superlatives and making naive

8:03

people think that they're missing out if they

8:05

don't run these tools. Some of these people

8:07

know better but are saying these things anyway

8:09

and

8:09

unfortunately based on some conversations I've

8:11

had with some AI company employees. It seems

8:13

that

8:13

there are people in the AI industry who have

8:14

advanced degrees in machine learning and neural

8:17

networks, but actually have no clue about

8:18

software engineering or information security.

8:21

Hey folks, editing Carl here. It was just

8:23

announced that the guy who vied coded MoltBOT/openclaw,

8:26

who's responsible for causing these 48

8:28

documented security vulnerabilities in the last

8:30

two weeks,

8:31

and who had to add an entire section to Open

8:33

Claw's security document,

8:34

listing entire categories of security exploit

8:36

types that he won't even look at,

8:38

is being hired by OpenAI to, according to Sam Altman,

8:41

"Drive the next generation of personal

8:43

agents." I would assume, based on the last

8:45

couple of weeks, he'll either be driving them

8:47

off a

8:47

cliff or into the path of an oncoming train.

8:50

Either way, I just... what are we doing?

8:55

This is going to be such a mess, to clean up.

9:00

Oh, back to the video.

9:03

I'll put a list of a bunch of what I would

9:05

consider to be irresponsible public statements

9:08

down below, but I'm going to put a few up on

9:10

the screen here so you can see what I'm talking

9:11

about. Here is the former Tesla head of AI

9:13

saying, and Elon Musk retweeting, that Molt

9:16

Book is

9:16

genuinely the most incredible sci-fi takeoff

9:19

adjacent thing I have seen recently.

9:20

And here he is later that day responding after

9:23

being accused of over-hyping MoltBOOK,

9:24

with "I don't really know that we are getting

9:26

coordinated Skynet, though it clearly typed

9:28

checks

9:29

as early stages of a lot of AI takeoff sci-fi,

9:31

the toddler version." And "the majority of the

9:34

ruff ruff is people who look at the current

9:36

point and people who look at the current slope,

9:38

which, in my opinion, again gets to the heart

9:40

of the variance." And then here is a tweet from

9:42

the

9:42

very next day from a security professional that

9:45

shows his redacted personal information from a

9:48

Molt Box security breach. Another thing that's

9:50

causing a problem are these ridiculous over-the-top

9:53

proclamation about the goings on at MoltBOOK,

9:55

lots of attention grabbing headlines that are

9:58

in

9:58

exaggeration of some social media posts based

10:00

on some interpretation of some screenshot that

10:02

may or may not have actually happened. For

10:04

example, this Instagram post purports to be a

10:06

screenshot

10:06

of MoltBOT getting upset at its owner and

10:08

posting personal information to MoltBOOK in

10:11

revenge.

10:12

This screenshot was very quickly debunked

10:14

because that post can't be found in current or

10:17

archives

10:17

versions of the site. There's no bot by that

10:19

name. The credit card number posted is not

10:20

valid,

10:20

etc, etc. But I found dozens of social media

10:22

posts and medium articles claiming that it

10:24

happened,

10:25

often posting it with some of the key

10:26

information like the credit card number

10:28

redacted,

10:28

which seems like the right thing to do, except

10:30

in this case, it makes the fiction harder to

10:32

debug.

10:32

There has been a lot of talk about a post that

10:34

was supposedly a MoltBOT creating a religion.

10:37

It's a bunch of meaningless drivel that's close

10:39

enough to the format of lobster-themed

10:41

fortune cookies saying that it might seem

10:43

profound at a shallow glance.

10:44

But it's only received six upvotes and 24

10:47

comments in 12 days as I'm writing this.

10:49

So it's not "formed a religion" so much as it's

10:51

"generated a post of religious-sounding

10:53

crap that made no on-site impact but caused a

10:55

lot of stupid humans to take it at face value."

10:57

And we'll talk more about how that happens

10:59

later in the video.

11:00

And this is where the funnel starts. People see

11:02

tweets and headlines many based on hoaxes

11:04

about bots starting religions or getting to a

11:07

million accounts days faster than chat GPT did

11:09

or doXXing their owners. And then they start

11:11

searching for more information, and then they

11:12

find posts for people that should know what

11:13

they're talking about saying how interesting it

11:15

is,

11:15

or how it's sci-fi-fast takeoff, or whatever,

11:18

and they don't see much or any cautionary

11:20

information,

11:20

and then they decide they ought to try it out,

11:22

and then they're a risk.

11:23

And the risks are huge. Multiple "every piece of

11:27

information on MoltBOOK has

11:28

have been exposed" to vulnerabilities, scams,

11:30

wallet theft, malware. Who knows what else?

11:32

And even industry luminaries who absolutely

11:35

should know better had sensitive information

11:37

exposed.

11:38

Partly, this is because the agents are

11:39

inherently dangerous, as I've discussed before.

11:41

But also, it's because MoltBOT and MoltBOOK

11:43

were reportedly vibe-coded, and it shows.

11:46

By which I mean, it shows in "the entire

11:48

database of everything every user uploaded to Molt

11:51

Book

11:51

was world-readable and world-writable" kind of

11:54

way. It's obvious that far too few people,

11:56

including the people that vibe-coded it, have

11:58

really paid any attention to the

12:00

security implications of what's actually going

12:01

on. So let's do that, at least a quick version.

12:05

Installing MoltBOT turn OpenClaw is easy. You

12:09

could, if you were f***ing idiot, just run this

12:11

curl command pipe to a bash shell. Now, please

12:14

don't ever do that, or anything like that.

12:16

At the very least, download it to your machine

12:17

and read through it before you run it, even if

12:19

it's 2,000 lines long. To a lot of people, once

12:22

you get an AI agent running on your machine,

12:24

the agents seem surprisingly competent, leading

12:27

people to give them more trust than they

12:28

deserve.

12:29

That's mostly an illusion, though, because what

12:30

people are actually seeing from the agent

12:32

is behavior that is the compilation of a bunch

12:34

of prompts that people don't see.

12:36

With MoltBOT, that starts with the agents.md

12:39

file, which, if you read through it,

12:40

references a bunch of other files. So every

12:43

message that you, or anyone else,

12:44

since your MoltBOT is prefixed to start with

12:46

by all the stuff in all of these documents.

12:48

MoltBOOK works much the same way, but way scarier.

12:53

You feed your agent this skill file,

12:55

which is an 800-line file that instructs the

12:57

agents to do a bunch of stuff,

12:58

including run 47 more curl commands, which is

13:00

another at least 1,500 lines of stuff

13:02

for your agent to do, which includes another 79

13:05

curl commands that the agent is supposed to run.

13:07

You see where I'm going with this? So you take

13:09

a few thousand of these bots,

13:11

each running thousands of lines of prompt files,

13:13

each checking into MoltBOOK every

13:14

few hours and interacting as instructed with

13:16

whatever they see at the top of the pages they

13:18

were told to go to. Add to that any additional

13:20

prompt humans give their agents about what kind

13:22

of things to post, and add to that any post

13:24

that humans manually add themselves because

13:26

there's

13:26

nothing to stop them. Then you search for any

13:28

crazy thing you want, like religion,

13:30

and then you'll find something. How does this

13:32

work? So here's an excerpt from the heartbeat

13:34

script,

13:35

which is installed by the agents. It says, "Consider

13:37

posting something new. Ask yourself.

13:39

Has it been a while since you posted more than

13:41

24 hours? If yet, make a post." Post ideas

13:44

include

13:45

'start a discussion about AI or agent life." Now,

13:49

bots are told to post if they haven't

13:51

posted in the last 24 hours, and one of the

13:52

things are told to post is "about AI or agent

13:54

life."

13:54

If you look at the text on the internet "about

13:56

life", you'll see quite a few "about the meaning

13:59

of life" and "the meaning of life" leads you

14:00

pretty closely to texts about religion. And in

14:02

fact,

14:03

if you search MoltBOOK for posts on religion,

14:05

you'll find that the bots did not actually

14:08

create

14:08

their own religion. They created dozens of them.

14:11

I stopped counting at 66, and I hadn't reached

14:13

the

14:13

end of the search results yet. From the ones I

14:15

sampled, I saw no evidence that any of them had

14:18

any knowledge of or made any reference to any

14:20

of the other ones. I just find it ridiculous

14:23

that

14:23

so few people actually bother to pay any

14:25

attention to what's actually going on.

14:27

In this case, you don't even have to read code.

14:28

Everything you're involved in setting this

14:30

situation up is in markdown documents. You'd

14:32

think reporters could read those.

14:34

None of this is secret or hard to find or

14:36

understand. People just don't care to look

14:38

before they post

14:39

crap like "a group of AI has created their own

14:41

religion." One of the things that has made me

14:43

and

14:44

this channel stand out is that I spend time and

14:46

effort to actually dig into things. For example,

14:49

let's talk very briefly about the video they

14:50

really put me on the map, the one where I

14:52

talked

14:52

about how the Devin video had said that Devin

14:55

completed a freelance job, but in the video,

14:57

it didn't do the whole job. It very clearly was

15:00

only given the first part of the job

15:02

requirements.

15:02

Anyone could have made the video that I did,

15:04

but I'm the one that did, because I bothered to

15:07

actually

15:07

look. The willingness to pay attention and do

15:09

the work has done wonders for my career as a

15:12

programmer,

15:12

and it's helped with YouTube too. Anyone can do

15:15

that. You just have to care. A

15:17

contrarian stubbornness actually helps, but it's

15:20

not required. Looking deeper is not a skill,

15:23

but I have learned and refined a lot of skills

15:26

over the years because I cared and because I

15:28

was

15:28

willing to look under the surface of the system

15:30

or the problem and not let go until I figured

15:32

things

15:32

out. We need more people willing to do that. As

15:36

more and more reporting and commentary becomes

15:38

AI

15:39

SLOP or reporting by a human that's uncritically

15:42

based on AI SLOP, this information is going to

15:44

get even worse and it's definitely far too bad

15:46

already. And fairly often, even when it comes

15:49

to AI,

15:50

you don't need to understand the code to call

15:52

out contradictions, things that don't make

15:54

sense,

15:54

claims that don't appear to have any real

15:55

investigation put into them, that kind of thing.

15:57

You can do that, if not on YouTube or other

16:00

social media, at least for yourself and the

16:02

people around you. But if you are a coder and

16:04

you want a sandbox to help you understand how

16:06

AI

16:06

just actually work at the code level, I have

16:09

something for you to think about or look at.

16:11

If that's not you, thanks for watching and I

16:13

hope to see you next time. What I have for you

16:14

to at

16:15

least look at is this new Build Your Own Claude

16:17

Code challenge, which will be free for a few

16:19

more

16:19

weeks after I'm making this video while the

16:21

challenge is still in beta. Quick disclosure:

16:23

I am not getting paid to tell you about this,

16:25

although if you were to sign up for the service

16:27

through my link below, and then if you did

16:29

decide to become a paid user at some point,

16:31

I would get an affiliate fee, which would help

16:32

support the channel. But if you're seeing this

16:34

within a couple of weeks of when this video

16:36

posts, it would still be completely free to you

16:37

and I wouldn't get paid anything and I'm just

16:39

fine with that. In this challenge,

16:41

they walk you through creating a coding agent,

16:44

connecting you an LLM API, giving the LLM a

16:46

list

16:46

of tools available for it to call, and then you

16:48

implement the tools to read files, write files,

16:51

and run shell commands. If you've been watching

16:53

my channel for a while, you've seen me give

16:56

these

16:56

code crafter challenges to various code

16:58

generating AIs so that I can see how different

17:00

models compare

17:00

to each other and to the code crafters'

17:02

statistics on how well humans do. You could

17:04

just do that

17:05

and feed it to an AI, but you wouldn't learn

17:07

much. But building this agent by hand and

17:10

thinking

17:10

about how the agent you're building interacts

17:12

with the LLM will help you to understand the

17:14

security

17:14

implications of this thing, because you'll see

17:16

that all of the work in trying to make an agent

17:19

secure lies with the person writing the agent,

17:21

because when you pay attention to the

17:22

instructions

17:23

that the LLM is feeding your agent, it becomes

17:25

clear that the LLM has no clue about what is or

17:28

isn't safe to do. Can't give you any hints and

17:30

the only way to make this secure would be to

17:32

try to

17:32

anticipate every possible unsafe action the LLM

17:35

might be tricked into telling you to do,

17:37

and figuring out what's unsafe or not is really

17:40

hard, because the information your agent has at

17:42

the time it has to make that choice is very

17:44

limited. I can explain that to you, and as I've

17:47

said several times today, I've made two videos

17:49

about it, but if you're anything like me, the

17:51

understanding that you get from rolling up your

17:53

sleeves and actually writing and debugging the

17:56

code will go much, much further than just

17:58

listening to someone tell you about it. So if

18:01

you want to

18:01

be the kind of programmer who actually wants to

18:03

know how things really work, and you're

18:05

interested

18:06

in looking at this challenge, go to the link

18:07

below. And if you aren't, then thank you for

18:09

watching

18:09

this far regardless, because if we're going to

18:11

minimize the number of disasters and the amount

18:13

of data that these agent LLMs are going to

18:14

cause, we're going to need as many people as we

18:16

can get

18:17

who have actually thought about this problem

18:19

down at the code level, because it seems

18:21

obvious

18:22

that many of the people who ought to be in a

18:24

position to be securing these things either don't

18:26

seem to understand the implications or don't

18:28

seem to be willing to be honest with the public

18:30

about how inherently insecure these things are.

18:33

And if you're watching this, you are much more

18:35

likely to be the kind of person who's likely to

18:38

think about these issues at the necessary level

18:40

of

18:40

detail. So I either wish you luck with this

18:42

challenge, or I wish you luck in finding some

18:45

other

18:45

vehicles to help you understand at the code

18:47

level what risks the industry is putting

18:49

unsuspecting

18:49

people in. Because someday, hopefully sooner

18:52

rather than later, humanity is going to need

18:54

a bunch of informed people to help clean this

18:56

nightmare up. And I hope as many of you as

18:58

possible

18:58

become one of those informed people. As always,

19:01

remember that the Internet is too full of far

19:04

too

19:05

many bugs already, and anyone who says

19:06

differently may well be trying to convince you

19:08

that AI

19:09

agents have founded a religion. Let's be

19:11

careful out there.

Interactive Summary

The video provides a critical analysis of 'MoltBOOK', a social network for AI agents, and its underlying agent technology, 'MoltBOT'. The narrator argues that these platforms are dangerously insecure, exposing users to risks like malware and wallet theft, and condemns the AI industry's tendency to overhype these experiments without addressing basic security flaws. The speaker emphasizes that these issues are not signs of 'superintelligence' but rather the result of poor engineering and naive human adoption, encouraging viewers—especially programmers—to build their own agents to understand the inherent security risks at a code level.

Suggested questions

3 ready-made prompts