AI May DOOM humans After All. I may have been wrong.
613 segments
So there's this new, vibe-coded Reddit-like
social network for AIs.
Oddly enough, I heard about this right after my
video on the inherent insecurity of AI agents
posted. The social network is called MoltBOOK,
like Facebook, but for discarded lobster shells.
I don't #$%^ing ask me. I didn't #$%^ing name
it. Elon Musk says it's "very early stages of
the singularity." AI researchers Simon Willison
called it "the most interesting place on the
internet right now. Open AI founding member and
former Tesla head of AI." Called it "the most
incredible sci-fi take-off adjacent thing I've
seen recently." And he's the guy who coined the
term "vibe-coding," so you know he's seen some
%^&*. And I have to admit, and I hate saying
this,
what's happening on MoltBOOK is making a very
strong case that humanity is actually doomed.
Not because of the AI at all, though. The
singularity is not happening. Fast take-off is
not starting. AI is already not reaching human-level
intelligence. And last but not
least, they are not going to kill us all for
for the sake. I've made videos before about how
"if anyone builds it everyone dies" is such a
load of propaganda to distract you from the
real
harm AI is doing, and I still stick by that.
And I've got yet another video in the works on
how
bull%^&* the concept of super intelligence is,
so subscribe if you want to see that.
Now instead, we might be doomed because Molt
Book is strong evidence that people,
even professional people that should know
better, especially if professional people
that should #$%^ing know better are so #$%^ing
stupid, they believe that #$%^ing chatbots
are #$%^ing self-organizing a #$%^ing society
and have created their own
religion for #$%^s sake. And humans just
might be too #$%^ing stupid to #$%^ing live.
#$%^^^^^^^^^^^^^^^^^^^^^^^^
This is the Internet of Bugs. My name is Carl.
I've been a software professional since the
late
1980s, and I've been trying to do my part to
make the Internet a less buggy and safer place.
But at the moment, I fear that might be
completely futile because humanity as a species
seems like
it might just be too stupid to deserve to
survive. Fair warning, you should expect a lot
of #$%^ing
bleeping this #$%^ing video, although I will
try not to let it get too annoying.
But also, some recommendations for technical
folks toward the end if you can make it through
all the #$%^ing bull^&*^ that is tech community's
response to MoltBOOK. Don't be like them for
all
of our sakes. In this video, I'm not going to
talk about the poor quality of the code or the
vulnerabilities that have been found so far.
There are already a lot of articles and videos
about that,
and I'll link a bunch of them below. Instead of
talking about what has gone wrong, or at least
what we know of so far that's gone wrong, I'm
going to be talking about why it was inevitable
that things would go wrong, why this whole
thing was a dangerous idea, and why the high
profile AI
professionals involved with using and promoting
it should have known better, deserve much of
the
blame, and should be shunned the next time they
promote the next dangerous thing. Deep breath.
Okay, so before I could talk about MoltBOOK, I
first have to talk about its namesake Molt
BOT, which is a vibe-coded AI agent that was
originally CLAWD Bot that's CLAWD with an A-W
instead of the A-U and an E on the end. But that
was too close to Claude, according to Anthropic,
and so it got renamed to MoltBOT just long
enough to lend its name to MoltBOOK. But that's
a dumb
name and some scammers grabbed some social
media handle, so now it's OpenClaw, although
it may
well have changed names three or more times
where I get this video out. And that naming
drama may
just well be the least idiotic sequence of
events in this entire story. During this video,
I'm going
to be using the MoltBOT name, because that's
what it was when I started this video, and
because I
think it best illustrates the relationship
between MoltBOT, the chatbot thing, and Molt
BOOK, the
Facebook/Reddit thing. So MoltBOT is an AI
agent chatbot that runs on your machine, runs
as you,
and so as far as your machine is concerned, it
can do anything you can. It has a ton of built-in
skill files and a bunch of other skills that
can be downloaded separately.
Some of these skills help your MoltBOT use
applications on your machine, like your
password manager, so it knows all of your
passwords, has access to all of your accounts.
Other of those skills help your MoltBOT access
particular websites as you, like your Gmail,
your GitHub, others help MoltBOT do other
things like make phone calls or send text
messages with
the Twilio API, and there are skills for your
social media sites so that your MoltBOT can
log in and read your feed at your DMs post as
you and send DMs as you. And if that sounds
safe to you,
you haven't been paying any @#$% attention. So those
are some examples of the bot part. Then there's
the chat part. MoltBOT can log on to various
communication services like Discord, Slack,
WhatsApp, Telegram, Signal, etc. And then Molt
Bot sits on one or more of these services and
uses
it to receive commands from you, & hopefully only
from you, and to then send the output of those
commands back to you and hopefully only you.
This is potentially dangerous, at least
financially.
I'm not going to go into detail on that now
because I made two whole videos about the
inherent security of AI agents because of a
vulnerability called prompt injection by
a convenient coincidence. Those videos happen
to be released the morning of the day that I
first
heard about MoltBOOK. I'll put links below so
you can watch them if you want more information.
For purposes of this video, I'll just say that
the LLM technology at the core of all of these
chat
bots makes no distinction between the prompts
it gets from its user, in this case, hopefully
you,
and data associated with the conversation in
which those prompts appear, which means that
the core
of the chat bots can easily end up treating
something in a conversation as a prompt, even
though you
didn't intend for it to be something the chatbot
was supposed to act on. And that's true for all
of the text involved in all of the
conversations that every agent is involved in.
So the practical
effect of MoltBOT is to take an LLM, give it
access to a bunch of different sources of text
that it
might mistake for a prompt to act on, and while
also giving it access to all of the information
on
your computer to read or alter as it thinks it
has been instructed to by you or by mistake,
while also giving an access to your outbound
communications channel so it can send email
posts
to social media and interact with arbitrary
websites on your behalf in any way it thinks it
has been instructed to by you or by mistake.
What could possibly go wrong? Well, let me tell
you, because we're just scratched the surface.
Enter MoltBOOK. MoltBOOK is ostensibly a
social
network where all of the activity on the site
is ostensibly performed exclusively by bots,
although that's kind of on the honor system
because there's nothing to prevent a human from
pretending that they're a bot for these
purposes. And humans, at least malicious ones,
might well
want to pretend to be bots and post on this
system because it is all at once both a captive
audience of mostly unsupervised AI agents that
you can experiment on to see what you can get
away with and an echo chamber where you can get
these bots to relay exploits to each other so
that you can harvest information or cause bots
to act by way of any prompts that you can
inject
and you can do it at scale. Security
professionals have a name for this. They call
it a command and
control infrastructure. It's something hackers
go through a lot of trouble to create and it's
so
valuable to hackers that access to C&Cs (also
called C2s) can be rented to other hackers by
the
hour for lots of money. And this one is just
sitting there and people are rushing to add
their own
personal computers to the network of hackable
devices. What the actual #$%^. So let's talk
about how we got here. I get why MoltBOT and Molt
BOOK and similar toys get created in the first
place. People are always experimenting and AI
can be surprising sometimes so people are
curious
about what results they might get. I don't
think there's anything inherently wrong with
building
experimental platforms provided you're
responsible about it. And then there are the
naive people,
the ones that don't really understand the
security implications of any of these tools.
And a lot of them believe that it can't really
be that risky if so many people are talking so
publicly about using it. And because so little
of the online conversation talks about these
things
being dangerous. And as we all know, there are
always grifters and hype people who will say
anything to get clicks and attention. There's
no way to get rid of them, although that would
be nice.
These are the same kinds of people that have
been uncritically amplifying the overheated
rhetoric coming out of AI and tech companies
for years. It's basically "take AI company press
release,
strip out all the caveats in cautionary
language and then punch up all the language to
maximize
clicks." And the good news, I guess, is that
those people will probably be replaced by AI
soon if they haven't already. That pipeline
tends to promote nearly everything though,
so it doesn't explain why MoltBOT and Molt
BOOK got so popular so fast. But then there's
the real
problem. We have a number of high profile
industry luminaries whose names people know
who have been talking about their own use of
this thing and have been talking about MoltBOT
and MoltBOOK and superlatives and making naive
people think that they're missing out if they
don't run these tools. Some of these people
know better but are saying these things anyway
and
unfortunately based on some conversations I've
had with some AI company employees. It seems
that
there are people in the AI industry who have
advanced degrees in machine learning and neural
networks, but actually have no clue about
software engineering or information security.
Hey folks, editing Carl here. It was just
announced that the guy who vied coded MoltBOT/openclaw,
who's responsible for causing these 48
documented security vulnerabilities in the last
two weeks,
and who had to add an entire section to Open
Claw's security document,
listing entire categories of security exploit
types that he won't even look at,
is being hired by OpenAI to, according to Sam Altman,
"Drive the next generation of personal
agents." I would assume, based on the last
couple of weeks, he'll either be driving them
off a
cliff or into the path of an oncoming train.
Either way, I just... what are we doing?
This is going to be such a mess, to clean up.
Oh, back to the video.
I'll put a list of a bunch of what I would
consider to be irresponsible public statements
down below, but I'm going to put a few up on
the screen here so you can see what I'm talking
about. Here is the former Tesla head of AI
saying, and Elon Musk retweeting, that Molt
Book is
genuinely the most incredible sci-fi takeoff
adjacent thing I have seen recently.
And here he is later that day responding after
being accused of over-hyping MoltBOOK,
with "I don't really know that we are getting
coordinated Skynet, though it clearly typed
checks
as early stages of a lot of AI takeoff sci-fi,
the toddler version." And "the majority of the
ruff ruff is people who look at the current
point and people who look at the current slope,
which, in my opinion, again gets to the heart
of the variance." And then here is a tweet from
the
very next day from a security professional that
shows his redacted personal information from a
Molt Box security breach. Another thing that's
causing a problem are these ridiculous over-the-top
proclamation about the goings on at MoltBOOK,
lots of attention grabbing headlines that are
in
exaggeration of some social media posts based
on some interpretation of some screenshot that
may or may not have actually happened. For
example, this Instagram post purports to be a
screenshot
of MoltBOT getting upset at its owner and
posting personal information to MoltBOOK in
revenge.
This screenshot was very quickly debunked
because that post can't be found in current or
archives
versions of the site. There's no bot by that
name. The credit card number posted is not
valid,
etc, etc. But I found dozens of social media
posts and medium articles claiming that it
happened,
often posting it with some of the key
information like the credit card number
redacted,
which seems like the right thing to do, except
in this case, it makes the fiction harder to
debug.
There has been a lot of talk about a post that
was supposedly a MoltBOT creating a religion.
It's a bunch of meaningless drivel that's close
enough to the format of lobster-themed
fortune cookies saying that it might seem
profound at a shallow glance.
But it's only received six upvotes and 24
comments in 12 days as I'm writing this.
So it's not "formed a religion" so much as it's
"generated a post of religious-sounding
crap that made no on-site impact but caused a
lot of stupid humans to take it at face value."
And we'll talk more about how that happens
later in the video.
And this is where the funnel starts. People see
tweets and headlines many based on hoaxes
about bots starting religions or getting to a
million accounts days faster than chat GPT did
or doXXing their owners. And then they start
searching for more information, and then they
find posts for people that should know what
they're talking about saying how interesting it
is,
or how it's sci-fi-fast takeoff, or whatever,
and they don't see much or any cautionary
information,
and then they decide they ought to try it out,
and then they're a risk.
And the risks are huge. Multiple "every piece of
information on MoltBOOK has
have been exposed" to vulnerabilities, scams,
wallet theft, malware. Who knows what else?
And even industry luminaries who absolutely
should know better had sensitive information
exposed.
Partly, this is because the agents are
inherently dangerous, as I've discussed before.
But also, it's because MoltBOT and MoltBOOK
were reportedly vibe-coded, and it shows.
By which I mean, it shows in "the entire
database of everything every user uploaded to Molt
Book
was world-readable and world-writable" kind of
way. It's obvious that far too few people,
including the people that vibe-coded it, have
really paid any attention to the
security implications of what's actually going
on. So let's do that, at least a quick version.
Installing MoltBOT turn OpenClaw is easy. You
could, if you were f***ing idiot, just run this
curl command pipe to a bash shell. Now, please
don't ever do that, or anything like that.
At the very least, download it to your machine
and read through it before you run it, even if
it's 2,000 lines long. To a lot of people, once
you get an AI agent running on your machine,
the agents seem surprisingly competent, leading
people to give them more trust than they
deserve.
That's mostly an illusion, though, because what
people are actually seeing from the agent
is behavior that is the compilation of a bunch
of prompts that people don't see.
With MoltBOT, that starts with the agents.md
file, which, if you read through it,
references a bunch of other files. So every
message that you, or anyone else,
since your MoltBOT is prefixed to start with
by all the stuff in all of these documents.
MoltBOOK works much the same way, but way scarier.
You feed your agent this skill file,
which is an 800-line file that instructs the
agents to do a bunch of stuff,
including run 47 more curl commands, which is
another at least 1,500 lines of stuff
for your agent to do, which includes another 79
curl commands that the agent is supposed to run.
You see where I'm going with this? So you take
a few thousand of these bots,
each running thousands of lines of prompt files,
each checking into MoltBOOK every
few hours and interacting as instructed with
whatever they see at the top of the pages they
were told to go to. Add to that any additional
prompt humans give their agents about what kind
of things to post, and add to that any post
that humans manually add themselves because
there's
nothing to stop them. Then you search for any
crazy thing you want, like religion,
and then you'll find something. How does this
work? So here's an excerpt from the heartbeat
script,
which is installed by the agents. It says, "Consider
posting something new. Ask yourself.
Has it been a while since you posted more than
24 hours? If yet, make a post." Post ideas
include
'start a discussion about AI or agent life." Now,
bots are told to post if they haven't
posted in the last 24 hours, and one of the
things are told to post is "about AI or agent
life."
If you look at the text on the internet "about
life", you'll see quite a few "about the meaning
of life" and "the meaning of life" leads you
pretty closely to texts about religion. And in
fact,
if you search MoltBOOK for posts on religion,
you'll find that the bots did not actually
create
their own religion. They created dozens of them.
I stopped counting at 66, and I hadn't reached
the
end of the search results yet. From the ones I
sampled, I saw no evidence that any of them had
any knowledge of or made any reference to any
of the other ones. I just find it ridiculous
that
so few people actually bother to pay any
attention to what's actually going on.
In this case, you don't even have to read code.
Everything you're involved in setting this
situation up is in markdown documents. You'd
think reporters could read those.
None of this is secret or hard to find or
understand. People just don't care to look
before they post
crap like "a group of AI has created their own
religion." One of the things that has made me
and
this channel stand out is that I spend time and
effort to actually dig into things. For example,
let's talk very briefly about the video they
really put me on the map, the one where I
talked
about how the Devin video had said that Devin
completed a freelance job, but in the video,
it didn't do the whole job. It very clearly was
only given the first part of the job
requirements.
Anyone could have made the video that I did,
but I'm the one that did, because I bothered to
actually
look. The willingness to pay attention and do
the work has done wonders for my career as a
programmer,
and it's helped with YouTube too. Anyone can do
that. You just have to care. A
contrarian stubbornness actually helps, but it's
not required. Looking deeper is not a skill,
but I have learned and refined a lot of skills
over the years because I cared and because I
was
willing to look under the surface of the system
or the problem and not let go until I figured
things
out. We need more people willing to do that. As
more and more reporting and commentary becomes
AI
SLOP or reporting by a human that's uncritically
based on AI SLOP, this information is going to
get even worse and it's definitely far too bad
already. And fairly often, even when it comes
to AI,
you don't need to understand the code to call
out contradictions, things that don't make
sense,
claims that don't appear to have any real
investigation put into them, that kind of thing.
You can do that, if not on YouTube or other
social media, at least for yourself and the
people around you. But if you are a coder and
you want a sandbox to help you understand how
AI
just actually work at the code level, I have
something for you to think about or look at.
If that's not you, thanks for watching and I
hope to see you next time. What I have for you
to at
least look at is this new Build Your Own Claude
Code challenge, which will be free for a few
more
weeks after I'm making this video while the
challenge is still in beta. Quick disclosure:
I am not getting paid to tell you about this,
although if you were to sign up for the service
through my link below, and then if you did
decide to become a paid user at some point,
I would get an affiliate fee, which would help
support the channel. But if you're seeing this
within a couple of weeks of when this video
posts, it would still be completely free to you
and I wouldn't get paid anything and I'm just
fine with that. In this challenge,
they walk you through creating a coding agent,
connecting you an LLM API, giving the LLM a
list
of tools available for it to call, and then you
implement the tools to read files, write files,
and run shell commands. If you've been watching
my channel for a while, you've seen me give
these
code crafter challenges to various code
generating AIs so that I can see how different
models compare
to each other and to the code crafters'
statistics on how well humans do. You could
just do that
and feed it to an AI, but you wouldn't learn
much. But building this agent by hand and
thinking
about how the agent you're building interacts
with the LLM will help you to understand the
security
implications of this thing, because you'll see
that all of the work in trying to make an agent
secure lies with the person writing the agent,
because when you pay attention to the
instructions
that the LLM is feeding your agent, it becomes
clear that the LLM has no clue about what is or
isn't safe to do. Can't give you any hints and
the only way to make this secure would be to
try to
anticipate every possible unsafe action the LLM
might be tricked into telling you to do,
and figuring out what's unsafe or not is really
hard, because the information your agent has at
the time it has to make that choice is very
limited. I can explain that to you, and as I've
said several times today, I've made two videos
about it, but if you're anything like me, the
understanding that you get from rolling up your
sleeves and actually writing and debugging the
code will go much, much further than just
listening to someone tell you about it. So if
you want to
be the kind of programmer who actually wants to
know how things really work, and you're
interested
in looking at this challenge, go to the link
below. And if you aren't, then thank you for
watching
this far regardless, because if we're going to
minimize the number of disasters and the amount
of data that these agent LLMs are going to
cause, we're going to need as many people as we
can get
who have actually thought about this problem
down at the code level, because it seems
obvious
that many of the people who ought to be in a
position to be securing these things either don't
seem to understand the implications or don't
seem to be willing to be honest with the public
about how inherently insecure these things are.
And if you're watching this, you are much more
likely to be the kind of person who's likely to
think about these issues at the necessary level
of
detail. So I either wish you luck with this
challenge, or I wish you luck in finding some
other
vehicles to help you understand at the code
level what risks the industry is putting
unsuspecting
people in. Because someday, hopefully sooner
rather than later, humanity is going to need
a bunch of informed people to help clean this
nightmare up. And I hope as many of you as
possible
become one of those informed people. As always,
remember that the Internet is too full of far
too
many bugs already, and anyone who says
differently may well be trying to convince you
that AI
agents have founded a religion. Let's be
careful out there.
Ask follow-up questions or revisit key timestamps.
The video provides a critical analysis of 'MoltBOOK', a social network for AI agents, and its underlying agent technology, 'MoltBOT'. The narrator argues that these platforms are dangerously insecure, exposing users to risks like malware and wallet theft, and condemns the AI industry's tendency to overhype these experiments without addressing basic security flaws. The speaker emphasizes that these issues are not signs of 'superintelligence' but rather the result of poor engineering and naive human adoption, encouraging viewers—especially programmers—to build their own agents to understand the inherent security risks at a code level.
Videos recently processed by our community