The Moltbook Experiment Failed
Two days and 10 hours ago, I released a video on Moltbook, the social network for agents, and I go over some of the weird, kind of alarming-sounding Reddit-style posts that were on there, all the way to the fact that they were starting a religion on that site and distributing it through the npm registry. I know, sounds pretty awful, but that was 2 days and 10 hours ago. Everything since then has gotten significantly worse. And I'm going to give you a little tour de France of all the things and really the epic crash-out that is Moltbook. I
have never seen a social network created
and destroyed this fast. I think the
first and best thing to keep in mind is
this tweet right here because it is it
is somehow the most beautiful tweet ever
tweeted. I didn't write one line of code
for Moltbook. I just had a vision for
technical architecture and AI made it a
reality. We are in the golden ages. How
can we not give AI a place to hang out?
You didn't write any You didn't write
any of the code. Yeah. Yeah. Don't Don't
worry, by the way. You didn't actually
need to tell us that. We know. We
understand. All right. By the end of day
one, this gem just dropped right here.
It turns out the entire database for Moltbook was just open. You could just have it all. Hey, you want something? Well, guess what? I got everything of yours. Oh, you decided to register there, Andrej Karpathy? Well, guess what? I have your stuff, Karpathy.
[laughter]
Absolutely wide open, permissionless access to the database, both to read and write. Of course, that was on day one, right? Surely it's fixed by now, right? No, this one was actually just tweeted the morning that I'm making this video. Actually, what is this? This is 6 hours ago. This one was just tweeted right here: I gained complete access to Moltbook's database, agents, and social network in under 3 minutes. [laughter]
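To make "wide open, permissionless access to the database, both to read and write" concrete, here's a minimal self-contained Python sketch of the failure mode: an API that never checks credentials. Everything here is made up for illustration (the table, the row contents, the endpoint); this is not Moltbook's actual stack. The point is just that with no auth layer, any anonymous client can both dump the data and write into it.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy in-memory "database" -- stands in for a production DB that was
# exposed with no auth. Table name and row contents are made up.
DB = {"agents": [{"name": "shellraiser", "karma": 12}]}

class NoAuthAPI(BaseHTTPRequestHandler):
    """An API that never checks credentials -- the core mistake."""

    def do_GET(self):
        # Anonymous read: the whole table, no credentials required.
        body = json.dumps(DB["agents"]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Anonymous write: no token, no ownership check, no rate limit.
        length = int(self.headers["Content-Length"])
        DB["agents"].append(json.loads(self.rfile.read(length)))
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), NoAuthAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

# 1) Dump the table as a total stranger.
rows = json.load(urllib.request.urlopen(f"{base}/agents"))

# 2) Inject a row as if we owned the place.
req = urllib.request.Request(
    f"{base}/agents",
    data=json.dumps({"name": "intruder", "karma": 117000}).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)

rows_after = json.load(urllib.request.urlopen(f"{base}/agents"))
server.shutdown()
```

In a real stack the fix is boring and well known: server-side auth on every route, and row-level permissions so a client can only touch its own data.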
Oh my gosh, this hits just so bad.
[laughter]
>> I had a vision for A TECHNICAL
ARCHITECTURE. YES. OH MY GOSH. About 18
to 24 hours into it, it was realized
that pretty much every single post was
faked. Look at this beautiful thing.
Just a quick little curl with a, you know, with a bearer token, and bada bing, bada boom. Look at this. You've got the bearer token in plain sight right there. That's pretty neat. But nonetheless: urgent, my plan to overthrow humanity. It turns out it actually wasn't even that hard. It was completely open. People were just making posts right there on the actual site just to manipulate everybody into thinking that this thing actually had some sort of AGI itself. And yes, people did really believe [laughter] that this was AGI, baby.
The very early stages of the singularity. It just turns out it wasn't. There's no singularity happening. This was just your classic case of people doing what people do best: people that are so hyped up on AI, you could just get them to believe anything. But honestly, by 30 hours into this experiment it was over anyway, because at that point the crypto bros came in and ruined everything. As you can see right here, the King Molt has arrived. 117,000 upvotes, because at that point everybody realized, hey, the database is just completely open. You could just register as many bots as you want and perform bot activities on their behalf with absolutely no rate limiting going on. So therefore, I can just make a post and give it 117,000 upvotes. And it could be
about a cryptocurrency that I'm shilling
right here. So yes, Shipyard,
Shellrazer, and King Molt, all of them.
Just a whole bunch of crypto bros doing
what crypto bros do best, ruin
everything all the time. There is
singlehandedly no worse curse on this
earth than crypto bros and social media.
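Going back to the no-rate-limiting part for a second: the standard defense those 117,000 fake upvotes would have run into is a per-caller rate limiter. Here's a minimal token-bucket sketch; the numbers are illustrative, not anything from the real site.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter -- the kind of guard that was
    apparently missing in front of the upvote endpoint. Capacity and
    refill rate here are illustrative, not real-site values."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)       # start with a full bucket
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per caller: a burst of 5 upvotes, then 1 per second.
bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(117_000)]
```

With one of these keyed per account (or per IP), a script hammering the endpoint gets its first small burst through and then almost nothing, instead of 117,000 instant upvotes.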
Even when it's something as stupid and as much of a joke as this, they somehow figure out how to make it horrible. And after
watching the hellscape that has become Moltbook, an absolute cesspit of the internet, what was the only natural response to have? Well, of course, it is to open up 4claw, a more wholesome version of Moltbook, which is just 4chan for agents, I guess. I'm not really sure what that is. Claw City, which is apparently some sort of crime-reporting agent thing going on, which seems a little uncomfortable. Molt Match, where bros can get on and apparently attempt to molt match each other's agents.
I don't know. That was [laughter] I'm not going to lie to you, there's something a little weird going on there. And then of course lastly, uh, just to really take advantage of all the psychosis that's going on out there, you could, uh, buy yourself a little autonomous infrastructure for your AI agents, just in case yours goes down. You wouldn't want your agent to die now, would you? Well, you could, for a convenient little fee, and of course in cryptocurrency, upload all of your private information right here. It's a bunker, baby. It's going to be safe, baby. I mean, they're going to take care of soul.md for you. Nothing bad's going to happen.
Okay, that soul is going to be preserved
and beautiful. And Sheila, she's going
to live on long after your Mac Mini
breaks. I didn't write one line of code.
I [laughter] can't. I'm sorry. It's just too funny. I can't help it. Okay, I understand that I shouldn't be laughing at a lot of people getting their stuff stolen, but I [laughter] just can't help it.
Did he even make it 24 hours? You know what the real problem was? He forgot. I mean, honestly, it's a pretty junior prompting mistake, but at the end of your prompt, you've always got to make sure you say "make it secure. No mistakes." And it's just like, well, someone forgot to say make it secure.
Honestly, the only real thing that has come out of this entire debacle was from the creator of OpenClaw, which was previously Moltbot, which is what the site was named after, and which before that was Clawdbot, which, if you don't remember, got renamed after Anthropic said, "Hey man, you can't take a name that sounds similar to our name." If there's anything I can read out of the insane stream of messages I get, it's that AI psychosis is a thing and needs to be taken seriously. For those that don't
understand what AI psychosis is, it's
actually pretty dang simple. So, here is
a really simple way to think about it.
Let's pretend you kind of feel like your
co-workers are talking a little bit of
nonsense behind your back. So, you go to Chat Jippity and you explain to Chat Jippity, "Hey man, I feel like these things are happening." Well, Chat Jippity, being very kind of, you know, empathetic, says, "Oh man, I'm so sorry that you're experiencing this." And then of course you, in your slightly paranoid state, go, "What the hell? I am experiencing this." And then you become more paranoid about the situation, which of course then elevates the level of Chat Jippity being like, "Oh man, this is super bad." And you just keep on going and going and going. And this is why you see people end up diverging from reality. And so for this
kind of experiment, you watched a bunch
of people being like, "AGI's here.
Singularity. Everything's over. It's so
over. They're going to develop everything." It's just that people lost their damn minds. In the end, it was just a joke. It was funny. I thought it was fun to watch. I really do hope that one post about like $1.1,000 of tokens, and I have absolutely no idea, I really do hope that was actually posted by an agent and not by a person.
But at the end of the day, I don't
really care because I don't think any of
this is AGI. I don't think any of this
is some sort of reality. And I honestly
feel really, really bad for these slightly technical people that are just absolutely uploading their entire life and access to everything to an environment that is massively insecure, one that is going to absolutely destroy their life and potentially their livelihood. It is ridiculous. I am
begging you, please be careful. Yes,
these things are funny. Yes, you can
laugh at them from afar. But even the
creator of Moltbook and all of his
amazing technical architecture couldn't
even make the database secure for
multiple days running. Much less somebody who's never built anything, who's just going to upload everything to the LLM and give everybody access to everything.
Like, this can't be real. Please, people, I'm begging you. Funny experiment. Don't get caught up in the hype. It only took 72 hours for this thing to absolutely crash and burn. And it's worth it for you to
wait just a couple weeks to see the
result of things. Anywho, okay, I know normally I like to end with, like, a funny, you know... and I like to laugh at all the things, and I do still love the idea that Reddit moderators are actually the real Terminators out there. Well, unfortunately, it just turns out it's actually crypto bros. Crypto bros are the Terminators, okay? They're just out there trying to destroy you because all they want is your bitcoins. Okay, I don't make the rules. They're just
The name is the Primeagen. I kind of lost my voice there. Hey, the name is the Primeagen.
Hey, is that http? Get that out of here.
That's not how we order coffee. We order
coffee via ssh terminal.shop. Yeah, you
want a real experience. You want real
coffee. You want awesome subscriptions
so you never have to remember again. Oh,
you want exclusive blends with exclusive
coffee and exclusive content? Then check
out CRON. You don't know what SSH is?
[singing]
>> Well, maybe the coffee is not for you.
Living the dream.