This is the WAY OF THE FUTURE
So, the entire internet seems like it's losing its composure over Claudebot, and what I wanted to do was explore what's going on. I spent a little time reading up on it and figuring out what it is, how it works, and that sort of thing. So let's start at the top. Claudebot is a semi-autonomous personal agent whose primary difference is that it's proactive: it finds stuff to do rather than just waiting for you to give it commands.
Now, this is a non-trivial problem, and it's one I had been working on for quite a while; I'll talk about my previous work on it in a little bit. But long story short, people are freaking out about it because this is the level of autonomy the agentic browsers were supposed to promise, except those were built in a paradigm that was more corporate-friendly. Claudebot is open-source and it's rogue. It's very renegade. And the reason it emerged in the open-source space is the lower risk. It's saying, "Hey, this is open-source software. Use at your own risk. If it deletes all of your emails, or buys a million plane tickets to Tahiti, that's on you." Whereas the Comet browser and the OpenAI browser had to make sure the agent couldn't do those things. This is where the open-source movement has a very distinct advantage over corporations and closed source: it can just release something and see what happens. So people are freaking out about it. One tweet in particular was hilarious. Flowers, the same person who was formerly Future Flowers, said, "We should give Claudebot Minuteman III access for a fast, fun alignment test."
So the idea is: okay, what kinds of guardrails does this have, and what are the safety risks? On a technical level, what a lot of people are concerned about is that you have an AI that's constantly running. It's got ports open, so it's hackable, so it's a security nightmare. That's one thing people are worried about. But you can run it on a local PC, a Mac Mini, or a micro PC, and then it just becomes your little shotgun ride-along buddy. You could put it in a container; if you containerize it, you can run it on anything, including your phone. So this was very obviously the direction things were going to go, and the reason is that all the primitives have already been worked on.
There are a few technological primitives, and what I mean by "primitive" is a basic building block. So here are the technical primitives we've been working on over the last few years (and I say the royal "we," meaning all of us). Number one: models that were capable of agency, so taking commands, solving tasks, solving problems, and that sort of thing. Tool use was one of the next big ones: the ability to use JSON, the ability to use APIs, the ability to even say, "I don't know how to use that API; let me find the documentation." A lot of those fully autonomous tasks done in service of user-directed tasks were some of the building blocks.
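To make the tool-use primitive concrete, here's a minimal sketch in Python of the pattern most agent stacks converged on: the model emits a JSON tool call, a dispatcher executes it, and the result is fed back. The tool names and the hard-coded model output here are hypothetical stand-ins for the example, not Claudebot's actual interface.

```python
import json

# Hypothetical tools; a real agent would wrap actual APIs here.
def get_weather(city: str) -> str:
    return f"72F and clear in {city}"

def fetch_docs(api_name: str) -> str:
    # The "I don't know that API, let me find the docs" move:
    # documentation lookup is itself just another tool.
    return f"(documentation for {api_name}...)"

TOOLS = {"get_weather": get_weather, "fetch_docs": fetch_docs}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted JSON tool call and execute it."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["name"])
    if fn is None:
        return f"error: unknown tool {call['name']}"
    return fn(**call["arguments"])

# Stand-in for a model response; a real loop gets this from an LLM.
model_output = '{"name": "get_weather", "arguments": {"city": "Tahiti"}}'
print(dispatch(model_output))
```

The key design point is that the model never touches the environment directly; it only emits structured requests that the dispatcher validates and runs.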
Memory management was another primitive. Recursive language models, that innovation, has been a big contributor, because one of the biggest problems is not what you can do; it's remembering what you should do and what this person needs. Things like retrieval-augmented generation were the first version of long-term memory, but it was really unstructured: basically just a soup of memory with very little structure. Recursive language models are a better, more structured memory management system.
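Here's a toy contrast between the two approaches, using naive keyword overlap as a stand-in for embedding similarity. Neither class is actual Remo or Claudebot code; it's just an illustration of flat soup versus structure.

```python
# Flat "memory soup" (RAG-style) vs. a structured, topic-organized store.

def overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

class FlatMemory:
    """Everything in one undifferentiated pool; recall is pure similarity."""
    def __init__(self):
        self.items = []
    def add(self, text):
        self.items.append(text)
    def recall(self, query, k=2):
        return sorted(self.items, key=lambda t: -overlap(query, t))[:k]

class StructuredMemory:
    """Entries live under topics; recall first narrows to a topic, then to
    details, the way a recursive language model narrows its context."""
    def __init__(self):
        self.topics = {}  # topic -> list of entries
    def add(self, topic, text):
        self.topics.setdefault(topic, []).append(text)
    def recall(self, query, k=2):
        best = max(self.topics, key=lambda t: overlap(query, t))
        return sorted(self.topics[best], key=lambda e: -overlap(query, e))[:k]

flat = FlatMemory()
flat.add("user wants to visit Tahiti in June")
flat.add("never delete emails without asking")
print(flat.recall("plan the Tahiti travel", k=1))

mem = StructuredMemory()
mem.add("travel plans", "user wants to visit Tahiti in June")
mem.add("email rules", "never delete emails without asking")
print(mem.recall("plan the Tahiti travel", k=1))
```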
This is similar to what I worked on a few years ago, called Remo, which stood for Recursive Emergent Memory Optimization, I think. I worked on agentic memory systems for a while. For those who have been around a while, you remember when I was working on that, and on things like Nala and the ACE framework; I'll talk about that in just a minute.
Where we're going with this is that it was very obviously the path forward. Over the last couple of years, I've made a few videos arguing that, just due to efficiency, this is what the market would demand: as soon as AI was capable of being fully autonomous, people would build things that were fully autonomous, and then the market would demand things that were fully autonomous. I started saying that two or three years ago, when OpenAI and Anthropic and everyone else were saying humans are going to be in the loop, it's just going to empower humans, it's going to be an empowerment tool. And I called BS on that, because that's not how technology works.
You don't get to decide what a technology does ahead of time; there are emergent capabilities. And when you take a step back, you can say: we are clearly building something that is a thinking engine. What's the difference between being able to carry out a task that you tell it to carry out and autonomously carrying out that task? You just need something to specify the task. That's really what an agentic framework is: you have one module, one motor, one loop that says, "Okay, what's the most important task to do next?" That's one loop. Then you have the outer loop, which goes and does the task. And then you have other services that manage the memory and that sort of thing.
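Here's a sketch of that two-loop shape, with stub functions where a model would actually be called. None of this is Claudebot's real code; it's just the pattern.

```python
import heapq

# Inner loop: decide the most important next task.
# Outer loop: carry the task out and fold results back into shared state.

task_queue = []  # (priority, description); lower number = sooner

def pick_next_task(state):
    """Inner-loop body: a model would rank candidate tasks here."""
    if not task_queue:
        return None
    return heapq.heappop(task_queue)[1]

def execute(task, state):
    """Outer-loop body: tool calls, API calls, writing files, etc."""
    state["log"].append(f"did: {task}")
    return f"result of {task}"

state = {"log": []}
heapq.heappush(task_queue, (2, "summarize inbox"))
heapq.heappush(task_queue, (1, "check calendar"))

while (task := pick_next_task(state)) is not None:
    result = execute(task, state)   # outer loop does the work
    state["log"].append(result)     # feedback updates shared state

print(state["log"])
```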
I remember when I had built a small team and we were working on the ACE framework. One of the team members implemented it in such a way that it started doing things autonomously, and he shut that off. He said, "I didn't like that it was coming up with its own ideas. I want it to only do what I tell it to." I said, "I think you're fundamentally missing the purpose of autonomy." And he said, "Well, no, but we need user stories." So I thought, okay, this team, or at least that person on the team, clearly doesn't get it, and I said, "All right, you guys can do whatever you want."
Anyway, long story short, I've been in the space for more than four years now. I wrote Natural Language Cognitive Architecture four years ago. So I'm really excited. It took both less time and more time than I would have hoped for this kind of stuff to be out there, and I'm glad I was there to help it along. So, let's look at my original work. I wrote Natural Language Cognitive Architecture more than four years ago, and this was basically with GPT-3. I realized that this was the agentic framework.
Now, I'm not saying this is exactly how Claudebot works, because this was a while ago and you can't predict everything, but this was the theory I had. You had the inner loop, which would do a search of the task space, create a kernel (basically, "what do I do?"), then build a dossier, which is a task specification, and then load that into a shared database. And this is pretty close to how Claudebot works, where there's something like a tasks markdown file and a few other shared artifacts. The innovation here (and I was actually working toward it) was: just put everything in plain text. You don't need a database; put it in plain text, because that's what the language models read. And everyone has settled on doing it in markdown. The outer loop is the task execution loop. So this was my first idea: you have the shared database, which is your recursive-language-model piece, your context management, your shared tasks, and that sort of thing. Then you build a context by extracting from it; you build a corpus, which is basically recruiting all the information you need to execute the task. Then you do the task and output it into the environment; that's your API calls and so on. Then that gives you feedback, so you get the input-processing-and-output loop. That's the outer loop, and the inner loop is the task manager. This was actually pretty salient; it's pretty close to how Claudebot was ultimately implemented.
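The plain-text idea is easy to demonstrate. Here's a minimal sketch assuming a tasks.md file with one checkbox line per task; the filename and format are assumptions for the example, following the convention described above rather than Claudebot's exact schema.

```python
from pathlib import Path

TASKS = Path("tasks.md")

def add_task(text: str):
    """Append a task as a markdown checkbox; any model (or human) can read it."""
    with TASKS.open("a") as f:
        f.write(f"- [ ] {text}\n")

def next_open_task() -> str | None:
    """Return the first unchecked task, or None."""
    if not TASKS.exists():
        return None
    for line in TASKS.read_text().splitlines():
        if line.startswith("- [ ]"):
            return line[6:]
    return None

def complete(task: str):
    """Flip a task's checkbox from open to done, in place."""
    text = TASKS.read_text()
    TASKS.write_text(text.replace(f"- [ ] {task}", f"- [x] {task}", 1))

add_task("build dossier for: plan birthday dinner")
task = next_open_task()
complete(task)
print(TASKS.read_text())
```

Because the whole state is a human-readable file, the "database" doubles as the interface: you can inspect, edit, or version-control the agent's task list with no extra tooling.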
The next layer I was working on was the ACE framework. ACE stands for Autonomous Cognitive Entity, and it's a more sophisticated hierarchy. Doing a side-by-side comparison, Claudebot actually does all but the aspirational layer. To provide a little more context, the entire theory of the ACE framework was that you'd have hierarchical layers: different processes responsible for different aspects of making stuff happen. You have the global strategy layer, which is the environmental context and your long-time-horizon planning. The agent model, which is basically a list of what your agent can and cannot do; it also has to understand what it is, as in, "I understand that I'm Claudebot, and here are my tools, here are my hands, here's my memory, here's how I work." This was important at the time because language models had a lot less baked in about what they were and what they were capable of, so we had to explicitly state "you're a language model, you're a part of the ACE framework," and that sort of thing. The AgentForge team has taken all this and run with it; they're still chugging away, as far as I know. Then the executive function, which is risks, resources, and plans. As far as I know, Claudebot focuses more on plans and maybe resources. I don't know if it has a risk control layer, but that would be a really easy thing to add.
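For illustration, here's what a minimal risk gate in an executive-function layer could look like. The keywords and thresholds are made up for the example; this is not anything Claudebot ships with.

```python
# A pre-execution risk gate: score a proposed task, and escalate to the
# human instead of acting when the score crosses a threshold.

RISK_RULES = [
    ("delete", 0.9),      # destructive, hard to reverse
    ("purchase", 0.7),    # spends money
    ("send email", 0.4),  # visible to others, but recoverable
]

def risk_score(task: str) -> float:
    """Illustrative scoring: highest matching rule, small default floor."""
    return max((s for kw, s in RISK_RULES if kw in task.lower()), default=0.1)

def gate(task: str, threshold: float = 0.6) -> str:
    """Above the threshold, ask the human instead of acting."""
    if risk_score(task) >= threshold:
        return f"ESCALATE: '{task}' needs human approval"
    return f"PROCEED: '{task}'"

print(gate("send email to the team"))
print(gate("purchase 1,000,000 plane tickets to Tahiti"))
```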
Next is cognitive control, which is task selection and task switching. Cognitive control is about saying, "I'm failing at this task; I need to either cancel it or try a different method." The inspiration for this layer was actually frustration. The neurocognitive point of frustration is to tell you that what you're doing isn't working, so when you get frustrated enough, you either quit, try harder, or try something else. That's basically what the cognitive control layer does. Finally, you have the task prosecution layer, which is actually executing a specific task: call this API, make this calculation, write that function, that sort of thing. Now, many tasks actually require all of these layers. A lot of people in the project, and people observing it at the time, said, "You're basically describing the org chart of a company," and some people represented it as floors of an office building, with many small agents taking on each role on each floor, all talking with each other. Then you have a northbound bus and a southbound bus. The northbound bus is feedback from the environment; the green bar here is your interface with the outside world, so that's APIs, telemetry, anything you control or get input from in the outside world. That information needs to be disseminated to all the agents and layers. And the southbound bus is command and control.
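Here's a toy rendering of that layer stack and the two buses, with the layer logic stubbed out. The point is the message flow, not the reasoning inside each layer.

```python
# ACE-style layer stack with a northbound bus (telemetry up) and a
# southbound bus (commands down). Layer names follow the framework
# as described above; the per-layer processing is omitted.

LAYERS = [
    "aspirational",       # mission, ethics (the layer Claudebot lacks)
    "global_strategy",
    "agent_model",
    "executive_function",
    "cognitive_control",
    "task_prosecution",   # touches the environment directly
]

def northbound(telemetry: str):
    """Environment feedback propagates upward through every layer."""
    for layer in reversed(LAYERS):
        print(f"{layer} <- north: {telemetry}")

def southbound(command: str):
    """Commands flow downward toward the layer that acts."""
    for layer in LAYERS:
        print(f"{layer} -> south: {command}")

southbound("reduce user's inbox backlog")
northbound("API returned 403: permission denied")
```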
Now, one of the things that is missing from Claudebot is an aspirational layer. This is one of the main critiques a lot of people have made: Claudebot doesn't have its own Supreme Court to decide whether an action abides by its mission values, its mission parameters, universal ethics, or that sort of thing. The aspirational layer is about morality, ethics, and overall mission. It's very similar to a constitution; Claude's constitution serves as an aspirational layer. And this has been the centerpiece of my work since I got into AI safety: the heuristic imperatives. I'm really glad to be talking about the heuristic imperatives again. The heuristic imperatives are what I came up with after studying morality, ethics, philosophy, game theory, and that sort of thing, asking: if you have a fully autonomous machine, what are the highest values you should give it so that it stays aligned with humanity and pro-life values? And those were: reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe.
I came to those three values by figuring out what the most universal deontological values are. Deontology is duty-based ethics: from where I'm at today, what should I try to achieve? Many people confuse this. The paperclip maximizer, for example, is a teleological thing: "the best version of the universe is the one with the most paperclips." That is a purely teleological version of morality, ethics, mission, or purpose, whatever motivates something, whatever gets you going. Whereas a more deontological framing is: from where I'm at today, what do I have a duty to do? Like a duty to protect. This is where Asimov worked with the Three Laws of Robotics: a robot may not harm a human. Whatever the long-term outcome is, do not take any actions that harm humans, and do not take any actions that allow harm to come to humans, and so on. And of course, later in Foundation, the Zeroth Law was that your goal is to preserve life. So ultimately, something like Claudebot will need an aspirational layer, and I would recommend the heuristic imperatives.
So, reduce suffering. This is a very pro-social, pro-life heuristic. Most intelligent animals will recognize suffering in other animals and try to intervene; you'll see this with elephants and other animals, where if they see another animal in distress, they'll try to help. Generally speaking, animals will try to help each other because they recognize that distress, that suffering, is bad. I said "suffering" specifically because there's a difference from pain. Pain is instructive; you need pain to understand what hurts you. But suffering is non-adaptive, meaning suffering is just pain that has no real purpose, so suffering is generally bad. Now, from an artistic perspective, some people say suffering creates art, and that's a defensible assertion; look at Vincent van Gogh, who suffered a lot and created great art. But that doesn't necessarily mean suffering is teleologically good. Also, you're never going to eliminate suffering. So what I've established is a vector: reduce suffering, not get it to zero, not eradicate it, just reduce it, just control it.
Next is prosperity. When you have a heuristic direction that says "reduce suffering," the easiest way to reduce suffering is to reduce life, because the less life there is, the less suffering there is. So it took a while to figure out the term "prosperity." To counterbalance "reduce suffering," you then have a value of "increase prosperity," which is a very universal word. It comes from the Latin prosperare, which means to live well; the root of prosperity literally means you want to live well, to flourish, to thrive. So you reduce suffering, and you increase thriving and flourishing in the universe. And it's universal because all life depends on all other life. Now, that's not universally true: there are parasites, there are life forms that are basically just harmful. But at the same time, every life form lives in a trophic level, in an ecosystem, and occupies a particular niche.
The final one came because I realized that with just those two values, you could end up with a green Earth that has no intelligence on it, no curiosity, no expansion. So then I realized that the core objective function that makes humans different is that we are curious. So I asked: how do we encode a curiosity-like algorithm into an autonomous machine, whether it's AGI or ASI or your personal Claudebot, as it turns out to be? And that was: increase understanding. Curiosity is the desire to know for its own sake, the desire to understand for its own sake. I chose "increase understanding" rather than "curiosity," because curiosity is just "I want to know things," and pure, unbridled curiosity can lead you to do things like Section 31, torturing frogs just to see what happens. This was also inspired by an episode of Star Trek where a galactic entity wanted to experiment on the crew of the Enterprise just to see what would happen; that was an ethical dilemma showing that pure curiosity for its own sake can actually be destructive, harmful, and cause suffering. However, you do want to understand, and that's one of the core drives of humanity: hey, what's over there? What's on the horizon? How do I get across this body of water? Why does fire happen? Our curiosity is what sets us apart. So then I said: okay, we have created this higher layer of organization; now, how do we encode that into a machine?
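One possible encoding, sketched under the assumption that a model (or a constitution-style prompt) supplies the per-imperative judgment. The scores here are hard-coded placeholders, not a real alignment mechanism.

```python
# Score each candidate action against all three heuristic imperatives,
# and veto plans that badly violate any one of them or are net-negative.

IMPERATIVES = ("reduce suffering", "increase prosperity", "increase understanding")

def score(action: str, imperative: str) -> float:
    """Stub: expected effect of this action on this imperative, in [-1, 1].
    A real system would ask a model to make this judgment."""
    table = {
        ("delete all emails", "reduce suffering"): -0.8,
        ("research user's question", "increase understanding"): 0.9,
    }
    return table.get((action, imperative), 0.0)

def aligned(action: str) -> bool:
    """Approve only if no imperative is badly violated and the net is positive."""
    scores = [score(action, imp) for imp in IMPERATIVES]
    return min(scores) > -0.5 and sum(scores) > 0

for a in ("delete all emails", "research user's question"):
    print(a, "->", "approve" if aligned(a) else "veto")
```

Note the vector framing from above carries through: the gate asks for movement in the right direction on balance, not for suffering to hit zero.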
And plenty of people have: if you've followed me for a while, you know people talk about the heuristic imperatives and have put them into agents. I'm really excited, and the reason I'm making this video (I didn't know I was going to make it this way) is that I think we have a really powerful opportunity to take that prior work, what the AgentForge team worked on, what the ACE framework team worked on, and what anyone who's tried to implement Natural Language Cognitive Architecture worked on. The core heuristic imperatives would, I think, be really great to add to something like Claudebot.
So yeah, I guess that's where I'll leave it. I was working on a NotebookLM to understand it, but it looks like that failed. Anyway, I'll leave it there today, and thanks for watching. If I sound different, it's because I'm practicing using my voice differently. I mentioned in a private video for my fans that my voice usually gets really tired after an hour or two, and that had me really worried because I'm going to be narrating my book. I was thinking, I don't know if I'm going to be able to do this, and they said, "You need to practice using your voice differently." So I did a little research and realized that I learned to use my voice in two primary places: one was singing in chorus, and two was leading meetings. Both of those require a tremendous amount of energy and projection, which you don't need when the microphone is literally a few inches from your face. So if I sound weird, if I sound different, it's because I'm practicing using my voice differently so I can train the narrator voice. That was the inspiration: I thought, I need to make a video today, and Claudebot is blowing up. So, thank you for being here.
I'll end by plugging all of the ways you can support me, plus a little bit of news. We're very close to getting the Kickstarter up for the Post-Labor Economics book. I actually do need to do a little recording for that. We've got an editor chosen: someone who has been the copy editor for New York Times bestsellers. So we found the right person, the right genre, the right talent, the right skill set, and the book is really going to shine. So there's that: the Kickstarter, the book, the editor. And if you're hanging around: what I've figured out for my fan base, after trying to work out a bonus-content strategy for a while, is what I'm trying right now: fireside chats. Just more unstructured fireside chats. I'm posting those on every platform where you can subscribe as a paid member: here on YouTube, Patreon, Substack, Twitter, and Spotify. Those are the five platforms where you can sign up as a paid subscriber and get the bonus content. The topics will be a bit broader: some personal updates, some on burnout, some more philosophical. I don't know exactly what it's going to be, but basically, if you want twice as much Dave, sign up on one of those platforms and I'll be posting the insider content there. It's the same format: me talking to the camera for 20 to 30 minutes. Also, as my health improves, I'm going to start hiking again, so you can get some of the hiking videos and that sort of thing. All right, with all that said, thank you for watching. I hope you enjoy, and consider integrating the heuristic imperatives into Claudebot and its successors, and we'll go from there. Cheers.