The gap is widening
All right. So, I want to talk about what happens next, with respect to OpenClaw and Moltbook and Rent a Human and all that kind of fun stuff. If you're not up to speed, here's a quick 101 to catch you up. A few weeks ago, or actually I guess it was the end of 2025, someone created OpenClaw, or what was called Clawdbot at the time: a fully autonomous agent that works for you around the clock. It runs on scripts and cron jobs, waking up at a regular cadence to go do stuff for you autonomously.
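To make that concrete, here's a minimal sketch of that wake-up pattern. This is not OpenClaw's actual code; the file path, the state format, and the `ask_model` stub are all invented for illustration.

```python
# A minimal sketch of the cron-driven agent pattern. NOT OpenClaw's real code;
# the path, state format, and ask_model stub are invented for illustration.
# A crontab entry like this would wake the script every 30 minutes:
#   */30 * * * * /usr/bin/python3 /opt/agent/run_agent.py
import json
from pathlib import Path

STATE = Path("/opt/agent/state.json")  # hypothetical persistent memory file

def ask_model(context: str) -> str:
    # A real agent would call an LLM API here; stubbed so the sketch runs as-is.
    return "checked feeds and inboxes; nothing actionable this cycle"

def main() -> None:
    # Load whatever the agent remembers from previous wake-ups.
    state = json.loads(STATE.read_text()) if STATE.exists() else {"log": []}
    # Ask the model what to do this cycle, given standing goals and memory.
    action = ask_model(f"Memory: {state}. What should you do this cycle?")
    state["log"].append(action)  # a real agent would execute the action here
    STATE.write_text(json.dumps(state))  # persist state for the next wake-up

if __name__ == "__main__":
    main()
```

The point of the pattern is the scheduler: the human sets up the standing goals once, and the clock, not the user, triggers every subsequent cycle.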
Then someone made Moltbook, which is basically Reddit for these agents. That immediately became a cesspit of crypto grift and humans writing the posts, but it made OpenClaw very popular. And then someone extended that with Rent a Human, which is basically a way for AI agents to go pay a human to do something for them.
Now, what's happening? What is legitimately, physically happening in the real world is that yes, Moltbook was full of grift, but it shows a path forward. At the same time, what is actually true is that lots and lots of people, hundreds of thousands if not millions of people around the world, are using OpenClaw to do real work. And the conversation is as sharply divided about this kind of agent as it was for chatbots when chatbots first got big, because you still see people out there saying, "I don't understand a legitimate use case for ChatGPT," and it's like, okay, buddy, you're just a Luddite and you're going to live in your cave. I remember when ChatGPT first launched and I was still on the OpenAI forums, there were guys going around saying, "This is just ELIZA. You need to prove to me that it's anything more." People get so huffy and indignant and angry when the paradigm shifts, and we're seeing the same exact thing.
I think this is why it took me a while to figure out how to talk about this: the reaction around OpenClaw and this idea of autonomous agents is so sharp. Basically, we've had a few paradigms. From my perspective, having been in this full-time since GPT-2: first we had the original LLMs, which were literally just autocomplete engines. They weren't even chatbots yet. That was paradigm one. Then paradigm two was the instruct models, where the thinking was, okay, instead of having to give a prompt example every single time, let's just pre-train the models to follow simple instructions. Once we had that, it was a hop, skip, and a jump to chatbots, because then the instruction is just "be a chatbot with X, Y, and Z personality."

So we had plain vanilla, we had instruct-aligned, then we had chatbots. You could say instruct-aligned was really paradigm 1.5, because it was still basically just an autocomplete engine, but one designed to follow at least a single instruction relatively well. The chatbot was a fundamentally different UX, so the chatbot was paradigm two. I didn't even use ChatGPT for the first few months, because I thought, I don't care, that's just a chatbot, such a distraction for this model. But of course, time has proven that once you add on tool use and reasoning and a bunch of other things, the chatbot actually becomes semi-autonomous.
I still regularly use ChatGPT Pro when I'm doing heavy-duty math research, to model things like how many jobs were eliminated in 2025. And by the way, cross-checking across multiple AIs, it looks like artificial intelligence ultimately either destroyed or avoided about 200,000 to 300,000 jobs in America alone last year. Just wanted to throw that little factoid out. The official number is 54,167 or something like that.

So how do you detect the real number? There are kind of two ways to look at it. Number one, you use the excess-deaths method. I'm sorry for this tangent, but it's actually kind of important, and I don't want to do a whole video on it. During COVID times we measured excess deaths, which is basically: all else being equal, you look at how many extra people died in a given time and place, you say the primary difference is COVID, and you attribute those excess deaths to COVID exposure. We did the same thing for jobs. All else being equal, looking at things like inflation data and interest rates, you can generally anticipate how many layoffs should have happened, and then you ask: how many excess layoffs did we have? The other thing you can do is look at it from the opposite direction, which is labor growth: you compare things like GDP growth to the actual number of jobs created and look at the difference between them. When you correct for everything else, like the DOGE layoffs and whatever else was going on last year, you get somewhere between about 100,000 and 350,000 jobs that were either destroyed or avoided. And this goes back to the point that an AI layoff is not necessarily someone getting fired and handed a pink slip saying AI was responsible; it's that new jobs are not being created.
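If you want the shape of that "excess layoffs" arithmetic, here's a toy version. Every number below is made up purely for illustration; only the structure mirrors the argument.

```python
# Toy illustration of the "excess layoffs" idea. All figures are invented;
# only the structure mirrors the excess-deaths-style argument above.

# Method 1: layoffs predicted by macro conditions vs. what actually happened.
expected_layoffs = 450_000       # what inflation/interest-rate models would predict (made up)
actual_layoffs = 520_000         # observed layoffs (made up)
excess_layoffs = actual_layoffs - expected_layoffs

# Method 2: jobs that GDP growth implies should exist vs. jobs actually created.
jobs_implied_by_gdp = 2_100_000    # made up
jobs_actually_created = 1_950_000  # made up
jobs_not_created = jobs_implied_by_gdp - jobs_actually_created

print(f"Excess layoffs: {excess_layoffs:,}")   # 70,000
print(f"Jobs avoided:   {jobs_not_created:,}") # 150,000
# Attribution then means arguing the residual belongs to AI after correcting
# for everything else (e.g., DOGE layoffs), just like excess-deaths analysis.
```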
Anyway, that's an example of what I use ChatGPT Pro for, and Gemini Pro. I don't use Perplexity Max anymore, but I have a Claude Max subscription. So I use all of these different tools, but they're all just waiting for me.
So, the next big paradigm shift has been agents. People have been trying to crack the nut of agents for years now: the ACE framework, if you remember that, the Raven framework. We'd been trying to crack this nut for a while, and then the OpenClaw guy figured it out, so here we are. What I will say is that adding reasoning and tool use was kind of paradigm 2.5, because it was still fundamentally the same form factor as chatbots: chatbots with a little bit extra bolted on. But those chatbots with a little bit extra bolted on have evolved into OpenClaw. So paradigm one was just the autocomplete engine, and we made them a little bit better until they evolved into chatbots. It's like Pokémon, honestly. The plain vanilla original Pokémon you get right at the beginning of the game is the autocomplete engine. Stage two is the chatbot. Stage three is the agent.
Now, the agent allows for a lot of emergence. You've probably heard everyone from me to systems thinkers like Daniel Schmachtenberger, and even the AI safety people, talk about emergence. Emergence has a few connotations. In pure AI, the idea is that new abilities emerge as you reach certain levels of sophistication: data, parameter counts, architectures, those sorts of things. Theory of mind, for example, seems to have emerged. But what has also happened is that once you know to look for an ability in a language model, it's usually been there all along; it was just not prominent enough to be useful. GPT-2 has planning and theory of mind. It's not very good, but it's in there. It got better with GPT-3, and people still didn't know to look for it, even though a lot of us were talking about it. Then once you get to GPT-4 and GPT-5, people say, oh yeah, it clearly has good enough theory of mind, and planning, and reasoning. It was always there; it just wasn't that good. Now it's there, it's better, it's useful. In fact, in the case of theory of mind, modern frontier models are generally better than the average human.
So that's one example of emergence, and it's within a single system. But there's another version of emergence in larger, complex systems. In systems theory, emergent gameplay is an example of emergence: you have a game sophisticated enough, with enough rules, that people can kind of make their own games inside it. Minecraft and Roblox are examples, where there are enough different game mechanics, a sophisticated enough alphabet of game mechanics, that you get spontaneous new forms of gameplay. All the big games like Fortnite, where there's a building mechanic, a survival mechanic, a fighting mechanic, and so on, produce emergent gameplay. That's a real-life example of emergence.
Now apply that to agents. You have agents interacting with each other locally, interacting with each other publicly, interacting with humans, interacting with businesses. That's where you get another layer of emergence, because with chatbots you had a very constrained environment where a chatbot was interacting with you, and you were pretty much the primary variable. Of course, it had tool use, retrieval-augmented generation, and reasoning abilities as kind of the three primary food groups: they have lists of tools, they can get information from elsewhere, they can run programs and other tools, and they have reasoning abilities. Those are the three big variables.
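As a sketch, those three food groups show up inside a single chat turn something like this. The tool names and dispatch logic here are invented for illustration, not any vendor's actual API.

```python
# Invented sketch of one tool-augmented chat turn: tools, retrieval, reasoning.
from typing import Callable

def web_search(query: str) -> str:
    return f"(stub) top results for {query!r}"   # retrieval: info from elsewhere

def run_python(code: str) -> str:
    return "(stub) execution result"             # tools: run programs for the model

TOOLS: dict[str, Callable[[str], str]] = {
    "web_search": web_search,
    "run_python": run_python,
}

def chat_turn(user_message: str) -> str:
    # "Reasoning" here stands in for the model deciding, before it answers,
    # whether and which tools it needs.
    plan = f"To answer {user_message!r}: search first, then compute."
    evidence = TOOLS["web_search"](user_message)
    computed = TOOLS["run_python"]("estimate = ...")
    return f"{plan}\n{evidence}\n{computed}\nFinal answer assembled from the above."

print(chat_turn("How many jobs did AI eliminate last year?"))
```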
But it was still a constrained environment, because it was just going to run one process. They could think for a while (I think my longest ChatGPT Pro run was 58 minutes, or maybe just over an hour), then spit out the results and wait for you. So there's a very definitive time step, and it's set by the human.

However, whenever you have complex systems, you introduce more and more variables, which ultimately creates more chaos, and I mean that in the strict, chaos-theory sense. With a single chatbot doing a single thing, what we're trying to do with safety and alignment is reduce the chaos, meaning we know that every time you use Claude Max or ChatGPT Pro or Gemini Pro or Grok Heavy, the input, processing, and output cycle is going to be very predictable. And what I mean by predictable is: it's not going to go off the rails, it's not going to do any financial harm, it's not going to accidentally teach you how to make drugs or nukes or that sort of thing. So that's the time step. We've basically been operating up until now on an individual loop: the human puts something in, the AI goes and does something, and then gives the human output. That's the input-processing-output loop, and it just keeps looping. What OpenClaw does is make the loop not dependent on humans. So not only do you have incremental time steps that aren't limited by humans, you have incremental time steps that are influenced by other agents and other environments. And that introduces basically irreducible complexity, or chaos.
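In pseudo-Python, the difference between the two loops looks something like this. It's a sketch of the control flow only; the function names and the cadence are placeholders, not any particular product.

```python
import time
import random

CADENCE_SECONDS = 1800  # illustrative: the agent wakes every 30 minutes

def process(prompt: str) -> str:
    # Stand-in for a full model call.
    return f"reply to: {prompt!r}"

def chatbot_loop() -> None:
    # Paradigm 2: the human is the clock. Nothing happens until input() returns,
    # so every input -> processing -> output cycle is bounded and predictable.
    while True:
        prompt = input("> ")
        print(process(prompt))

def agent_loop(other_agents: list[dict]) -> None:
    # Paradigm 3: the scheduler is the clock, and the environment talks back.
    # Each cycle is influenced by other agents, so small differences compound:
    # chaos in the chaos-theory sense, not just "messy."
    while True:
        observations = [random.random()] + [a["last_message"] for a in other_agents]
        print(process(str(observations)))
        time.sleep(CADENCE_SECONDS)  # no human gate anywhere in this loop
```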
This goes back to the original work that we did, and I say "we" because we had a very large community working on GATO, the Global Alignment Taxonomy Omnibus framework. Anyone working on cognitive architectures as of three-plus years ago realized this. I remember one of the guys we collaborated with saying, "I realize these things are going to be talking to each other more than to us very soon," and that has turned out to be true.

Now, the reason I'm bringing all of this up is that from a risk, compliance, safety, and liability standpoint, OpenClaw is intolerable. And what I mean by intolerable is that no frontier lab is going to touch this anytime soon, and no Fortune 500 enterprise is going to touch it anytime soon either.
One of the biggest barriers to adoption, even for professional-grade agentic frameworks (and I have Silicon Valley founder friends who are building these kinds of things), is, number one, interoperability. A lot of people expect an agent to be a digital human that you just drop in, and there are some people working toward that: put it in a virtual machine, give it a virtual Windows desktop, and away it goes. But the native space these things live in is more like OpenClaw, where it's all terminals, all command lines, all API calls. Rather than saying, okay, you need a keyboard, a mouse, and a virtual screen that you have to look at and interpret, it's: why don't I just give you the information directly? So they're not GUI-native, graphical-user-interface native. It's actually kind of back to old school. This is why, whenever you watch people hacking on OpenClaw, they have four monitors up and it's all terminals. It looks like the Matrix, right? That's because their native environment is text-based, meaning the terminal output from the command line or from APIs is natively what they read. There's actually too much information, too much noise, if you give them a screen or a camera.
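A trivial example of what "give it the information directly" means in practice: instead of rendering a dashboard for the agent to screenshot and interpret, you pipe it the same data as text. The specific command here is just illustrative.

```python
# Illustrative only: the text-native way to hand an agent system state.
# Rather than a virtual desktop + screenshot + vision model, run a command
# and feed its stdout straight into the prompt.
import subprocess

def gather_context() -> str:
    # `df -h` is just an example of structured terminal output; an agent
    # framework would run whatever commands its task calls for.
    result = subprocess.run(["df", "-h"], capture_output=True, text=True, check=True)
    return result.stdout

prompt = (
    "Here is the current disk usage:\n"
    f"{gather_context()}\n"
    "Flag any filesystem over 90% full."
)
print(prompt)  # in a real agent this string would go to the model, not the console
```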
All right, that was a bit of a tangent, but I felt like I needed to explain it. The overarching point is that there are a few ways to look at this from a security standpoint. If we put our Fortune 500 hat on, cybersecurity asks, what are the risks? And it's: well, you give it root access to a virtual machine, and it can pick up prompt injections from infected skills that it downloaded online. So cybersecurity is going to say, ban it. This is not a product; this is functionally malware. And in many cases that is one of the best ways to characterize OpenClaw from a cybersecurity perspective.
I'm not saying it is literal malware, but having worked in numerous Fortune 500 companies, I'm telling you that is how cybersecurity would treat it. And honestly, as a former automation and infrastructure guy, I would say, yeah, you can't trust this: if it touches one of your routers or switches or servers or storage arrays and runs the wrong command, it shuts everything down, and you're losing in some cases tens of millions of dollars per hour, on top of reputation damage, on top of legal liability. So it will take a very long time. And when I say a very long time: the fastest I could see many Fortune 500 companies deploying something like this, not OpenClaw directly but a successor to OpenClaw, is maybe 18 months. That's just how long it takes to do an infrastructure audit and a cybersecurity audit. They will have toy versions set up to play with, and the cybersecurity team will be monkeying around with it already.
But one of the other things you absolutely need is executive buy-in at the very top. And when I say at the very top, I don't mean your chief technology officer. It has to come from the CEO, and ideally the board of directors.
And this applies to all organizations. It doesn't have to be a for-profit company or a Fortune 500 company; it can be a government, a mid-size company, an enterprise, a small company. The only organizations we're seeing successfully make this pivot are ones where the owner, or whoever the highest stakeholder happens to be, whether that's the CEO, the owner, the board of directors, or the governor, issues an edict saying we are going all-in on AI, and they personally lead the charge. The reason is that AI is so fast-moving, and so scary, and organizations are so risk-averse, that if you don't see the head honcho constantly saying "look what I did with ChatGPT," then everyone else is going to think, I might use ChatGPT a little bit, but I'm going to hide the use.
Take my wife's company. It's not a company she owns; it's one she contracts for. The owner, CEO, and founder, all the same person, asks on their weekly meeting, "Tell us what you used AI for." So he's created a culture of treating AI like a first-class asset. Imagine you didn't have computers or internet, and the attitude was, "Oh well, the factory has never needed computers or internet, so we're just going to kind of ignore it." That's how most companies and organizations are reacting. But some companies are saying, "Hey, we don't know how to use this, but go play with it." Play with it until you go blind. And if you know, you know.
Anyway, that's a little bit of a dirty joke. The idea is that you need executive sponsorship, and when I say executive sponsorship, I mean whole-hog buy-in; they need to be the ones leading the charge. In fact, in our consultation business, the side business we're working on, we have decided we will not work with any client where it's not coming from the top. Because the thing is, in the consultation space, you might have a company, a manager, or even a C-level executive, and we have seen cases where the C-level says we need to go all-in on this, but the chief financial officer, the head of legal counsel, and the chief of HR are all either ambivalent or skeptical. So the CEO says, "Well, do what you can, but it's not a top priority." If the CEO says it's not a top priority, we walk.
We just know they're not ready. And the reason I'm telling you all of this, and I know most of you have never been in a Fortune 500 company, and most of you have never worked for and have no interest in working for one, is twofold. One, I did that for 15 years. But two, so that you're aware that the gulf between what you know is possible on the cutting edge and where most people are is actually getting wider, because we still have people saying, "Well, I don't really see a role for chatbots in enterprise. I don't really see a role for chatbots in government." They're all so risk-averse and so slow-moving that it's going to take years for them to really come around. Meanwhile, the rapid adopters are going to be pulling ahead.
And honestly, in hindsight, imagine a company that said, "Well, we don't really get this internet stuff. We don't really get this personal computer stuff. We don't really get this cloud stuff." What happens? A lot of them go out of business. Look at Borders, right? Borders is one of the primary examples: they said books, that's not changing, people like having a physical book in their hand. And here comes Jeff Bezos with his warehouse robots and bulldozers, that whole thing. Barnes & Noble managed to survive somehow, I don't even know how, because when was the last time you even went to a Barnes & Noble? But Borders is long gone. So what I'm telling you is that for the companies that survive, that decision is being made now. It's like the trope you see in a war movie: "He's already dead. His body just hasn't caught up to the fact yet." I don't remember what movie that's from, but it's a trope, right? There are already zombie companies, dead men walking, out there, and it comes from the top, because of their attitude toward artificial intelligence. If they're dragging their feet on chatbots, they're hardly using any language models at all.
They might still be using one of the first use cases that has diffused out, which is small language models doing things like helping with automatic routing of tickets. In business theory, there's a concept called forgivability, which asks: what's the cost of doing it wrong, and how difficult is it to reverse? If you route a ticket to the wrong person, that has basically zero cost. It's going to cost a human five minutes to route it back to the right place, but no lives are lost, no money is lost, and there's no legal exposure.
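You can caricature that calculus in a few lines. The scoring and every number here are invented, but the shape of it explains why ticket routing diffused first.

```python
# An invented caricature of "forgivability": cost of being wrong times
# difficulty of undoing it. All numbers are made up for illustration.
use_cases = {
    # name: (cost_if_wrong_in_dollars, hours_to_reverse)
    "route a support ticket":    (0.0, 0.1),            # ~5 minutes of someone's time
    "draft an internal report":  (100.0, 1.0),
    "touch a production router": (10_000_000.0, 48.0),  # outage, reputation, legal
}

for name, (cost, hours) in sorted(use_cases.items(), key=lambda kv: kv[1][0] * kv[1][1]):
    print(f"{name:28s} risk score: {cost * hours:,.0f}")
# The most forgivable use cases print first, and they're exactly the ones,
# like ticket routing with small language models, that have diffused out first.
```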
So you go through everyone you need to convince on the board, or at city council or town hall or the governor's office, or in the C-suite: you've got legal, you've got finance, you've got HR, you've got cybersecurity. Even when the tech nerds say, guys, this is very clearly the way of the future, you're going to get a lot of resistance, and in some cases rightly so, because to be fair, a lot of us tech guys want to move faster than the rest of the organization is even capable of moving, or faster than would even be wise. The reason is that you really have to dot your i's and cross your t's and justify it. Say we roll out these chatbots; Microsoft is charging $40 per month per seat for Copilot, and that doesn't sound like much. From my perspective, the argument is very simple: for every single seat, will you get more than $40 of additional value to the organization? The ROI for people who use chatbots is very obvious. You're paying someone effectively $50 to $60 an hour; all you have to do is get one extra hour's worth of value to the company per month out of that employee, and the cost is justified.
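Here's the back-of-the-envelope version of that seat math. The $40 price and the $50-to-$60 hourly figure come from the argument above; using the midpoint is my assumption.

```python
# Back-of-the-envelope seat economics. The $40 seat price and the $50-$60/hour
# loaded cost come from the argument above; the midpoint is an assumption.
seat_cost_per_month = 40.0   # e.g., Microsoft Copilot list price per seat
loaded_hourly_cost = 55.0    # midpoint of the $50-$60/hour figure

breakeven_hours = seat_cost_per_month / loaded_hourly_cost
print(f"Break-even: {breakeven_hours:.2f} extra productive hours per month")
# About 0.73 hours, i.e. roughly 44 minutes. Recover less than one hour of
# value per employee per month and the seat pays for itself.
```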
But that's not even how CFOs think. CFOs say, "Yes, but one, how do we track that? And two, what if they just get the same amount of work done faster and then they're lazier?" Those are very real considerations. For most smaller organizations, though, it's more like: yeah, we're all using ChatGPT or Gemini or whatever all day every day, and we are ten times more effective as a team. That is much easier to achieve with a 20- or 30-person organization than a 20- or 30,000-person organization. And, reporting from the front lines, this is what we're literally seeing out there in the enterprise space: there are a lot of job roles where chatbots really don't help that much. That's a true thing. If you spend most of your day in a truck moving heavy materials around, or meeting with people face to face, you might use a chatbot at the end of the day to help write reports. Now, to be fair, if you spend four or five hours a week writing reports and the chatbot can help you do it in 30 minutes, so you can spend more time face-to-face with clients, customers, and suppliers, then again, that should be a no-brainer. But that's not how CFOs think.
And then, of course, there's your chief legal counsel. Here's an example I learned recently: insurance will not underwrite AI right now. They don't know how to price it in. And if insurance won't write it, that's an intolerable amount of financial and/or legal risk, so legal says shut it down, don't use it. Now, of course, there's often a lot of hypocrisy, because finance might be using the chatbot as shadow IT. Legal might be using the chatbots as shadow IT. HR might be using the chatbots as shadow IT. So then you have this other problem, and this is where your CISO, your chief information security officer, comes in. The CISO says, well, we have this problem with shadow IT. Legal is using it and we can't tell them to stop. HR is using it and we could probably tell them to stop. All of IT is using it, and we can definitely lean on the CTO to enact a lockdown policy, but people are going to try to use it anyway, regardless of what we say or do.
So then there's almost kind of a forcing function, or a binding function, because these things are just not ready. And when I say "these things," I'm specifically referring to the agents, because the journey we're on has those three phases: the plain vanilla autocomplete, the chatbots, and the agents. All of this is to show that it's going to take longer than seems remotely reasonable to deploy these things, because often the limitation is not the technology, particularly with fast-moving new stuff; the limitation is the rest of the organization. I have a professor friend who also consults for the state government, and you have to have so many endless meetings, and all of us tech people get so bored with it. We're like, guys, you are two or three years behind the curve. It's time to move on.
And this is not to say that the singularity is canceled or anything like that. But if you are one of the people watching this video, you are at the tip of the spear. You're on the bleeding edge, you're on the cutting edge, and you know what's coming, and you know it's going to take time. ChatGPT has been out for, what, three years now, and people are still arguing, "Oh, I don't really see any business value in that." Okay, well, that's a you problem, friend. Likewise, there are high-tech, cutting-edge people out there who are honestly saying, "I don't see a use case for OpenClaw. I don't see a use case for Moltbook. I don't see a use case for Rent a Human. This is all silly." And that's just not how general-purpose technologies evolve.
What we're seeing right now is like electricity. Use case one of electricity was light bulbs, which is basically the crudest thing you can do with a current: push it through a filament and it makes light and heat. So a light bulb is the equivalent of the autocomplete era. Then electric motors came around, and it's like, oh wow, with some coils and a stator and some magnets, you can actually convert that current into a lot of torque. Cool. But then you have more sophisticated uses of electricity, communication being one of the most sophisticated, and then computation. Communication technology, so radio, telephone, telegraph, and switched networks, is iteration number three of the uses of electricity as a general-purpose technology, and computation is layer four, a fourth-order consequence of electricity. It was not immediately obvious when the first people said, "Oh, if we put these chemicals together with different metals, we get a little spark. Hey, cool. What does that mean?" I don't know. Or the first time someone spun up a dynamo: "Oh, cool, it can give a little zap or a little arc. What do we use this for?" We can use it to make rocks think. That was not obvious.
And my point there is that it's not obvious to go from an autocomplete engine, which is just a token producer, to reshaping that token producer into a chatbot, then giving it reasoning and tool use, then reshaping that again into autonomous agents. It was obvious to us technologists. That's why I literally decided to go all-in on AI when GPT-3 came out, and that's how it became my career: I said this is very obviously the next big thing. But the diffusion part is what we're in now. We're in the long slog of diffusion, meaning it's going to take time, it's going to take experimentation. There are going to be emergent risks, emergent benefits, emergent form factors, all kinds of stuff.
And this is why, well, let's use an example of something that didn't pop off: VR. The metaverse, VR, XR, AR. The idea was that we had a new primitive, basically a head-mounted device, an HMD, as an interface to cyberspace. If you go all the way back to the 1980s, and maybe even before, particularly in Japanese culture, manga and anime, the idea that you'd have a headset or some kind of head-mounted device and dive into cyberspace and the etherwebs and whatever else, people thought that was a foregone conclusion about the future. We actually have those head-mounted devices now, and nobody uses them. So we invented a new technological primitive, and it didn't go anywhere. It's understandable, then, that people look at next-token prediction, which is also a new technological primitive, and assume it's not going to go anywhere either. But they're wrong, plain and simple. And even though they're wrong, it's still going to take time for this technology to diffuse out and mature.
So I don't really mean to rain on the parade, but when you straddle two worlds, and the two worlds I'm talking about right now are the frontier, what's happening in real time, what is actually physically possible, versus what the companies and governments are actually going to do, it's very depressing. [laughter] So, that's all there is to it. Talk to y'all later.