AI AGENTS DEBATE: These Jobs Won't Exist In 24 Months!
I think a lot of people don't realize
how massive the positive impact AI is
going to have on their life. Well, I
would argue that the idea that this AI
disruption doesn't lead us to human
catastrophe is optimistic. For example,
people are going to be unemployed in
huge numbers. You agree with that, don't
you? Yes. If your job is as routine as
it comes, it's gone in the next couple
years. But it's going to create new
opportunities for wealth creation. Let
me put it to you this way. We have
created a new species and nobody on
earth can predict what's going to
happen. We are joined by three leading
voices to debate the most disruptive
shift in human history, the rise of AI.
And they're answering the questions
you're most scared about. This
technology is going to get so much more
powerful. And yes, we're going to go
through a period of disruption. But at
the other end, we're going to create a
fair world. It's enabling people to run
their businesses, make a lot of money,
and you can solve meaningful problems
such as the breakthroughs in global
healthcare and education will be
phenomenal. And you can live an
incredibly fulfilling existence. Well, I
would just say on that front, this has
always been the fantasy of technologists
to do marvelous things with our spare
time, but we end up doom scrolling,
loneliness epidemic, right? Falling
birth rates. So, the potential for good
here is infinite and the potential for
bad is 10 times. For example, there's
war, undetectable deepfakes, and scams.
So, people don't understand how many
different ways they are going to be
robbed. Look, I don't think blaming
technology for all of it is the right
thing. All these issues, they're already
here. You're all fathers here. So, what
are you saying to your children? Well,
first of all, this has always blown my
mind a little bit. 53% of you that
listen to this show regularly haven't
yet subscribed to the show. So, could I
ask you for a favor before we start? If
you like the show and you like what we
do here and you want to support us, the
free simple way that you can do just
that is by hitting the subscribe button.
And my commitment to you is if you do
that, then I'll do everything in my
power, me and my team, to make sure that
this show is better for you every single
week. We'll listen to your feedback.
We'll find the guest that you want me to
speak to and we'll continue to do what
we do. Thank you so much.
[Music]
The reason why I wanted to have
this conversation with all of you is
because the subject matter of AI, but
more specifically AI agents, has
occupied my free time for several weeks
in a row. And actually, Amjad, when I started using Replit, for me it was a paradigm shift. There were two paradigm
shifts in a row that happened about a
week apart. ChatGPT released their
image generation model where you could
create any image. It was incredibly
detailed with text and all those things.
That was a huge paradigm shift. And then
in the same week I finally gave in to
try and figure out what this term AI
agent was that I was hearing all over
the internet. I heard vibe coding. I
heard AI agent. I was like I will give
it a shot. Mhm. And when I used Replit, 20 minutes into using Replit, my mind
was blown. And I think that night I
stayed up till 3 or 4 a.m. in the
morning. For anyone that doesn't know,
Replit is a piece of software that
allows you to create software. Mhm. And
pretty much any software you want.
So someone like me with absolutely no
coding skills was able to build a
website, build in Stripe, take payment,
integrate AI into my website, add Google
login to the front of my website and do
it within minutes. I then got the piece
of software that I had built with no
coding skills, sent it to my friends,
and one of my friends put his credit
card in and paid. Amazing. So I just
launched a SaaS company with no coding
skills.
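For readers curious what "build in Stripe, take payment" boils down to under the hood, here is a minimal, hedged Python sketch of the kind of checkout endpoint such a tool scaffolds. It is illustrative only, not the code generated in the episode; the product name, price, and URLs are placeholder assumptions.

```python
# Hypothetical sketch of a Stripe Checkout endpoint, of the kind an AI coding
# agent might scaffold for "build in Stripe, take payment". Not the code from
# the episode; product name, price, and URLs are placeholders.
import os

import stripe
from flask import Flask, redirect

stripe.api_key = os.environ["STRIPE_SECRET_KEY"]  # keep secrets in env vars
app = Flask(__name__)

@app.route("/checkout", methods=["POST"])
def checkout():
    # Create a hosted Stripe Checkout session for a single example product.
    session = stripe.checkout.Session.create(
        mode="payment",
        line_items=[{
            "price_data": {
                "currency": "usd",
                "product_data": {"name": "Example SaaS plan"},
                "unit_amount": 2900,  # $29.00, in cents
            },
            "quantity": 1,
        }],
        success_url="https://example.com/success",
        cancel_url="https://example.com/cancel",
    )
    # Send the buyer to Stripe's hosted payment page.
    return redirect(session.url, code=303)

if __name__ == "__main__":
    app.run(port=5000)
```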
To demonstrate an AI agent in a very simple way, I used an online AI agent called Operator to order us all some
water from a CVS around the corner. The
AI agent did everything end to end and
people will be watching on the screen.
It put my credit card details in and it picked the water for me. It gave the person a tip. It put some delivery notes in. At some point a guy is going to walk
in. He has not interacted with a human.
He's interacted with my AI agent. And I
just the reason I use this as an example
is again it was a paradigm shift moment
for me when I heard about agents. Mhm. About a month ago I went on and I ordered a bottle of electrolytes, and when my doorbell rang I freaked out. I freaked out. But, Amjad, who are you and what are you doing? So, uh, I started
programming at a very young age. You
know I I built my first business when I
was a teenager. I used to go to uh
internet cafes and program there. And I
realized that they didn't have software to manage the business. I was like, oh, why don't you create accounts? I don't have a server. It took me two years to build that piece of software. And that's sort of embedded in my mind this idea that, hey, there's a lot of people in the world with really amazing ideas, especially in the context where they live, that allow them to build businesses. However, the main source of friction between an idea and software, or call it an idea and wealth creation, is infrastructure: physical infrastructure, meaning a computer in front of you, an internet connection, the set of tools and skills that you need to build that. If we make it so that anyone who has ideas, who wants to solve problems, will be able to do it, I mean, imagine the kind of world that we could live in, where anyone who has merit, anyone who can think clearly, anyone who can generate a lot of ideas can generate wealth. I mean, that's an amazing world to live in, right, anywhere in the world. So with
Replit, the company that I started in 2016, the idea was like, okay, coding is difficult, how do we solve coding? And we built every part of the process, the hosting, the code editor; the only missing thing was the AI agent. And so over the past two years we've been working on this AI agent so that, you know, similar to ChatGPT, this revolution with GenAI, you can just speak your ideas into existence. I mean, this starts sounding religious. This is like the gods, you know, the myths that humans have created. They used to imagine a world where you can be everywhere and anywhere at once; that's sort of the internet. And you can also speak your ideas into existence. And, you know, it's still early. I think Replit Agent is a fantastic tool, and I think this technology is going to get so much more powerful.
Specifically, what is an AI agent? I've
got this um graph actually here which I
don't need to pass to any of you for you
to be able to see the growth of AI
agents. But this graph is Google search
trend data. This also resembles our
revenue too.
Oh, okay. Right. The water has
arrived. Hello. Thank you. You can come
on in. Can I have a go, please? Yes.
It's
3951. Great. Thank you so much. Thank
you. Thank you. I mean this is this is
like a supernatural kind of power. You
conjured water. I conjured water from my
mind. Yeah. And it's shown up here with
us and it clearly thinks we need a lot.
But but just to define the term AI agent
for someone that's never heard the term
before. Yeah. Yeah. So uh I assume most
of the audience now are familiar with
chat, right? You can go in and you can
talk to an AI. It can search the web for
you. It has a limited amount of tools.
Uh maybe it can call a calculator to do
some addition or subtraction for you, but that's about it. It's a request-response
style. Agents are when you give it a
request and they can work indefinitely
until they achieve a goal or they run
into an error and they need your help.
It's an AI bot that has access to tools.
Those tools are access to a web browser, like Operator; access to a programming environment, say, like Replit; access to, um, you know, credit cards. The
more tools you give the agent, the more
powerful it is. Of course, there's all these considerations around security and safety and all of that stuff. But the most important thing is the AI agent will determine when it has finished executing. Uh, today AI agents can run
for anywhere between you know 30 seconds
to 30 minutes. Uh there's a recent paper
that came out that's showing that every
7 months the number of minutes that the
agent can run for is doubling. So we're
at like 30 minutes now. In seven months
we're going to be at an hour then you
know 2 hours. Pretty soon we're going to
be at days. And at that point, you know, the AI agent is doing labor, kind of human-like labor. And actually OpenAI's new model o3 beat the expectation; it sort of doubled coherence over long-horizon tasks in just three or four months. So we're on this massive, I mean, this exponential graph, you know, that shows you the massive trend we're on.
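To make the definition above concrete, the "request, pick a tool, loop until done" pattern can be sketched in a few lines of Python. This is an illustrative sketch, not Replit Agent's or Operator's actual implementation; call_llm and the TOOLS entries are hypothetical placeholders.

```python
# Minimal sketch of an agent loop: the model is asked what to do next, it
# picks a tool, the result is fed back in, and this repeats until the agent
# decides it is finished or needs help. call_llm() and TOOLS are hypothetical
# placeholders, not a real vendor API.

def call_llm(history):
    # Placeholder for a real model call. A real agent would ask the LLM to
    # choose the next action given the conversation so far.
    return {"tool": "finish", "args": {"summary": "goal handled (stub)"}}

TOOLS = {
    "browse":   lambda args: f"page content for {args['url']}",
    "run_code": lambda args: f"output of {args['code']!r}",
}

def run_agent(goal: str, max_steps: int = 20) -> str:
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):                  # keeps working, not one-shot
        action = call_llm(history)
        if action["tool"] == "finish":          # the agent decides when it is done
            return action["args"]["summary"]
        if action["tool"] not in TOOLS:
            return "error: the agent needs human help"
        observation = TOOLS[action["tool"]](action["args"])
        history.append({"role": "tool", "content": observation})
    return "error: step budget exhausted"

print(run_agent("Order a case of water from the store around the corner"))
```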
Bret, give us a little bit of your
background, but also I saw you writing
some notes there. There was a couple of
words used there that I thought were
quite interesting, especially
considering what I know about you. The
word God was used a few times.
Well, uh, let me just say I'm an
evolutionary biologist, and probably for
the purposes of this conversation, it
would be best to think of me as a
complex systems theorist. One of the
things that I believe is true about AI
is that this is the first time that we
have built machines that have crossed
the threshold from the highly
complicated into the truly complex.
And I will say I'm listening to this
conversation with a um a a mixture of
profound hope and dread
because it seems to me that it is obvious
that the potential good that can come
from this technology is effectively
infinite. But I would say that the harm
is probably 10 times. It's a bigger
infinity. And the question of how we are
going to get to a place where we can
leverage the obvious power that is here
to do good and dodge the worst harms. I
have no idea. I I know we're not
prepared. So I hear you talking about
agents and I think um that's marvelous.
We can all use such a thing right away
and the more powerful it is, the better.
The idea of something that can solve
problems on your behalf while you're
doing something else is marvelous. But
of course, that is the precondition for
absolute devastation to arise out of a
miscommunication, right? To have something acting autonomously to accomplish a goal, you damn well better understand what the goal really is and how to pull back the reins if it starts accomplishing something that wasn't the goal. The potential for abuse
is also utterly profound. You know, you
can imagine, just pick your dark-mirror fantasy dystopia where something has
been told to hunt you down until you're
dead and it sees that as a, you know, a
technical challenge.
So, I don't know quite how
to balance a discussion about all of the
things that can clearly come from this
that are utterly transcendent. I mean, I
do think it is not inappropriate to be
invoking God or biblical metaphors here.
You know, you're uh producing water
seemingly from thin air. I believe that
does have an exact biblical parallel.
Uh, so, in any case, the power is here, but so too is the need for
cautionary tales, which we don't have.
That's the problem is that there's no
body of myth that will warn us properly
of this tool because we've just crossed
a threshold that is similar in its
capacity to alter the world as the
invention of writing. I really think
that's that's where we are. We're
talking about something that is going to
fundamentally alter what humans are with
no plan. You know, writing alters the
world slowly because the number of
people who can do it is tiny at first and remains so for thousands of years. This is changing things weekly, and
that's an awful lot of power to just
simply have dumped on a system that
wasn't well regulated to begin with.
Dan? Yeah. So, I'm an entrepreneur. Um, I've been building
businesses for the last 20 plus years.
I'm completely well positioned between
the two of you here: the excitement of
the opportunity and the terror uh of
what could go on. There's this image
that I saw of New York City in 1900 and
every single vehicle on the street is a
horse and cart and then 13 years later
the same photo from the same vantage
point and every single vehicle on the
street is a car. And in 13 years all the
horses had been removed and cars had
been put in place. And, um, if you had interviewed the horses in 1900 and said, uh, how do you feel about your level of confidence in the world? The horses would have said, well, we've been part of humanity, you know, hand and hoof, for many, many years, for thousands of years. There's one horse for every three humans. Like, how bad could it be? You know, we'll always have a special place. We'll always be part of society. Um, and
little did the horses realize that that
was not the case, that the horses were going to be put out of business very, very rapidly. And to reason through analogy, you know, there's a lot of us who are now sitting there going, "Hey, wait a second. Does this make me a horse in 1900?" I think a lot of people don't realize how massive an impact these kinds of technologies are going to have.
You know, one minute we're ordering a
water and that's cute and the next
minute it can run for days and in your
words uh it doesn't stop until it
achieves its goal and it comes up with
as many different ways as it could
possibly come up with to achieve its
goal and in your words it better know
what that goal is. I'm thinking a lot as
Daniel's speaking about the vast
application of AI agents and where are
the bounds? Because if this thing is
going to get incrementally smarter well
incrementally might be an understatement
it's going to get incredibly smart
incredibly quick and we're seeing this
AI race where all of these large
language models are competing for
intelligence with one another and if
it's able to traverse the internet and
click things and order things and write
things and create things and all of our
lives run off the internet today. What
can't it do? It's going to be smarter
than me.
No doubt it already is. And it's going
to be able to take actions across the
internet, which is pretty much where
most of my professional life operates.
It's like how I build my businesses.
Even this podcast is an internet product
at the end of the day, because you can create... we've done experiments now, and I can show the graphs on my phone, to make AI podcasts, and we've just managed to get them to have the same retention as The Diary of a CEO, now with the image generation model. Retention, as in viewer retention, the percentage of people that get to one hour, wow, is the same now. So we can make the video, we can publish it, we can script it, you can synthesize my voice so it sounds like me. So what is it going to be able to
do? Mhm. And can you give me the variety
of use cases that the average person
might not have intuitively conceived?
Yeah. So I I tend to be an an optimist
and and part of the reason is because I
try to understand the limits of the of
the technology. What can it do? Anything that we have a set of human data we can train it on. What can it not do? Anything that humans don't know how to do, because we don't have the training data. Of course, it's super smart, because it integrates a massive amount of knowledge that you wouldn't be able to read, right? It's also much faster. It can run through a massive amount of computation that your brain can't even comprehend. Because of all of that, they're smart. They can take actions, but we know the limits of what they can do because we trained them. They're able to simulate what a human can do. So the reason you were able to order the water there is because it was trained on data that includes clicking on DoorDash and ordering water. I
applaud your optimism and I like the way
you think about these puzzles, but I
think I see you making a mistake that we
are about to discover is very commonplace. So we have several different
categories of systems. We have simple
systems, we have complicated systems, we
have complex systems and then we have
complex adaptive systems.
And to most of us, a highly complicated
system appears like a complex system. We
don't understand the distinction.
Technologists often master highly
complicated systems and they know, you
know, for example, a computer is a
perfectly predictable system inside.
It's deterministic. Mhm. But to
most of us, it functions, it's it is
mysterious enough that it feels like a
complex
system. And if you're in the position of
having mastered highly complicated
systems and you look at complex systems
and you think it's a natural extension,
you fail to
anticipate just how unpredictable they
are. So even if it is true that today
there are limits to what these machines
can do based on their training data, I
think the problem is this: to see what's going to happen, you really want to start thinking of this as
the evolution of a new species that will
continue to evolve. It will partially be
shaped by what we ask it to do, the
direction we lead it, and it will
partially be shaped by things we don't
understand. So, how does this computer that we have work? Well, one of the things that it does is we plug them into each other using language. It's almost
as if you've plugged an Ethernet cable
in between human minds. And that means
that the cognitive potential exceeds the
sum of the individual minds in question.
Your AIs are going to do that. And that
means that our ability to say what they
are capable of does not come down to
well we didn't train it on that data. As
they begin to interact that feedback is
going to take them to capabilities we
don't anticipate and may not even
recognize once they become present.
That's one of my fears. This is an
evolving creature and it's not even an
animal. If it were an animal, you could
say something about what the limits of
that capability are. But this is a new
type of biological creature and it will
become capable of things that we don't
even have names for. Even if it didn't
do that, even if it just stayed within
the boundaries that you're talking
about, you mentioned about it having
median level intelligence. Well, that by
definition means 50% of the people on
the planet are less intelligent than AI. Uh, you know, to a degree, it's almost as if we've just invented a new
continent of remote workers. Um there's
billions of them. They've all got a
masters or a PhD. They all speak all the
languages. Anything that you could call
someone or ask someone over the internet
to do, they're there 24/7 and they're 25
cents an hour. So, like, if that really happened, like if we really
did just discover that there were a
billion extra people on the planet who
all had PhDs and were happy to work
almost for free that would have a
massive disruptive impact on society.
Like society would have to rethink how
everyone lives and works and gets
meaning. Um, so, like, and that's if it just stays at a median level of intelligence. It's pretty profound. I still think it's a tool. This is power that is there to be
harnessed by entrepreneurs. You know, I
I think that the world is gonna get
disrupted, right? Um, and, you know, this post-war world that we created, where you go through life, you go through 12 years of education, you get to college and you just check the boxes, you get a job. We can already see the fractures of that; you know, this American dream is perhaps no longer there. And so
I think the world has already changed.
So, but, like, what are the opportunities? Obviously there are downsides. The opportunity is, for the first time, access to opportunity is equal. And I do think there's going to
be more inequality. And the reason for
this inequality is because actually
Steve Jobs, you know, made this analogy: the best taxi driver in New York is like 20% better than the average taxi driver, but the best programmer can be 10x better. You know, we say the 10x engineer. Now, the variance will be in the thousand-x, right? Like, the best entrepreneur that can leverage those agents could be a thousand times better than someone who doesn't have the grit, doesn't have the skill, doesn't have the ambition. Right? So that will create a world where, yes, there's massive access to opportunity, but there are people who will seize it and there'll be people who don't. I imagine it almost
like a um a marathon race and AI has two
superpowers. One superpower is to
distract people, um, such as the TikTok algorithm. That's right. And the other
superpower is to make you hyper
creative. So you become a hyper consumer
or a hyper creator. And in this marathon
race, the vast majority of people have
got their shoes tied together cuz AI is
distracting them. Some people are running a traditional race. Some people
have got a bicycle and some people have
got a Formula 1 vehicle. And it's going
to be very confronting when the results
go on the scoreboard and you see, oh,
wait a second. There's a few people who
finished this marathon in about 30
minutes. And there's a lot of us who
finished in like 18 hours because we had
our shoes tied together. And I can't understand, if we've got equal opportunity, why there's so much disparity in how fast people finished. You know, I'm using an analogy, but the idea is that a lot of people are going to start earning a million dollars a month, and a lot of people are going to say, "Hey, I can't even get a job for $15 an hour." There's going to be this kind of interesting wedge. Well, but I hear in
what both of you are saying
a kind of assumption that this will all
be done on the up and up. And I do want to just say, I am not a doomer. I agree that the doomers are likely incorrect, that their fears are misplaced. But I do think we have a question of a related-rates problem.
You know I said the potential for good
here is infinite and the potential for
bad is 10 times.
Right? What I mean is there are lots of
ways in which this obviously empowers
people to do things that they were going
to be otherwise stuck in the mundane
process of learning to code and then
figuring out how to make the code work
and bring it to market and all of that.
And this solves a lot of those problems
and that's obviously a good thing. Really, what we should want is the wealth creation objective, as quickly as we can get there. But the problem is, you
know, as much as that hyper-creative individual is empowered to make wealth, the person who is interested in stealing may be even more empowered. And I'm concerned about that at a pretty high level. The abuse cases may outnumber the use cases, and we don't have a plan for what to do about that. Um,
can I give you a quick, like, introduction here, like the optimistic view? OpenAI invented GPT; the first version of GPT came out in 2018, and 2019 was GPT-2. And so OpenAI, you know, now they get a lot of criticism and lawsuits from Elon Musk that they're no longer open source, right? They used to be. The reason is, with GPT-2 they said, we are no longer going to open source this technology, because it's going to create opportunities for abuse, such as, you know, influencing elections, you know, stealing grandma's credit card, and so on and so forth. Wouldn't you say, Bret, that it is kind of surprising how little abuse we've seen so far?
I don't know how much abuse we've seen
so far. I don't know how any of us do.
And also, even in the example that you suggest, where ChatGPT is no longer open source to prevent abuse, I'm taking their word for it that that's the motivation. Whereas, as a systems theorist, I would say, well, if you had a
technology that was excellent at
enhancing your capacity to wield power,
then open sourcing it is a failure to
capitalize on that, and the most remunerative use is to keep it private and then either sell the ability to manipulate elections to people who want to do so, or sell the ability to have it kept off the table for people who don't. And I would expect that that's probably what's going on. If you have
a technology as transformative as this,
giving it away for free is
counterintuitive, which leaves those of
us in the public more or less at the
mercy of the people who have it.
So I don't see the reason for comfort there. We are at the dawn of this radical transformation of humans, and by its very nature as a truly complex and emergent innovation, nobody on earth can predict what's going to happen. We're on the event horizon of something.
And the problem is, you know, we can talk about the obvious disruptions, the job disruption, and that's going to be massive. And does that lead some group
of elites to decide, oh well, suddenly
we have a lot of useless eaters and what
are we going to do about that? Because
that conversation tends to lead
somewhere very dark very quickly. Um,
but I think that's just the beginning of
the the various ways in which this could
go wrong without the doomer scenarios
coming into play. This is an
uncontrolled experiment in which all of
humanity is downstream. Yeah. So I was trying to make the point that OpenAI has been sort of wrong about how big the potential for harm is. Like, you know, I think we would have heard about it in the news, about how much harm it's done, and maybe some of it is working in the shadows. But, like, the few incidents that we've heard about, where LLMs, large language models, the technology that's powering ChatGPT, have caused huge headlines, like the New York Times talked about this kid that was, you know, perhaps goaded by some kind of chat software that, you know, helps teenagers to be less lonely, into suicide, which is tragic. And
obviously, these are the kind of safety and abuse issues that we want to worry about. But these are kind of isolated incidents, and we do have open-source large language models. Obviously, the thing that everyone talks about is DeepSeek. DeepSeek is coming from China. So what is DeepSeek's incentive? You know, perhaps the incentive is to destroy the AI industry in the US. You know, when they released DeepSeek, the market tanked, the market for Nvidia, the market for AI and all of that. But there is an incentive to open source. Meta is open sourcing Llama. Llama is another AI similar to ChatGPT. The reason they're open sourcing Llama, and Zuckerberg just says that out loud, is basically they don't want to be beholden to OpenAI. They don't sell AI as a service; they use it to build products. And there's this concept in business called commoditize your complement: because you need AI as a technology to run your service, the best strategy is to open source it. So these market forces
are going to create conditions that I
think are actually beneficial. So I'll give you a few examples. One is, first of all, the AI companies are motivated to create AI that is safe so that they can sell it. Second, there are security companies investing in AIs that allow them to protect against the malicious acting of AI. And so you have the free market, and we've always had that, you know, but generally as humanity we've been able to leverage the same technology to protect against the abuse. So I don't really
understand this. And maybe this is actually the exact discussion that you would expect between somebody at the frontier of the highly complicated staring at a complex system, and a biologist who comes from the land of the complex and is looking back at highly complicated systems. In game theory we have something called a collective
action problem. And in the market that
you're
describing, an individual company has no
capacity to hold back the abuses of AI.
The most you can do is not participate
in them. You can't stop other people
from programming LLMs in some dangerous
way. And you can limit your own ability
to earn based on your own limitations of
what you're willing to do. And then
effectively what happens is the
technology gets invented anyway. It's just that the dollars end up in somebody else's pocket. So the incentive is not
to restrain yourself so that you can at
least compete and participate in the
market that's going to be opened. And
so the number of ways in which you can
abuse this technology. Let's take a
couple.
What is to stop somebody from training LLMs on an individual's creative output and then creating an LLM that can out-compete that individual, that can effectively not only produce what they would naturally produce over the course of a lifetime, but can extrapolate from it and can even hybridize it with the insights of other people, so that effectively those who have the LLM can train it on the creativity of others and not cut them in on the use of that insight? You can
effectively end up putting yourself out
of business by putting your creative
ideas in the world where they get sucked
up as training data for future LLMs.
That is unscrupulous, but it's
effectively guaranteed. In fact, it's
already happening. So, that's a problem.
And likewise, what would stop somebody from interacting with an individual and training an LLM to become like a personalized con artist? Something that would play exactly to your blind spot. That does happen. That is starting to happen. Um, people get phone calls and it sounds like their daughter: I've lost my phone and I'm borrowing a friend's phone, and all of that sort of stuff. What's interesting is that I think you make a really good point.
I worry about the impact on society. And
yet when I look at every single
individual who uses AI regularly, it
almost has nothing but profoundly
positive impact on their life. I look at
people like um I was just spending some
time with my parents-in-law um who are
in their 70s and early 80s and they use
AI regularly for all sorts of things
that they find incredibly valuable and
that improves the quality of their life.
I personally did an M&A, a mergers and acquisitions, deal where I bought a company last year, and the AI was so
powerful at helping that process. The
conversations were transcribed and they
were turned into letters of intent and
then press releases and uh legal
documents and we probably shaved
$100,000 worth of uh costs and and we
sped up the whole process and it was
pretty magical to see how how it could
happen. With that said, you know, there's all of these, like, well, $100,000 worth of lawyers didn't get paid, right? So, well, what I want to know... Yeah, people get upset about it, but if we look back
at the invention of the cell phone or
the invention of the social media
platforms, there would be every reason
to have the exactly the same
perspective, right? I remember the
beginning of Facebook and I remember the
idea that suddenly the process that used
to afflict people where you would just
lose touch with most of the people who
had been important to you, that was not
something that needed to happen anymore.
You could just retain them permanently
as a part of a diffuse
uh social grouping that just simply grew
and value was added. There's no end to
how much good that did, but what it did
to us was profound and not evident in
the first chapter. Say the same thing
about the cell phones and the dopamine
traps and the way this has disconnected
us from each other, the way it has
disconnected us from nature, the way it
has altered the very patterns with which
we think. It has altered every
classroom. Mhm. So, and those things I
think are going to turn out to have been
sort of minor foreshadowings of the
disruption that AI will produce. So, I
agree with you today. The amount you can
do with AI, there's a tremendous amount
of good. There's a little bit of harm.
Maybe that's something we need to worry
about. But as this develops, as we get
to, you know, to peer over the edge of
this cliff that we're headed to, I think
we're going to discover that we can't
yet detect the nature of the alteration
that's coming. So, I just wanted to add
some context to that, cuz, Amjad, I saw the interview you did in a newsletter in
2023 where you said, "I wouldn't prepare
for AGI in the same way that I wouldn't
prepare for the end of days." It's
effectively the end of days if the
vision of AGI that some of these
companies have comes to bear because
it's called the singularity moment
because you can't really predict what
happens after that. And so like how
would you even prepare for that and you
want to prepare for the more likely
world and that world that you can
actually predict is a world where yes
there's like a massive improvements of
technology and there's like insane
compounding effects of technology and
it's pretty hard to keep up. From that
it appeared that in 2023 you were saying
a similar thing to Brett in terms of we
can't see around the corner here because
it is a singularity.
Sorry you also used AGI artificial
general intelligence. It'd be
interesting to know what your definition of AGI is. Say that. Yeah. So
what I was saying there is, even if I'm wrong, that you can actually create an unbounded, seemingly conscious artificial intelligence that can entirely replace humans and can act autonomously in a way that even humans can't act, and can coordinate across different AIs, different data centers, to take over the world. Even if that's so. So the definition of AGI is artificial general intelligence, meaning that AI can acquire new skills efficiently in the same way that humans can acquire skills.
Right now AIs don't acquire skills efficiently; you know, they require a massive amount of energy and compute, an entire data set of compute, to acquire these skills. And I think there's again a limit on how general intelligence can get; I think for most of the time they're lagging in terms of what humans are capable of doing. The singularity is based on this concept of intelligence explosion. So once you create an AGI, once you create
an artificial general
intelligence, that intelligence will be
able to modify its own source code and
create the next version that is much
more intelligent. And the next version
creates the next version and the next,
you know, for infinity, right? Within a
week, within a week, perhaps within
milliseconds at some point. Yeah. Right.
Uh, because it might invent a new computing substrate and all of that.
perhaps they'll use quantum computing. And so then you have this intelligence explosion, in a way that it is impossible to predict how the world is going to be. And what I'm saying is, this is sort of like an end-of-times story; like, how would you even prepare for that? So if that's coming, why would I spend my time preparing for it? I think it's unlikely to happen. You can't see around the corner. Yeah, but I'd rather prepare, that's what I was saying there. I'd rather prepare for the more likely world in which we have access to tremendous power, but the world's not ending and humans are still important.
I don't I don't know why you say more
likely. I mean, I I think the structure
of your argument is is sound. You would
prepare for the world that might happen
for which you can prepare. There's
literally no point in trying to prepare for a world you can't predict at all.
The only thing you can do is just sort
of upgrade your own skills and pay
attention. But if I have one message for
the technologists, it's that your
confidence about what this can and
cannot do is misplaced because you have
without noticing stepped into the realm
of the truly complex. In the truly
complex, your confidence that you know what's going on should drop to near zero. Are these things conscious? I don't know. But will they be? Highly likely they will become conscious, and we will not have a test to tell us whether that has happened. Elon Musk predicts
that by 2029 we will have AGI that surpasses the combined intelligence of all humans. And Sam Altman actually wrote a blog three months ago that I read where he said, we are confident now, Sam Altman being the founder of OpenAI, which created ChatGPT, we are confident now that we know how to build AGI as we have traditionally understood it. When I put
these things together, I go back to the
central question of what role humans have in this, in the sort of professional output, in GDP creation. If
it's smarter than all humans combined,
if Elon Musk is correct there, and it's
able to take actions across the internet
and continue to learn. This is like a
central question that I'm hoping I can
answer today, which is like where do we
go? Yeah. I mean, in my vision of the world, we are in the creative seat. We're sitting there where we are controlling swarms of intelligent beings to do our job. You know, the way you run your business, for example: you're sitting at a computer, you have an hour to work. Yeah. And you're going to launch like a thousand SDRs, you know, sales representatives, to go grab as many leads as possible, and you're generating a new update on Replit for your website here, and then on this side you actually have an AI that's crunching data about your existing business to figure out how to improve it, and these AIs are kind of somehow all coordinating together. And I am trying to privilege the human. Like, this is my mission: to build tools for people. I'm not building tools for agents; agents are a tool. And so ultimately, not only do I
think that humans have a privileged position in the world and in the universe. We don't know where consciousness is coming from; we don't really have the science to explain it. Um, I think humans are special. That's one side, my belief that humans are special in the world. And the other side, which I understand, is that the technology today, and I think for the foreseeable future, is going to be a function of its training data. So there was this whole idea, like, what if ChatGPT generates pathogens? Well, have you trained it on pathogens? They were doing that kind of stuff in Wuhan, you know. I mean, a lot of the biotech
companies are essentially using
artificial intelligence. Like, I can think of AbCellera, I think it's AbCellera in Canada; their whole business is using AI to create new vaccines, using artificial intelligence and bigger data sets than we've ever had before. And I know because I was very close to one of the founders, of people involved in AbCellera. So that work is going on anyway. And if we think about Wuhan, it's quite probably well known now that it came out of a lab, and people working in a lab, in that scenario, had a huge impact and shut down the world. The central question I'd love to answer before I throw it back open to the room is what jobs, because I know that you have this perspective. What jobs are
going to be made redundant in a world
where I am sat here as a CEO with a
thousand AI agents, right? I was
thinking of all the names of my of the
people in my company who are currently
doing those jobs. I was thinking about
my CFO when you talked about processing
business data, my graphic designers, my
video editors, etc.
So what jobs are going to be impacted? Yeah, all of those. Uh, so I think, and what do they do, maybe this is useful for the audience: I think if your job is as routine as it comes, your job is gone in the next couple of years. So meaning, if you're in those jobs, for example quality assurance jobs, data entry jobs, where you're sitting in front of a computer and you're supposed to click and type things in a certain order, Operator and those technologies are coming on the market really quickly, and those are going to displace a lot of labor. Accountants? Accountants, yes.
I mean I've just pulled a ligament in my
foot, and they did an MRI scan and
I had to wait a couple of days for
someone to look at the MRI scan and tell
me what it meant. Yeah, I'm guessing
that that's gone. Yeah, I think the healthcare ecosystem is hard to predict because of regulation, and again there's so many limiting factors on how this technology permeates the economy, because of regulations and people's willingness to take it. But, you know, unregulated jobs that are purely text in, text out: if in your job, you know, you get a message and you produce some kind of artifact that's probably text or images, that job is at risk. So, just to give
you some stats here as well, about 50%
of Americans who have a college degree
currently use AI. The stats are
significantly lower for Americans
without a college degree. So, you can
see how a splinter might emerge there, and that crack will widen, because people like us at this table are all messing around with it. But my mom and dad in Plymouth, in the southwest of England, rural England, haven't got... like, they just figured out iPhones. So, like, I got them
an iPhone and now they're like texting
me back. AI is a million miles away. And
if I start running off with my AGI, my
agents, that gap is going to widen.
Women are disproportionately affected by
automation, which is what you were
talking about there, with about 80% of working women in an at-risk job compared to just over 50% of men, according to the Harvard Business Review. And jobs requiring only a high school diploma have an automation risk of 80%, while those requiring a bachelor's degree have an automation risk of just 20%. So we can see again how this will cause a sort of... It's also a huge risk with business process outsourcing, which is essentially Western countries sending jobs to India, to the Philippines. Like, at the moment millions of people have been lifted out of poverty through the ability to do those kinds of business process outsourcing jobs, and those are all going to go. But these, they're going to have a thousand employees. But, uh, also, these people are actually already transitioning to training AIs. Mhm. You know, so
there's going to be a massive industry
around training AI until they're
trained. Well, no, you have to continuously acquire new skills, and this is what I'm talking about. I mean, this is, again, if AI is a function of its data, then you need increasingly more
data. And by the way, we ran out of
internet data. I was actually thinking
interestingly that this might not be
great for the United States or the UK,
the Western world because it is going to
be a leveler where now a kid in India
doesn't need a Silicon Valley office and
$7 million in investment to throw up a software company, basically. My, yeah, my
belief is that, so, I have a more broad definition of AGI and the singularity. And for me, AGI is: do we have artificial general intelligence in terms of, generally speaking, can AI just do stuff that humans used to be able to do? And we've already crossed that point. We have this general intelligence that we can now all access, and 800 million people a week are now using ChatGPT; it's exploded in the last 3 months. And then, to me, a singularity: uh, when the first tractor
went out onto a farm, for me that was a
singularity moment. Uh because everyone
who worked in farming, it used to take 100 people to plow a field, and now a
tractor comes along and two guys with
the tractor can now plow the field in
just as much time and now 98 people out
of 100 are completely out of a job. We
also always underestimate a technology
if it does go on to change history. When
you look back through cars, horses, planes, the Wright brothers just thought of a plane as being something that the army could use; we had no idea of the application. So someone said to me recently, they said, "When it does change the world, we underestimate the impact with which it will change the world."
And I see people now with their
estimations of AI and AI agents already
incredibly optimistic. And so if history
holds here, we're undershooting the
impact it's going to have. And I think
this is the first time in my life where
the industrial revolution analogies seem
to fall a little bit short. Yeah.
Because we've never seen intelligence. It's like, I could think of this as, and I'm not the most intelligent person on this, but I could see that as, like, the disruption of muscles, whereas this is the disruption of intelligence.
That's exactly the thing: what makes human beings special is our cognitive capacity, and very specifically our ability to plug our minds into each other, so that the whole is greater than the sum of the parts. That's what makes human cognition special. And what we are doing is we are creating something that can technologically surpass it without any of the preconditions that make that a safe process. So yes, we've revolutionized the world how many different times? It's innumerable. But, you know, we've made farming vastly more efficient. That's different than taking our core competency as a species and surpassing ourselves with the product of our labor. I think
your question is a good one. Then what do we become? We only have one thing left. Um, we have our muscles, which we got rid of in the industrial revolution, and then we have our intellect, which is this digital revolution. Now we're left with emotions and agency. So, essentially, the agency idea: I think we used to judge people on IQ, and now IQ is the big leveler, and now, going
forward for the next 10 years we're
going to look at are you a high agency
person or a low agency person. Do you
have the ability to get things done and
coordinate agents? Do you have the
ability to start businesses or give
orders to digital armies? Uh, you know, and essentially these high-agency people are going to thrive in this new world because they have this other thing that's been bubbling under the surface. Which is really interesting, when you said agency is going to remain as an important thing: we're sat here talking about AI agents, and the crazy thing, in a
world of AI agents that have superintelligence, is I can just tell my agent: listen, I'm going on holiday, please build me a SaaS company that spots a market opportunity, throw up the website, post it on my social media channel, I'll be in Hawaii. And this new agentic world is stealing that too, cuz now it can take action in the same way that I can: browse the internet, call Domino's Pizza, speak to their agentic agent, organize my pizza to be there before I even wake up. And in fact, predictability, you
know, OpenAI now learns, and Sam Altman said that they've expanded the memory feature. So it's knowing more and more and more about me. It'll almost be able to predict what I want when I want it. It'll know Steve's calendar. He's arriving at the studio. Make sure his cadence is on the side. Make sure his iPad has the brief on it. Do the brief. Do the research for me. And everything else, say, remember Bret's birthday, so when I arrive there'll be something. In fact, it's
removing my need for any agency. Yes.
And, you know, again, I don't know how to make this point so that it occurs to people what I'm really suggesting. But today maybe it's not conscious, but, well, let me put it to you this way. If you're
conscious, you started out as a child
that wasn't. And although this may not
fully encapsulate it, you are
effectively an LLM, right? You go from
an unconscious infant to a highly
conscious adult. And the process by
which you do that has a lot to do with
being trained effectively on words and
other things in an environment in
exactly the way that we now train these
AIs. So the idea that we can take
consciousness off the table, it won't be
there till we figure out how to program
it in and we're safe because we don't
know how consciousness works. I take the
opposite lesson. We've created the exact
thing that will produce that phenomenon
and then we can have philosophers debate
whether it's real consciousness or it
just behaves exactly as if it were. And
the answer is those aren't different.
Doesn't matter. And the same thing is
true for agency. You know, especially if
you've created an environment in which
these AIs are de facto competitors, what
you're effectively doing is creating an
evolutionary environment in which they
will evolve to fill whatever niches are
there. And we didn't spell out the
niches. So, I have the sense we have invited, we have created, something that truly is going to
function like a new kind of life and
it's especially troubling because it
speaks our language. So that leads us to
believe it's more like us than it is and
it's actually potentially quite
different. So, but by the way, he's the optimist here, right? Like, he's so optimistic about LLMs and how they're going to evolve. Yes. It's amazing. It's amazing technology. Like, I think it raised global IQ, right? Like, 800 million people are that much more intelligent, and emotionally intelligent as well. Like, I know people who previously were very coarse and they kind of rubbed people the wrong way. They would say things in a not-so-polite way, and then suddenly they started putting what they're saying through ChatGPT in order to kind of make it kinder and nicer, and they're more liked now. And so not only is it making us
more intelligent but also it allows us
to be the best version of ourselves. And
the scenario that you're talking about, I don't know what's wrong with that. Like, you know, I would want less agency in certain places. Like, I would want something to help me not, you know, open up a peanut butter jar at night, right? You know, there are places in my life where I need more control, and I would rather cede it to some kind of entity that could help me make better choices.
I mean unfortunately even if there is
some small group of elites that are able
to go to Hawaii while something else
does the mundane details of their business building, we are rather soon going to be faced with a world that has billions of people who do not have the skills to leverage AI. Some of them will be necessary for a time. You're going to need plumbers. But this is
also not a long-term solution because
not only are there not enough of those
jobs,
um, but of course we have humanoid
robots that once imbued with AI capacity
will also be able to take, you know,
they'll be able to crawl under your
house into the crawl space and fix your
plumbing.
So what typically happens when you have
a massive economic contraction that
arises from the fact that a huge number
of people are out of work is that the
elites start looking at those people and
thinking well we don't really need them
anyway. And so the idea that this AI
disruption doesn't lead us to some very
human catastrophe I think is overly
optimistic and that we need to start
preparing right now. What are the rights
of a person who has had whatever it is
that they've invested in completely
erased from the list of needs? Is that
person responsible for not having
anticipated AI coming? And is it their
problem that they are now starving
and they're being eyed by others as you
know a useless eater? I don't think so.
How is it different than, uh, when the, what's it called, the weaving loom came and the textile workers, you know, the result of the, the Luddite sort of revolution? How is it different than any time in history when technology automated a lot of people out of jobs? I would
say scale and speed, that's how it's different. And the scale and speed is going to result in an unprecedented catastrophe, because of the rate at which people are going to be simultaneously sidelined, not just in one industry but across every industry. It's just simply... And it also did actually happen. There was, uh, for the first 50
years of industrialization from like
the late 1700s to the early 1800s, you actually... the Charles Dickens novels are essentially people coming from the farms who are displaced arriving in cities, kids living on the streets. Uh, the British decided to pick everyone up and send them over to Australia, which is where I came from. Um, and, you know, there was this massive issue of displacement. I
think we're going to go into a high
velocity economy where rather than this
long arc of career that lasts 45 years,
we're going to have these very fast
careers that last 10 months to 36
months. And you invent something, you take it to market, you put together a team of five to 10 people who work together, you then get disrupted, you come... Can I mention a story here? Uh,
there's an entrepreneur that used Replit in a similar way. Uh, his name is Billy Howell. You can find him on YouTube, on the internet. He would go to Upwork and he would find what people are asking for, different requests for certain apps, technologies. Then he would take what they're asking for, put it into Replit, make it an application, call them, and tell them, "I already have your application. Would you pay $10,000 for it?" And so that's sort of an arbitrage opportunity that's there right now. That's not arbitrage.
That's theft. How is... no, what is it? How is that theft? You have somebody who has an idea that can be brought to market, and somebody else is cryptically detecting it and then selling their own idea back to them. Well, they're paying them to do that. They're saying, "I will give you $500 if someone makes this for me." Right? But this is what I
more or less think is going to happen
across the whole economy is that yes,
from this perspective, we can see that
everybody is suddenly empowered to build
a great business. Well, what do we think
about the folks who are going to be
displaced from the top? What are they
going to think about all these people
building all of these highly competitive
businesses? And are they going to find a
way to do, you know, what venture
capital has done or what record
producers have done? What they're going
to do is they're going to take their
superior position at the top and they
are going to take most of the wealth
that is produced by all of these people
who have these ideas that in a proper
market would actually create businesses
for them and they're going to parasitize
them. I think that, with this introduction of AI and AI agents, old value has moved, and now it's not going to be the case that the idea itself is the moat, and it's not going to be the case that resources are the moat. So in such a scenario you still have to figure out distribution. You still have to have, for example, like, an audience. So if you're a podcaster now and you have a million followers on Twitter, you're in a prime position, because you now have something that the great guy with a great idea with no audience doesn't have: you have inbuilt distribution. So I now think actually much of the game might be moving to, like, yeah, still about taste and ideas, but also the moat is distribution. Yeah. And speaking of
adaptive systems, um, one of the adaptations that will happen is people will seek humans and will seek proof of humanity. Oh, I agree that authenticity is going to become the coin of the realm, and anything that can be faked or cheated is going to be devalued, and things, you know, spontaneous jazz, or, you know, comedy that is interactive enough that it couldn't possibly have been generated with the aid of AI, those things are going to become prioritized. You know, spontaneous oratory rather than speeches. Does that answer some of your questions? No, it answers my question for the tiny number of people who are in a position to do those things. Steven,
you used the word moat. Um, which I think is a really important word for entrepreneurs. We, like, have to have a moat. We think a lot about moats, and it's an industrial age... A lot of people don't even know what a moat is, what you mean by moat. It's just, I often think about this idea of what are the moats that are left. So to define how I define a moat: you've got a castle and
it's got a like a small circle of water
around it. And once upon a time that
circle of water defended the castle from
attack and you can pull up the
drawbridge so nobody could attack you
very easily. It's a defense from something. So it's your shield, your defense. And once
upon a time as an entrepreneur, you
know, I've got a software company in San
Francisco called Thirdweb and we
raised almost $30 million. We have a
team of 50 great developers. And much of
our moat was you can't compete with us
if you don't have the 50 developers and
the $30 million in the bank. How much of
that 30 million went to coding? The vast
majority of it. I mean what else are we
going to do? What else would we do? So this is a good thing. I think moats are a bad thing. Okay, let me make the argument there. So everyone is looking for moats; for example, one of the more significant moats is network
effects. Yeah, you know, so you can't
compete with Facebook or Twitter
because to move people from Facebook or
Twitter, you need to it's the collective
action problem. You need to move them
all at once because if one of them
moves, then it's the network is not
valuable, they'll go back. So you have
this chicken and egg problem. Let's say
that we have a more decentralized way of
doing social networks that will remove
the power of Twitter to kind of censor
and I think you're at the other end of censorship, right? And so part of my
optimism about humanity is that um
generally there's self-correction.
Democracy is a self-correcting system.
Uh free markets are largely
self-correcting systems. There are
obvious problems with with free markets
that we can discuss. But take health: there is an obesity epidemic. There was this period of time when companies ran loose making these sugary, salty, fatty snacks, and everyone gorged on them and everyone got very unhealthy. And now you have Whole Foods
everywhere. Today, people in Silicon
Valley, they don't go to bars at all.
They go to running clubs. That's how you
meet. That's how you go find a date. You
go to running clubs. And so, there was a
shift that happened because there was a
reaction. Obviously, cigarettes is
another example. You know, you were
talking about phones and our addiction
to phones. And I see a shift right now
like in my friend circle, people who are constantly on their phones are already kind of frowned upon, and they don't want to hang out with you because you're constantly staring at your phone. So there's always
these reactions. But the problem is, you reference self-correction and
I agree that there's actually an
automatic feature of the universe in
which the self-correction happens. You
can't have a positive feedback that
isn't reined in by some outer negative
feedback. But the corrections, the list
of corrections involves things like you
point to where people become enlightened
and they realize that they're doing
themselves harm with either the sugar
that they're consuming or the dopamine
traps on their phone and they get
better. But also on the list of
corrective patterns are genocide and war
and, you know, parasitism. And the problem
is these things are destructive of
wealth. And so you allude to
the superiority of an open market without moats. Presumably the benefit of
that is that more wealth gets created
because people aren't kept from doing
things that are productive. I see that.
But then what is the product of all of
this new wealth that is going to be
generated by a world empowered by AI?
Does it end up so highly concentrated
that you have a tiny number of ultra
elites and a huge number of people who
are utterly dependent on them? What
becomes of those people? The learning
process, the self-correction process
goes through harm in order to get to
that more enlightened solution. There's
nothing that protects us from the harm
phase being so apocalyptically terrible
that, you know, we get to the other side
of it and we say, "Well, that was a hell
of a correction." Or maybe there's
nobody there to even say that. Those are
also on the table. It reminds me of a
mouse trap where you see the cheese and
we're going, "Oh my god, my
grandmother's going to be able to do
some research and oh my god, my life's
going to get easier." So you head closer
and closer to the cheese.
And historically, if we look at all of
the last 10,000 years, it's a very small
number of elites who own absolutely
everything and a very large number of
serfs and peasants who have a
subsistence living. You know, if the
elites are too greedy and they freeze
out the peasants at too high a level and
they try to use brutality or Yeah.
eventually it comes back to haunt them.
And so what you get is a recognition
that you you need a system that does
balance these things and you know the
west has the best system that we've ever
seen. It's one in which we agree on a
level playing field. We never achieve it
but we agree that it's a desirable thing
and the closer we get to it the more
wealth we create. But again,
if AI empowers those with ill
intent at a higher rate than it empowers those who are wealth-creating and pro-social, we may be in for a massive regression in how fair the market of the West is. Is that your top
concern versus economic displacement?
And I think they're the same thing. How
are they the same thing?
because the economic displacement is
going to start. I don't know how
many million people are going to be
displaced from their jobs in the US.
Suddenly, we're going to have a question
about whether or not we have obligations
to them. And you agree with that, don't
you? Yes. But again, it's the no pain, no gain thing. I mean, we're going to go through a period of
disruption. And I think at the other
end, the old, you know, sort of
oppressive systems will be broken and
we're going to create perhaps a fair
world, but it's going to have its own
its own problems. And what's the scale
of that disruption in your estimation?
It's hard to say, because there's this concept of limiting factors: there is regulation, there's the appetite of people. Today, for example, the healthcare system is very resistant to innovation because of regulation, and that's a bad thing. On the regulation point, it's
worth saying that when Trump came into power, he signed an executive order called Removing Barriers to American Leadership in Artificial Intelligence, which revokes previous AI policies that were deemed to be restrictive. And obviously when you
think about where the funding is going
in AI, it's going to two places. It's
going to America and it's basically
going to China. That's the the vast
majority of investment. So with those
two in competition, any regulation that
restricts AI in any way is actually
self-sabotage. Mh. And this is, you
know, I live in Europe some of the time
and it's already annoying to me that
when Sam Altman and OpenAI released the o3 model, this new incredible model,
it's not in Europe because Europe has a
regulation which prevents it from coming
to Europe. So, we're now at a
competitive disadvantage
which Sam Altman has spoken about. And
more broadly on this point of
disruption, I was quite unnerved when I heard that Sam Altman's other startup was called
Worldcoin. And Worldcoin was conceived
with the goal of facilitating universal
basic income,
i.e. helping to create a system where
people who don't have money are given
money by the government just for being
alive to help them cover their basic
food and housing needs. Which suggests
to me that the guy that has built the
biggest AI company in the world can see
something that a lot of us can't see, which is: there's going to need to be a system to just hand out money to people, because they're not going to be able to survive otherwise. I
fundamentally disagree with that. Which part do you disagree with? I disagree, first of all, that humans would be happy with UBI. I think that a core value of humans, and you can be curious about the evolutionary reasons for it, is that we want to be useful. It's really
important to know that a lot of the jobs
that are at risk are the most high-
status, highly paid jobs in the world.
Let's take the highest paid job in
America um which is an
anesthesiologist. This is the highest paid job, the highest paid salaried job. Salaried job, yeah. And the majority of
that job is observing a patient, knowing
which type of medication would work best
with their body, giving them the exact right amount, monitoring the impact of that on the body, and then making slight adjustments. With the right technology, any nurse will be able to do that job. And you might have
one
anesthesiologist on site supervising 10, 20, 30, 40 wards
and the technology is, you know, doing
the job, but that one person is there
just to kind of supervise if something
went wrong or if there was an ethical
dilemma. What's wrong with that? I mean, if the precision is better... No, there's nothing wrong with
that except for the fact that a lot of
people, hundreds of thousands of people
have spent their entire life training to
be that, they get an enormous amount of
purpose and satisfaction about the fact
that that's their career, that's their
job. They have mortgages, they have
houses, they have status, and that's
about to go away. Well, if it's the highest paid job, maybe you should start saving.
Yeah. Well, I mean, but yeah, I hear you, but you're talking about
people who have done vital work. Mhm.
Highly specialized work and are
therefore not in a great position to
pivot pivot based on the invention of a
technology that they didn't see coming
because frankly, I mean, in the
abstract, maybe we all saw AI coming
somewhere down the road, but we did not
know that it was going to suddenly dawn.
And we do have to figure out what to do
with those people. It's not their fault
that they've suddenly become obsolete
and it's inconceivable that people will
accept this. It is not. It is
fundamentally incompatible with our
nature. We have to have things to strive
for and you know you can sustain life
that way but you cannot um sustain a
meaningful existence and so it's a
short-term plan at best. Let's talk
about meaning. Um on that point of job
displacement this is already happening.
Klarna's CEO, who has been on this podcast before, a great guy, said in a blog post that they published on Klarna's website that they now have AI customer service agents handling 2.3 million chats per month, which is equal to having to hire 700 full-time people to do that. So they've already been able to save on 700 customer service people by having AI agents do that.
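To put that figure in perspective, here is a quick back-of-the-envelope sketch in Python; the arithmetic and the assumed 21 working days per month are my own illustration, not numbers from Klarna's post.

```python
# Rough implied workload behind the figure above: 2.3 million chats per month
# described as equivalent to 700 full-time agents. The working-days figure is
# an assumption for illustration only.
chats_per_month = 2_300_000
full_time_agents = 700
working_days_per_month = 21  # assumed

chats_per_agent_per_month = chats_per_month / full_time_agents
chats_per_agent_per_day = chats_per_agent_per_month / working_days_per_month

print(f"~{chats_per_agent_per_month:,.0f} chats per agent per month")
print(f"~{chats_per_agent_per_day:.0f} chats per agent per working day")
```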
And they actually got rid of those 700 jobs, right? I don't have that information in front of me, but I'll have a look. I'll throw it up on screen for anyone that wants context on that. But that's already happening.
This isn't hypothetical or something.
And these aren't high paid people in
every case. We've done something similar, by the way. Internally, we've replaced that function by about 70%.
Yeah. I mean, our company, we're 65 people, and we make millions per head. So, it's a... Are you going
to need to hire more people to get up to
I think so, but we're hiring slowly. We're using customer support AI, and that meant we need fewer customer support people, and we're trying to leverage AI as much as possible. The person in HR at Replit writes software using Replit. So, I'll give you an example.
She needed org chart software, and she looked at a bunch of them, got a lot of demos, and they were all very expensive and missing the kind of features that she wanted. For example, she wanted version control. She wanted to know when something changed and
to go back in history. She went into Replit, and in three days she got exactly the kind of software that she wanted. And what was the cost? Perhaps $20, something like that, $20 or $30, once. And how many employees in HR do we need? Right now we have two, and if they're highly levered like that, maybe we do not need a 20-person HR team.
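For anyone wondering what "org chart software with version control" boils down to, here is a minimal sketch in Python of the general idea. The structure and names are hypothetical; it is not the app she actually built on Replit.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Employee:
    name: str
    manager: str | None = None  # None for the top of the chart

class OrgChart:
    """A tiny org chart that records every change, so you can see when
    something changed and walk back through the history."""

    def __init__(self):
        self.people: dict[str, Employee] = {}
        self.history: list[tuple[datetime, str]] = []  # the "version control" part

    def _log(self, event: str) -> None:
        self.history.append((datetime.now(), event))

    def add(self, name: str, manager: str | None = None) -> None:
        self.people[name] = Employee(name, manager)
        self._log(f"added {name}, reporting to {manager}")

    def move(self, name: str, new_manager: str) -> None:
        old = self.people[name].manager
        self.people[name].manager = new_manager
        self._log(f"moved {name} from {old} to {new_manager}")

# Hypothetical usage
chart = OrgChart()
chart.add("CEO")
chart.add("HR Lead", manager="CEO")
chart.move("HR Lead", new_manager="CEO")
for timestamp, event in chart.history:
    print(timestamp.isoformat(timespec="seconds"), event)
```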
On this point of meaning, I've heard so many billionaires
in AI describe this as the age of
abundance, and I'm not necessarily sure if abundance is always a great thing, because when we look at mental health and at how people derive their meaning and their purpose in life, much of it is having something to strive towards and some struggle in a meaningful direction. And this is
maybe adjacent but when there was a
study done I think it was in Australia
where they looked at suicide letters and
the sentiment of the men in those letters was that they didn't feel worthy, they didn't feel like they
were worth it. They didn't feel like
they were needed by their families. And
this is much of what caused their
psychological state. And I wonder, in a world of abundance, where a lot of these AI billionaires are telling us that we're going to have so much free time and we're not going to need to work, whether there is going to be a crisis of meaning, a mental health problem. I mean, there already is. And
it doesn't require AI and it's going to
get worse. I don't know what to do about
it because essentially as human beings
we are built like all organisms to find
opportunity and figure out how to
exploit it. That's what we do. And the
world you're describing is really the
opposite of that. It's one where you're
effectively having your biological needs
at the physiological level satisfied and
there isn't an obvious place for your
spare time if that's what you end up
with to be utilized in something that
you know there's no place to strive and
I do imagine, almost at best, what would happen is you have people who are being sustained by a universal basic income and then parasitized. Whatever currency they have to spend, somebody will be targeting it, and they will be targeting it with an AI-augmented system that spots their defects of character. I mean, again,
we're already living in this world, but
it will be that much worse when the AI
is figuring out, you know, what kind of
porn to target you with specifically.
That's uh it's a nightmare scenario. And
I do think it would be worth our time as
a species to start considering if we are
about to find ourselves in this
situation and we find some way of
dealing with the basic needs of the
large number of people who are going to
be
sidelined. What would a world have to
look like in order for them to have real
meaning? Not pseudo meaning, not
something that you know superficially,
you know, a video game is not meaning
even if it feels very meaningful in the
moment. I I think that would be a a
worthy investment for us to figure out
how to produce it. But frankly, I'm not
expecting us to either have that
conversation or get very far down that
road. I think it's much more likely that
we will squander the wealth dividend
that will be produced by by AI.
Interestingly, you also see in Western
countries that when we get more
abundance, we start having fewer kids.
And we're already seeing this sort of
population decline in the Western world,
which is kind of scary. I think it's
often associated with affluence: the more money someone makes, the less likely they are to want to have children, and the more they try to protect their freedoms. But also on this point
of AI: relationships are hard. You know, my girlfriend is happy sometimes and not happy other times, and I have to go through that struggle with her of working on the relationship.
Children are hard. And if we are optimizing ourselves... much of the reason that I sustain the struggle with my girlfriend is, I'm sure, for some evolutionary reason: because I want to reproduce and I want to have kin. But if I didn't have to deal with the struggle that comes with human relationships, romantic or platonic, there's going to be a proportion of people that actually choose that outcome, and I wonder what's going to happen to birth rates in such a scenario, because
we're already struggling. We're already
in a situation where we've gone from about five children per woman in the 1950s to about two in 2021. And we're seeing a decline. If you look at South Korea, their fertility rate has fallen to 0.72, the lowest recorded globally. And if this trend continues, the country's population could halve by 2100.
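As a rough illustration of why a fertility rate that low implies that kind of decline, here is a back-of-the-envelope sketch; the simplification (ignoring migration and age structure, assuming 30-year generations) is mine, not the projection being cited.

```python
# Each generation of newborns is roughly TFR / 2.1 the size of the previous one,
# where 2.1 is approximate replacement-level fertility. Migration and the age
# pyramid are ignored; this is an illustration only.
tfr = 0.72
replacement = 2.1
generation_ratio = tfr / replacement             # ~0.34
years_per_generation = 30                        # assumed
generations_by_2100 = 75 / years_per_generation  # ~2.5 generations from now

cohort_share = generation_ratio ** generations_by_2100
print(f"Newborn cohorts shrink to ~{cohort_share:.0%} of today's size by 2100")
# Total population falls more slowly, because today's adults age through the
# pyramid first, which is broadly consistent with "could halve by 2100".
```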
So yeah, relationships, connections... and also I guess we've got to overlay that with the loneliness epidemic. They promised us social connection
when social media came about, when we
got Wi-Fi connections, the promise was
that we would become more connected. But
it's so clear that because we spend so
long alone, isolated, having our needs
met by Uber Eats drivers and social
media and TikTok and the internet, that
we're investing less in the very
difficult thing of like going and making
a friend and like going and finding a
girlfriend. Young people are having sex
less than ever before. Everything that
is associated with the difficult job of
making real-life connections seems to be falling away.
I will make the case that everything we've discussed here, all the negative things around loneliness, around meaning, they're already here. And I don't think blaming technology for all of it is the right thing. I think there are a lot of things that happened because of existing human impulses and motivations.
Well, I wanted to go back to where
you started because I do think that this
maybe is the fundamental question. Why
is it that we are already living in a
world that is not making us happy? And
is that the responsibility of
technology? And I don't think it's
exactly technology. Human beings, among our gifts, are fundamentally technological, whether we're talking about quantum computing or flint-knapping an arrowhead. What has happened to us that has
created the growing, spreading, morphing
dystopia is a process that Heather and I
in our book, A Hunter-Gatherer's Guide to the 21st Century, call hyper-novelty. Hyper-novelty is the fact of
the rate of change outpacing our
capacity to adapt to change. And we are
already well past the threshold here
where the world that we are young in is
not the world that we are adults in. And
that mismatch is making us sick across
multiple different domains. So the the
question that I ask is is the change
that you're talking about going to
reduce the rate of change in which case
we could build a world that would start
meeting human needs better open
opportunities for pursuing meaningful
work. or is it going to accelerate the
rate of change which is in my opinion
guaranteed to make us worse off. So if
it was a one-time shift, right, AI is
going to dawn. It's going to open all
sorts of new opportunities. There's
going to be a tremendous amount of
disruption, but from that we'll be able
to build a world. Is that world going to
be stable or is it going to be just, you
know, one event horizon after the next?
If it's the latter, then it effectively
says what it does to the humans, which
is it it's going to dismantle us. When I
look out at society, I go, okay, it's having a negative impact. When I look at individual use cases, it's having a profoundly positive impact. Including for me, it's having a very positive impact. So, it's one of these things where I wonder: what is it that we need to teach people at
school so that they understand the world
that we're going into? Because one of
the biggest issues that we're having is
that we're sending kids to school with
this blueprint, this template that
they're going to have this long arc
career that no longer exists that
essentially we're treating them like
learning LLMs. And we're saying, "Okay,
we're going to prompt you. You're going
to give us the right answer. you're
going to hallucinate it if possible. And
you know, and and then we go, "Okay, now
go off into the world." And they go,
"Oh, but wait a second. I don't know how
money works. I don't know how society
works. I don't know how my brain works.
I don't know how I'm meant to handle this novelty problem. I'm not sure how to
approach someone in a in a social
situation and ask if they want to go on
a date." Um so all the important things
that actually are the important
milestones that people want to be able
to hit and that technology can actually
have an impact on we get no user manual.
So I think one of the biggest things that has to happen is we have to equip young people all through school, to actually prepare them for the world that's coming, or the world that's here.
Well, on the one hand, I think you outline the problem very well. Effectively we have a model of what school is supposed to do that at best was sort of a match for the 50s or something like that, and it woefully misses the mark with respect to preparing people for the world they actually face. If we were going to prepare them, I
would argue that the only toolkit worth
having at the moment is a highly general
toolkit the capacity to think on your
feet and pivot as things change is the
only game in town with respect to our
ability to prepare you in advance. Maybe
the the other auxiliary component to
that would be teaching you what we know
which is frankly not enough about how to
live a healthy life. Right? If we could
if we could induce people into the kinds
of habits of behavior and the
consumption of food and then train them
to think on their feet, they might have
a chance in the world that's coming. But
uh the fly in the ointment is we don't
have the teachers to do it. We don't
have people who know. And that is the
question is could the AI actually be
utilized in this manner to actually
induce the right habits of mind for
people to live in that world. I spent a lot of time in education technology. One thing that is, as we say on the internet, a black pill about education in general, about education intervention, is there's a lot of data showing there are very few interventions you can make in education to generate better outcomes. And so
there's been a lot of experimentation around pedagogy, around how to configure the classroom, that has resulted in very marginal improvements. There's only one intervention, and this has been reproduced many times, that creates two-sigma, two-standard-deviation, positive outcomes in education, meaning you do better than roughly 98% of everyone else, and that is one-on-one tutoring. I thought so, I was going to say smaller classrooms and personalization. One-on-one tutoring, yeah. And by the way, someone also did a survey of all the geniuses, the great minds of the world, and found that they all had one-on-one tutoring. They all had someone in their lives who took an interest in them and tutored them.
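For context on where the "two sigma" figure lands, here is a minimal sketch of the arithmetic, assuming roughly normally distributed outcomes as in Bloom's original framing; it is an illustration, not the study itself.

```python
from statistics import NormalDist

# Share of a comparison group that a student two standard deviations above the
# mean would outperform, assuming roughly normal outcomes.
effect_in_sigmas = 2.0
share_outperformed = NormalDist().cdf(effect_in_sigmas)
print(f"~{share_outperformed:.1%} of the comparison group")  # ~97.7%, usually quoted as "about 98%"
```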
So, what can create one-on-one tutoring
opportunity for every child in the
world? AI. AI. My kids use it, and it's incredible. Yeah. As in, they're interacting and it's adapting to their speed. Yes. And it's giving
them different analogies to work with.
So, like, you know, my son was learning
about division and it's asking him to
smash glass and how many pieces he
smashes it into with this hammer and,
you know, and it's saying things like,
"No, Xander, go for it. Really smash
it." And um and he's loving it, right?
Is that synthesis? Yeah. Yeah. I'm an
investor in this company. Oh, well, it
was it was it's great to watch that
simulated one-on-one tutoring because
it's talking to him. It's asking him
questions. Brett, you're an educator.
you uh spent much of your life teaching
people in universities. How do you
receive all of this? Well, on the one
hand, I agree that the uh the closer to
one to one you get, the better. But I
also personally believe that zero to one
is best. And what I mean by that
is part of what's gone wrong with our
educational system is that it is done
through abstraction.
And effectively the arbiter of whether
you have succeeded or failed in learning
the lesson is the person at the front of
the room. And that's okay if the person
at the front of the room is truly
insightful. And it's terrible if the
person at the front of the room is
lackluster, which happens a lot. So what
doesn't work that way is interaction
with the physical world in which nobody
has to tell you whether you've succeeded
or failed. If you're faced with an
engine that doesn't start, you can't
argue it into starting. You have to
figure out what the thing is that has
caused it to fail, and then there's a
great reward when you alter that thing
and suddenly it fires up. So, I'm a big
fan of being as light-handed as possible
and as concrete as possible in teaching.
In other words, uh, when I've done it,
and not just with students, but with my
own children, I like to say as little as
possible, and I like to let physical
systems tell the person when they've
succeeded or failed. And that creates an
understanding. You can extrapolate from
one system to the next. And you know
that you're not just extrapolating from
one person's misunderstanding. You're
extrapolating from the way things
actually work. So, I don't know if AI
can be leveraged in that context. My
sense is there's probably a way to do
it, but one would have to be deliberate
about it, especially with robotics and
humanoid robots. Actually, that is the place where you can do this, with robotics, it seems to me. Yeah. Well, robotics
will teach you the physical computing
part of it. And then the question is how
do you infuse this with AI so that it provokes you out of some eddy where you're caught and moves you into the ability to solve some next-level problem that you wouldn't have found on your own. What do you think should be taught in
the classroom with everything that you
now know? Well, you're all fathers here.
You all have your own children. So, it's
a good question for you. How old are
your kids? How old are your kids? Uh
three and five. 19 and 21 and six,
seven, and 10. My children are very
young, but we already do use AI, and I sit down with them in front of Replit and we generate ideas and make games. And I would say, you know,
what Brett said about generality is very
important. The ability to pivot and kind
of learn skills quickly. Being
generative is very very important.
Having a you know a fast pace of
generating ideas and iterating on those
ideas. We sit down in front of ChatGPT and my kid imagines a scenario: oh, what if there's a cat on the moon, and then what if the moon is made of cheese, and what if there's a mouse inside it? And so we keep generating these variations of these different ideas, and I find
that it makes them more imaginative and creative. Rule
number one that I tell my kids is stay
away from porn at all costs. I'd rather
you have a drug problem than a porn
problem. And I actually mean that. I
think it's I think porn is more
dangerous to the to the human being as
as bad as a drug problem is. But when we
get to the question of how to confront
the world and uh the things that you're
going to be um expected to to do in the
workplace and all of that, my point to
them is you are facing the uh the
dawning of the age of complex systems
that you are going to have to interact
with. And in the age of complex systems,
you have to understand that you cannot
blueprint a solution. And you have to
approach these systems with an upgraded toolkit of humility, because the ability of a complex system to do something you don't predict is much greater than that of a merely complicated system. So you have to
anticipate that and be very sensitive to
the fact that what you intended to
happen is not what's going to happen. So
you have to monitor the unintended
consequences of whatever your action is
and that there are really two tools
which work. One of which you just
mentioned which is the prototyping. You
prototype things. You don't imagine that
I know the solution to this and I'm
going to build it. You imagine I think
there's a solution down there. I'm going
to make a proof of concept and then I'm
going to discover what I don't know and
I'm going to make the next version.
Discover what I don't know and
eventually you may get to something that
actually truly accomplishes the goal. So
prototyping is one thing.
And also, instead of using the blueprint as the metaphor in your mind, navigate; you can navigate somewhere. The way I think of it is, a surfer is in some ways mastering a complex system, but they're not doing it by planning their way down the wave. You can't do that. What you can do is be expert at absorbing feedback and navigating your way down the wave, and that's the right approach for a complex system. Nothing
else is going to work. And so I guess the final piece is: general tools always, no specialization. This is the age of generalists, so invest in those tools and they will pay off.
So the guiding philosophy for me is uh
to produce high agency generalists. So
um ultimately I want them to be
motivated self-starters and have a wide
general toolkit. I imagine them very
much what you imagine which is
instructing robots, instructing agents,
coming up with ideas. Um, and I imagine
them having a very high velocity life
where they may be writing a book,
organizing a festival, having a podcast,
starting a business, and being part of
somebody else's business all at once, as they are of the ADHD generation. Yeah. Right.
Exactly. Um, so the high agency
generalist is the kind of guiding
philosophy. Some of the things that we
do is like we do chess, we do Brazilian
jiu-jitsu, we do dancing, we do acting
classes, playing in nature, uh
entrepreneurship, understanding that you can start a lemonade stand. We just did lemonade stands, which was amazing. We sold lots of lemonade on the street. So
those kind of things and jumping from
one thing to the next thing, but also
trying to avoid too many screens and
forcing them into making stuff from
what's going on around the house. Um,
some distinctions that we try and give
them is the difference between creating
and consuming because I think AI has
this superpower of making you a hyper
consumer or a hyper creator. Um, and if
you don't understand the distinction
between creation and consumption, you
end up falling into the consumption
trap, whether it be porn or just news or things that feel like you're being productive, but you're
that be the most successful AI? the one
that plays with my dopamine the most.
Yeah. And and makes you and makes you
think that you're achieving something
when you're actually just consuming
something. So trying to give them the
understanding that there is this
difference in their life between
creation and consumption and to be on
the creation side. I started my first
business at 12 years old and I started
more businesses at 14, 15, 16, 17 and
18. And at that time, what I didn't
realize is that being a founder with no
money meant that I also had to be the
marketer, the sales rep, the finance
team, customer service, and the
recruiter. But if you're starting a
business today, thankfully, there's a
tool that wears all of those hats for
you. Our sponsor today, which is
Shopify. Because of all of its AI
integrations, using Shopify feels a bit
like you've hired an entire growth team
from day one, taking care of writing
product descriptions, your website
design, and enhancing your product images, not to mention the bits you'd
expect Shopify to handle, like the
shipping, like the taxes, like the
inventory. And if you're looking to get
your business started, go to
shopify.com/bartlet and sign up for a $1
per month trial. That's
shopify.com/bartlet.
The thing that we I think all agree on
is that this is inevitable. Do you agree
with that, Brett? I think it's sad that
it is inevitable, but at this point it
is. What part of it do you find sad?
We have squandered a long period of
productivity and peace in which we could
have prepared for this moment. and our
narrow focus on competition
has created
a a fragile world that I'm afraid is not
going to survive the disruption that's
coming. And it didn't have to be that
way. This was foreseeable. I mean,
frankly, the movie 2001, which came out
the year before I was born, anticipates
some of these problems. And, you know, we treated it too much like entertainment and not enough like education. So we are now, you know,
we've had the AI era opened without a
discussion about its implications for
humanity. There is now for game
theoretic reasons no way to slow that
pace because as you point out if we
restrain ourselves we simply put the AI
in the hands of our competitors. That's
not a solution. So, I don't advocate it,
but there's a lot more preparation we
could have done. We could have
recognized that there were a lot of
people in jobs that were uh about to be
obliterated and we could have thought
deeply about what the moral implications
were and what the solutions at our
disposal might have been. And having not
prepared, it's going to be a lot more
carnage than it needed to be. Amjad, I
heard you say a second ago that what we
should be talking about is how we deal
with job displacement. Do you have any
theories? If you were prime minister or president of the world, and your job was to deal with job displacement, let's just say in the United States, how would you go about that? The first thing I would do is
teach people about these systems, whether it's programs on TV or outreach or what have you, just trying to get people to understand how ChatGPT works, how these algorithms work. And as the new jobs arrive, I think there's going to be an opportunity for people to be able to see that this job requires this set of skills, and I have this kind of experience, and although my experience is potentially outdated, I can repurpose that experience to do that job. I'll give you an example:
a teacher, his name is Adil Khan, started using what was at the time GPT-3 and felt like it does amazing work as a tool for teachers, or even potentially a teacher itself. So he learned a little
bit of coding, and he went on Replit and built this company, and just two years later they're worth hundreds of millions of dollars.
Obviously, not everyone will be able to create businesses of that scale, but because you have experience in a certain domain, you'll be able to build the next iteration of that using technology. So even if your job was displaced, you'll be able to figure out what potentially comes after it. So I think people's expertise that they've built, I don't think it all goes to waste. Even if your job went away,
you can never really predict what jobs
are coming. I mean, I think of this
crazy situation where I tell my
grandfather, what is a personal fitness
trainer? And his mind would be
blown by this idea that well, okay, I
don't really want to go to the gym, so I
have to make an appointment and pay
someone to go to the gym and meet with
me there. And then he stands there and
tells me to lift heavy things that I
don't really want to lift. And then he
counts them and tells me that I've done
a good job and then I put the heavy
things down and then at the end of that
I feel really good and I pay him a bunch
of money. My grandfather would be like, what on earth, have you been scammed? So we can never predict what this future of jobs will look like.
Even just 20, 30, 40 years apart, jobs rapidly and convincingly morph into something else. I think it's very
dangerous the idea that we need to focus
on skills. I think the future is not in
skills. Skills are being replaced. It's
this idea that the education system has
to stop being compartmentalized and has
to be a lifelong learning approach. The
department of education needs to be
seeing people as lifelong learners who
are constantly disrupted and need
re-education. Interesting. That's going to be a thing. The Department of
Education needs to start as a kid and go
right through to maybe 70. Does the
Department of Education have a role
anymore at all? Depends on your
definition of education. I think if you're trying to teach kids to, you know, remember facts and figures from a history book, then no. But if it's about
coaching, mentoring, being displaced,
finding the next thing, and maybe if
it's AI-driven and all of those kinds of
things, then it's a different paradigm
shift around what education is and what
its purpose is. And if we see it as a fluid thing, where we weave into an opportunity and then weave back into education, spotting a new opportunity and then back again. We're learning not skills but tools. So it's a tools-based education as opposed to a skills-based education.
The purpose of education for most of
human history was about virtue, about
becoming a great person who had good
judgment and who had good values. And we
don't really do much of that anymore.
But I think if we get back to asking the question, what is the purpose of education, where does it fit in our lives, and over what time frame does it run, then we can trust that people are going to come up with weird and wonderful jobs. You know, this sounds crazy, and this is a weird analogy, but my cat is incredibly happy. How
do you know? Well, it demonstrates all the characteristics of being a happy cat, and it lives in a world of
super intelligence as far as it's
concerned. So, there's this house and
food just magically happens. It has no
idea that there's this Google calendar
that runs a lot of things that happen
around it. The food gets delivered. The
money is magically made by something
that is inconceivably more intelligent
than the cat. And yet the cat has
evolved to be living this life of
purpose and meaning inside the house.
And as far as it's aware, it's got a
great life. But you have the power at
any moment if you're having a bad day to
do something not so pleasant to that
cat. And it can't really reciprocate
that. Exactly. But but what's in it for
me to hurt the cat?
Because the in this analogy, you might
want to move house and the landlord
doesn't allow cats. So you've got a
decision to make. Yeah, there are things that the cat is highly disrupted by through no fault of the cat. I get it.
But as far as cat existence goes and the
and the history of cats, if you were to
ask that cat, do you want to trade
places with any of the other cats that
came before you? It would probably say,
I don't want to take the risk because
all the other cats had to fend for
themselves in a way that I don't have
to. It's very possible that we end up living a life a lot like the house cat's, in the sense that from our perspective we're having very interesting lives, with purpose and meaning, and there's just this massive higher intelligence that's running stuff, and we don't know how it works, but it doesn't really matter how it works. We are the beneficiaries of it, and it's doing important things, and we're enjoying being house cats in its life.
I have a few things to say about this.
One, I'm pretty sure your cat's not as
impressed with your capacity as you are
or as you think he is. Um I just know
cats well enough to be pretty sure of
that. Oh, it looks down on me. Yeah, you're right. I think it's a fair point that there is such an existence, and actually, you know, pets with loving owners really do have it pretty great. And I would also point out that
there's a way in which we already are
this way. Most of us do not understand
the process that results in electricity
coming out of the walls of our house or
the water that comes out of the tap. And
we're pretty much okay with the fact
that somebody takes care of that and we
can busy ourselves with whatever it
might be. But the place that I find
something troubling in your description
is that you say that the nature of what
we do is to deal with the fact that jobs
are always being upended. That's a very
new process. That is the hyper-novelty
process. It used to be that it was only
very rarely that a population had a
circumstance where you didn't
effectively do exactly what your
immediate ancestors did. Right? Um, in
general, you took what the jobs were,
you picked something that was suited to
you, and you did that thing
intergenerationally.
Intergenerationally. And the point is,
we've now gotten to the point where even
within your lifetime, what is possible
to get paid for is going to shift
radically in ways that nobody can
predict. And that is a dangerous
situation. Like probably every two
years, like two or three years, right?
And so maybe there's some model by which
we can surf that wave and you can learn
a generalist toolkit and you know that
your survival doesn't depend on your
being able to you know switch up every
two years and never miss a beat or maybe
we can't but I do think it is worth
asking the question if the rate of
technological change has taken us out of
the normal human circumstance of being
able to deduce what you might do for a
living based on what your ancestors did
and put us in a situation where what
your ancestors did is going to be
perfectly irrelevant no matter what. But
that is effectively a choice that has
been made for us. And we could choose to
slow the rate of change so that we would
live in some kind of harmony where our
developmental environment and our adult
environment were a match. Now, as a
biologist, I would argue if we don't do
something like that, this is a matter of
time. Yeah. How would we change? How do
we slow the rate of change? Well, I mean, you can be the Amish, right? You can be the Amish and live in your own communities, and I would assume some people would want that. Well, you know, when
Heather and I wrote our book, I wanted
the first chapter to be, are the Amish
right? And the answer is they can't be
exactly right because they picked an
arbitrary moment to step off the
escalator. But are they right that
there's something dangerous about this
continuing pattern of technological
change? Clearly they are. What do the
Amish do for anyone that doesn't know?
The Amish live as if it was, what, 1850 or something. They don't use cars. I think they do have phones, but they do not have electricity. Basically, they voluntarily accept a technology limit; they're basically a tech-lite community, and they have turned out to fare surprisingly well against many of the things that have upended modern life. One of them, right? Yeah, COVID. They did beautifully. Quite happy people, very low autism rates. They have all sorts of advantages,
so anyway I'm not arguing that we should
live like the Amish I don't see that but
I do think the idea that they had an
insight which was you need to step off
that escalator because you're just going
to keep making yourselves sicker is
probably right. Now, maybe this is a one-time shift. We've stepped over the
event horizon. We are going to be living
in the AI world. And maybe if we're
careful about it, we can figure out how
to turn that landscape of infinite
possibility that you're describing into
a place that doesn't change, where you always have the opportunity to decide what needs to be done, but where living over that event horizon is not an ever-changing process. It's just the next frontier. I do want
to also propose or ask the question when
we talk about our hyperchanging world.
Isn't it harder for older people to
learn because of the the way that the
brain works in terms of processing speed
and memory flexibility?
So I was wondering if you're going to get a situation where, like, my father, because of his brain and the reduced memory flexibility and processing speed that happens when you're older, is going to struggle significantly more than my niece, who seems to learn anything. I mean, my niece knows five languages and she's seven, or something crazy like that, five languages. But the brain is much more plastic at that age, isn't it? And that goes back to our evolutionary psychology, our evolutionary history, which you know much more about than I do: we're meant to learn our lessons when we're young and use that information for a lifetime. But if
that information is changing quickly,
well, that's I mean this is exactly what
I'm pointing to. It is not normal for
your developmental environment to fail
to prepare you for your adult
environment. The normal thing is as a
young person, you take on ever more of
the responsibilities of the adult
environment. And then at some point, you
know, in a properly functioning culture,
there's a right of passage. You go into
the bush for 10 days, you come back with, you know, a large game animal, and now you're an adult and
you take that program that you've been
building and you activate it. And that
is normal. And, you know, you're a lot
happier person. You're a lot more
fulfilled if your life has that kind of
continuity to it. And you know, I'm not
against the idea that we have enabled
ourselves to do things that couldn't otherwise be done, but we
have also harmed ourselves gravely. And
I would like to somehow pry apart our
ability to improve our well-being from
our self-inflicted wounds that come from
this
neverending pace of change. And I don't
know if it's possible, but I think it's
a worthy goal. Something amusing, I don't know if it's exactly a counterpoint, but during COVID especially, and through the recent technological change, some people have started living closer to the more ancestral environment. So
people whose jobs are online, some of my friends went and built communities, like collectives, where they live, and they create farms and eat what they grow, and they have, like, an email job. They do their email jobs for five hours and go out, and they all have children, and it's a fascinating life. And there was so much
rethinking in Silicon Valley about how we live. And there's a bunch of startups that are trying to create cities where they're like, okay, we know that we're suffering because our cities are not really walkable. And there are so many reasons why we're suffering. First, we're not getting the movement. Second, there's a social aspect of a walkable city, where you're able to interact with people; you'll make friends by just happening to be in the same place as others. So let's actually build walkable cities, and if we want to get around faster, we'll have these self-driving cars on the perimeter of the city going around. And I think there are ways in which technology can afford us to live in a way that reverses this, I guess, in a more local way.
I I like that vision, but I also am
aware that there's a different vision,
right? You see people in Palo Alto, for
example, actually exerting, you know,
very strong controls on how much their
children are exposed to, uh, you know,
to phones. And I live in Palo Alto.
Yeah. So, so you see that. On the other
hand, what I am worried about is that
the elites of Palo Alto don't realize
that what they're doing is they're
figuring out how to reduce the harm to
their own families as they're exporting
the harm to the world of these
technologies that for everybody else are
unregulated. And so the question is, can
we bring everybody along? If the AI
revolution is going to alter our
relationship to work and everything
else, can we bring everybody along so
that at the end of this process, instead of saying, well, it's a shame that three billion people were sacrificed to this transition, but progress is progress, we can really say, well, we figured it out, and everybody now is living in a style that is closer to their programming and closer to the expectations of their physical bodies.
You know, if that were true, then I
would I would be I would love to be
wrong in my fears about what's coming.
Um, but unfortunately, the market is not
going to solve this problem without our
being deliberate about forcing it to.
What's your biggest fear? Like when you
say my fears about what's coming, what
do you like, what's the picture that comes to your mind? Oh, it's a whole different topic, actually. My fear stemming from technology and AI is that this is a runaway process, and that runaway process is going to interface very badly with some latent human programs: that in effect
the need for
workers largely disappears and the
people who are at the head of the
processes that result in that elimination of the need for workers start talking about useless eaters.
Maybe they come up with a new term this
time. Thin the herd. Yep. Or they allow
it to be thinned or something. Right.
I've heard you talk about the five key
concerns you have or the five key
threats you have before. Could you name
those five? So the first one is the one
I worry least about. I don't worry zero
about it, but I worry least about it,
which is the malevolent AI uh that the
doomers are so focused on. The second
one is the idea that you know an AI can
be misaligned not because it has
divergent interest but because it just
misunderstands what you've asked it.
these autonomous agents. You know, the
famous example is you ask them to
produce as many paper clips as possible
and they start liquidating the universe
to make paper clips, and it's a sorcerer's apprentice kind of issue. The third one... actually, all of the remainder of them, I would say, are guaranteed. The third is the derangement of human intellect: we are already living in a world where it's very difficult to know what the facts even mean. Right? The facts are so
filtered, and we are so persuaded by algorithms, that our ability to be confident even in the basic facts, even within our own discipline sometimes, is at an all-time low, and it's getting worse, and
that problem takes a giant leap forward
at the point that you have the ability
to generate undetectable deep
fakes. Right? that's going to alter the
world very radically when the fact that
you're looking at videotape of somebody
robbing a bank doesn't mean that they
robbed a bank or that a bank was even
robbed. So anyway, I call this... We deal with this a lot, by the way. Every single week, I have people that are now basically spending, I'd say, 30% of their time dealing with deepfakes of me doing crypto scams, inviting people to Telegram groups, and then asking them for credit card details. We
had one on X, I think you probably saw it, Dan, didn't you, of me? Someone was running deepfake ads on X of me. And it wasn't just one ad. It was like swatting flies. There were 10 of them. And I reported them to X and there were 10 more. Then the day after there were 10 more. Then the day after there were 10 more. Then it started happening on Meta. So it's a video of me basically
asking you to come to a Telegram group
where people are being scammed and
audience members of mine are being
scammed. And when I send them to Meta,
they thankfully remove them. But then
there's five more. And I went on
LinkedIn yesterday and my DMs are,
"Steve, by the way, there's this new
scam." And I actually at this point I c
I'd need someone fulltime just sending
this over to Meta. I'm the I'm the same
but on a smaller scale. Every week it's
did you really message me on Facebook
asking me for my crypto wallet and blah
blah blah. My least favorite ones are when the single mother messages me
saying that she just paid £500 of her
money and how devastated she is and I
feel this moral obligation to give her
her money back um because she's fallen
for some kind of scam. That was me. It
was my voice. It was a video of me
telling her something. Yeah. And I don't know how you deal with that, but sorry, do continue. Well, I mean, that's actually on the list here: the
massive disruption to the way things
function both because people are going
to be unemployed in huge numbers and
because those who are not abiding by our
social contract are going to find
themselves empowered more than the
people who do. So in this case, not only
is this poor woman, you know, now out
500 bucks for whatever the scam was, but
you've also been robbed whether or not
you pay her back for the thing that she
thought she purchased. Your credibility
is being stolen by somebody and you have
no capacity to prevent it. This has
happened to me also and it is profoundly
disturbing and it is only one of a dozen
different ways that AI enables those who
are absolutely willing to shrink the pie
from which we all derive in order to
enlarge their slice. You know, there
there are innumerable ways that this can
happen and um I think people do not see
it coming. They don't understand how
many different ways they are going to be
robbed every bit as surely as if
somebody was printing money. And then the last one is that this just simply accelerates demographic processes that do potentially result in the unleashing of technologies that pre-existed AI. You know, this can easily result in an escalation into wars that turn nuclear. So
anyway, I think that list could probably
be augmented at this point now that
we've, you know, spent a little time in
the AI era. We can begin to put a little
more flesh on the bones both of what is
possible in this era and what we should
fear. One of those you mentioned: truth, the problem of truth. Would you say, just as a thought experiment, someone today, like an average college-educated person, are they more propagandized or led astray than someone in Soviet Russia?
Well, I don't know because I didn't live
in Soviet Russia, but my understanding
from people who did was that there was a
wide awareness that the propaganda
wasn't true. Doesn't mean they knew what
to believe, but there was a cynicism,
which is one of my fears here, is that
the, you know, you're really stuck
choosing between two bad options in a
world where you can't tell what is true.
You can either be overly credulous and
be a sucker all the time, or you can
become a cynic and you can be paralyzed
by the fact that you just don't believe
anything. But neither of those is a recipe for... Do you think Google search first, and maybe now ChatGPT, has helped people more or less to find truth? I
think it's not ChatGPT exactly, but all of the various AI engines, starting with Google, have briefly enhanced our capacity to know what's true, because in fact they allow us to see through the algorithmic manipulation; because the AI is not well policed, you can get it to recognize patterns that people will swear are not true. And so anyway, a lot of us have found it useful in just simply unhooking the gaslighting. So that's been very
positive. But I also remember the early
days of search and search used to be a
matter of there are some pages out
there. I don't know where they are.
Here's a mechanized something that's
looked through this stuff and just point me in the direction of things that contain these words, right before the
algorithmic manipulation started
steering us into believing pure nonsense
because somebody who controlled these
things decided it was useful for us to
believe those things. So my guess is at
the moment AI is enhancing our ability
to see more clearly but that really
depends on some kind of agreement to
protect that capacity that I'm not aware
of us having. Are you implying there that AI will protect us from AI? I.e., for the woman that got scammed in my audience, the platforms would have a tool built in which would be able to identify quickly that that is not me and that the ad has been launched by someone in another country, potentially, and then also, when she starts being asked for her credit card details in such a way on Telegram 10 minutes later, the system will be able to understand that this is probably a scam at that touch point too, and it will also be the defense, not just the offense.
First question: is Meta incentivized to solve this problem? Yes.
Yes. And so Meta is probably actively
working on AIs and again it's going to
be a cat and mouse game like every abuse
that happens out there. So I I think
that the market will naturally respond
to things like that in the same way that
you know we installed antiviruses as you
know annoying as they are. I think we'll
install uh AIS on our computers that
will allow us to at least help us kind
of sort the the fake from from the
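As a rough illustration of that on-device "antivirus for scams" idea, here is a minimal, hypothetical sketch in Python. Real consumer protections would rely on trained models and platform signals rather than hand-written rules; every pattern, weight, and threshold below is an illustrative assumption, not anything described in the conversation beyond the general concept.

```python
# A minimal sketch of the "antivirus for scams" idea: a purely hypothetical,
# rule-based scorer that flags suspicious messages on-device. Real defenses
# would use trained models; every rule and weight here is an assumption.

import re
from dataclasses import dataclass

@dataclass
class ScamVerdict:
    score: float          # 0.0 (benign) .. 1.0 (almost certainly a scam)
    reasons: list[str]    # human-readable explanations for the score

SUSPICIOUS_PATTERNS = {
    r"\bcredit card\b|\bcard number\b|\bCVV\b": ("asks for card details", 0.4),
    r"\burgent\b|\bact now\b|\bwithin 10 minutes\b": ("artificial urgency", 0.2),
    r"\bguaranteed returns?\b|\bdouble your money\b": ("too-good-to-be-true promise", 0.3),
    r"\bmove to (telegram|whatsapp)\b": ("pushes you to an unmonitored channel", 0.3),
}

def score_message(text: str) -> ScamVerdict:
    """Accumulate evidence from simple patterns and cap the score at 1.0."""
    score, reasons = 0.0, []
    for pattern, (reason, weight) in SUSPICIOUS_PATTERNS.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            score += weight
            reasons.append(reason)
    return ScamVerdict(score=min(score, 1.0), reasons=reasons)

if __name__ == "__main__":
    msg = ("URGENT: to receive your prize, move to Telegram and send your "
           "credit card number within 10 minutes.")
    verdict = score_message(msg)
    print(f"scam score: {verdict.score:.1f}")  # 0.9 for this sample message
    for r in verdict.reasons:
        print(" -", r)
```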
Well, but let's take the example you gave. Is Meta incentivized to
solve this problem? Superficially, it
seems that it should be, but how many
times in recent history have we watched
a corporation cannibalize its own
business over what at best are the bizarre desires of its shareholders,
right? Why was X throwing off people
with large accounts or Facebook or
Google? It would seem that you would
expect based on the market choosing
search engines or social media sites,
you would expect these companies to be
absolutely mercenary and say, you know,
if Alex Jones has a big audience, who
are we to say? That's what I would have
expected. Instead, you had these
companies policing the morality of
thought even though it reduced the size
of the population using the platforms. I
have a hard time explaining why that
happened, but I have every reason to
expect the same thing will happen with
AI. What are you excited about with AI?
What's your optimistic take?
Because at the start of this
conversation, you said that there's
infinite ways that it could improve our
lives and there's 10 times more ways
that it could hurt our lives. But let's
investigate some of those ways that it
could drastically improve our lives.
There are a couple of different ways. One, we have, as we mentioned before, a dearth of competent teachers and professors.
And that is a problem that will take
three generations at least to solve if
what we're going to do is start tomorrow
and start educating people in the right
way that would make them competent to
stand at the front of a room and
educate. But if we can augment that
process, if we can leverage a tool like
AI so that you know a small number of
competent teachers can maybe reach a
larger number of pupils, that's
plausible I think. Second thing is we
have a tremendous number of problems
that are obstacles to us living well on
this planet that AI might be able to
manage that human intellect alone
cannot. Right? Just in the same way that
you know compute power can calculate
things at a rate that human beings can't keep up with, and there are certain things you
want calculated very well. There are
also some reasoning problems. You could
imagine that instead of having um static
laws that govern behavior poorly because
they get gamed that you could have a
dynamic interaction. You could specify an objective of something like a law and
then you could monitor whether or not a
particular intervention successfully
moved you in the direction that you were
hoping to go or did something
paradoxical which happens all the time
and you could have you could basically
have governance that is targeted to
navigation and prototyping rather than
to specifying a blueprint for how we are
to live. So we wouldn't need
politicians. Um, at the moment we're
stuck with, you know, constitutional
protections that are as good as have been
constructed and still inadequate to
modern realities.
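A minimal sketch of that "dynamic law" idea, assuming you can express an objective as a measurable metric: apply an intervention, then check whether the measurement actually moved toward the target or did something paradoxical. The metric name, numbers, and decision rule below are hypothetical placeholders, not anything proposed in the conversation beyond the feedback-loop concept itself.

```python
# A toy illustration of governance as a feedback loop: declare a measurable
# objective, try an intervention, and keep or roll it back depending on
# whether the outcome moved toward the target. All data here is hypothetical.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Objective:
    name: str
    higher_is_better: bool  # which direction counts as progress

def evaluate(objective: Objective, before: list[float], after: list[float]) -> str:
    """Compare outcomes before and after an intervention against the objective."""
    delta = mean(after) - mean(before)
    moved_toward_target = delta > 0 if objective.higher_is_better else delta < 0
    if moved_toward_target:
        return f"KEEP: '{objective.name}' moved {delta:+.2f} toward the target"
    return f"ROLL BACK: '{objective.name}' moved {delta:+.2f} away from the target (paradoxical effect)"

if __name__ == "__main__":
    # Hypothetical example: a rule intended to raise small-business formation.
    objective = Objective(name="new business registrations per month", higher_is_better=True)
    before_intervention = [95, 101, 98, 104]  # measurements before the rule change
    after_intervention = [88, 92, 90, 85]     # measurements after the rule change
    print(evaluate(objective, before_intervention, after_intervention))
    # -> ROLL BACK: '...' moved -10.75 away from the target (paradoxical effect)
```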
Dan, what are you excited about with AI
from an individual level, but also from
a societal level? Yeah. Well, the big
ones are healthcare and education. I
mean, it's ridiculous that you uh are
sitting there in pain, having had an
MRI, and there just hasn't been someone
to look at that MRI yet. and and tell
you what to do. And that could easily be solved. There are all sorts of healthcare issues like that. And not only that: throughout the entire world there are places that just don't have general practitioners, they don't have medical advisers, and the breakthroughs in global healthcare will be phenomenal and the breakthroughs in global education could be transformational on the planet.
I'm excited at an individual level that
I think the industrial age created a
bunch of jobs that are very dehumanizing
and we've just kind of gotten used to
them and put up with them. The idea that
work should be repetitive and you know
you just repeat the same loop over and over and over again, and over a
10-year period of time you might get you
know graduated up one gear and all that
kind of stuff. I don't think that's very
human. Um the idea that you could be
simultaneously writing a book, launching
a business, running a team, launching a
festival, having an event, that you could actually be doing this
kind of like mini kingdom work where
you've got this little, you know, uh
ecosystem around you of fun things that
you're involved in that is actually made
possible for a vast majority of people
if they embrace these kind of tools. um
you can live an incredibly fulfilling
and amazing and impactful existence, and I
know that I do as a result of having
these tools in my life. Like I'm I'm
doing things that I could have only
dreamed about uh as a kid. And what
would you say to entrepreneurs? I know
you you work with thousands of
entrepreneurs. What are you telling them
in terms of their current businesses or
business opportunities that you're
foreseeing? So I think that small teams
have infinite leverage now and that when
you have a team of say five to 10 people
who share an incredible passion for a
meaningful problem in the world and they
want to see that meaningful problem
solved and they come together in the
spirit of entrepreneurship to solve that
problem. That little 5 to 10 person team
armed with the technology that we now
have available, you can have a big impact. You can make a lot of money. You
can have a lot of fun. you can solve
meaningful problems in the world. You
can scale solutions. You can probably do
more in a three-year window than most
people did in a 30-year career. Uh and
then that little band of 5 to 10 people
could either go together onto a new
meaningful problem or they could disband
and you know work on other meaningful
problems with different teams. In such a
world where you have this sort of
infinite leverage but everyone else has
access to the same infinite leverage.
What becomes the USP? Going back to this
idea of the moat, like what is the thing
of value when we've all got access to
$20 infinite leverage? Well, first of
all, the first thing you need to understand is that this moment in time is the least competitive moment. Like, if you understand how to use these tools, you can start making money tomorrow. I see countless examples of people making
thousands of dollars with these hustles
that I that I talked about or building
businesses that generate millions of
dollars in the first couple of months of
existence. So, I would say start moving
now. Start building things. So, it's an
unprecedented time of wealth creation. Clearly, at some point, as the market gets more efficient and more and more people understand how to use these tools, there's less potential for creating these massive businesses quickly. And we've seen this: at the dawn of the internet, or the dawn of the web, it was a lot easier to create Facebook than it is now. Then we had mobile, and for three, four, five years it was very easy to create massive businesses, and then it became harder. Being just at the edge of what's
possible is going to be very very
important over the next couple years.
And that's that gets me really excited
because the entrepreneurs who are paying
attention are going to be having the most fun, but
they're also going to be able to make a
lot of money. How many applications have been built on Replit to date? So, you know, I can talk about the millions of things that have been built since we started the company, but just since September, when we launched Replit Agent, there have been about 3 million applications built purely in natural language, with no coding at all. Of those, I think 300,000 to 400,000 of them were deployed, meaning the site went live and people are using it as some kind of business or some kind of internal tool. I built one last night
by the way, an internal tool: an application to track how my
kids earn pocket money. Amazing. So, I
just told it that I wanted to track the
tasks that are happening around the house and assign a value to them, and I want to be able to, at the end of the week, push a button and get a summary of how much to pay each child for their pocket money. We are so screwed.
And within 15 minutes, it had created
this application and it was amazing.
Like you could toggle between like
here's the place where you have the kids
and here's the weekly reports and here's
the um how much per task and you can
tick off the tasks or remove tasks or
add tasks. So then I now have this application, which took 15 minutes of just talking about what I wanted, and now I have an application to run the pocket money situation in the house.
And this, by the way, having run an IT agency years ago, is something that we would have charged five to ten thousand pounds, or five to ten thousand US dollars, to create. And how much time? Probably something that would have been a three- to four-week project. And we're at the start of the S-curve now that you're describing. Replit is roughly $20 to $25 a month for the base case, and you did one day of usage, so let's say it cost you a dollar. It cost you minutes and a dollar, and we're at the start of the S-curve. And you talk to it like
you're chatting to a developer. So one
of the things that slows down the
development process is you have to send
the information to a developer and they
need to understand it and then they need
to create something and then come back
to you. This just happens in front of
your eyes uh while you're watching it
and it's actually showing you what's
being built, and it's really wild.
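For anyone curious what the pocket-money tracker described above reduces to, here is a minimal, hypothetical sketch of the data model and the weekly roll-up. The app actually generated on Replit would be a full web application with a UI; every name and value below is an illustrative assumption, not the generated code.

```python
# A minimal, hypothetical sketch of the pocket-money tracker described above:
# household tasks with values, ticked off during the week, then summarized per
# child at the push of a button. This only illustrates the data model.

from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Task:
    name: str
    value: float  # how much this task is worth when completed

@dataclass
class Household:
    tasks: dict[str, Task] = field(default_factory=dict)
    completions: list[tuple[str, str]] = field(default_factory=list)  # (child, task name)

    def add_task(self, name: str, value: float) -> None:
        self.tasks[name] = Task(name, value)

    def tick_off(self, child: str, task_name: str) -> None:
        if task_name not in self.tasks:
            raise KeyError(f"unknown task: {task_name}")
        self.completions.append((child, task_name))

    def weekly_summary(self) -> dict[str, float]:
        """Total pocket money owed to each child for the week."""
        owed: dict[str, float] = defaultdict(float)
        for child, task_name in self.completions:
            owed[child] += self.tasks[task_name].value
        return dict(owed)

if __name__ == "__main__":
    home = Household()
    home.add_task("unload dishwasher", 1.50)
    home.add_task("mow the lawn", 5.00)
    home.tick_off("Alice", "unload dishwasher")
    home.tick_off("Alice", "mow the lawn")
    home.tick_off("Bob", "unload dishwasher")
    print(home.weekly_summary())  # {'Alice': 6.5, 'Bob': 1.5}
```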
This one change has transformed how my
team and I move, train and think about
our bodies. When Dr. Daniel Lieberman came on The Diary of a CEO, he explained how modern
shoes with their cushioning and support
are making our feet weaker and less
capable of doing what nature intended
them to do. We've lost the natural
strength and mobility in our feet and
this is leading to issues like back pain
and knee pain. I'd already purchased a pair of Vivobarefoot shoes. So, I
showed them to Daniel Lieberman and he
told me that they were exactly the type
of shoe that would help me restore
natural foot movement and rebuild my
strength. But I think it was plantar fasciitis that I had, where suddenly my feet started hurting all the time. And after that, I decided to start strengthening my own feet by using the Vivobarefoots. And research from
Liverpool University has backed this up.
They've shown that wearing Vivo barefoot
shoes for 6 months can increase foot
strength by up to
60%. Visit vivo
barefoot.com/doac and use code diary 20
from my sponsor for 20% off. A strong
body starts with strong feet. This has
never been done before. A newsletter
that is run by 100 of the world's top
CEOs. All the time people say to me,
they say, "Can you mentor me? Can you
get this person to mentor me? How do I
find a mentor?" So, here is what we're
going to do. You're going to send me a
question. And the most popular question
you send me, I'm going to text it to 100
CEOs, some of which are the top CEOs in
the world running a hundred billion
dollar companies. And then I'm going to
reply to you via email with how they
answered that question. You might say,
"How do you hold on to a relationship
when you're building a startup? What is
the most important thing if I've got an
idea and don't know where to start?" We
email it to the CEOs. They email back.
We take the top five or six best answers.
We email it to you. I was nervous
because I thought the marketing might
not match the reality. But then I saw
what the founders were replying with and
their willingness to reply and I thought
actually this is really good and all
you've got to do is sign up completely
free. I don't think we've spent a lot of
time talking about autonomous weapons.
This is the thing that really worries
me. And the thing that worries people
about AI is this idea that it is this, you know, emergent system and
there's no one thing behind it and it
can be it can act in a way that's uh
unpredictable and not really guided by
humans. I also think it's true of corporations and governments, and so I
think individual people uh can often
have the best intentions but the
collective can land on doing things in a
way that's harmful or morally repugnant
and I think um we talked about China
versus the US, and that creates certain race dynamics where they're both incentivized to cut corners and potentially do harmful things, and in
the world of geopolitics
um and wars, you know, what really
scares me is is autonomous weapons. And
why does it scare you? Because
uh you know you can imagine
uh autonomous drones being trained on
someone's face, and you can send a swarm of drones, and they can be this sort of autonomous killing
assassination machine and it can sort of
function as a, you know, country-versus-country technology in the world of war, which is still crazy, but it can also become a tool for governments to subjugate their citizens. And people think we're safe in the West, but I think the experience with COVID showed that even the systems in the West can very quickly become draconian. Yeah.
Apparently, I've heard in um Iran that
they have facial recognition cameras that detect whether women are wearing hijabs in their own cars, and it automatically detains the car. If you're
driving and you're not wearing a hijab
and if you're certainly if you're
walking down the street, it just picks
that up and immediately you're you're in
trouble. Like, it acts as a police officer and a judge and, you know, a lawmaker; it's the judge, jury, and executioner essentially, and it just happens instantaneously. What happened in
Canada with the truckers' sort of protest, where they froze their bank accounts by virtue of just being there, just by being in that location. And just to confirm, Iran has implemented a comprehensive surveillance system to enforce its mandatory hijab laws, utilizing various technologies, one of which is cameras and facial recognition. So they've put cameras in public spaces to identify women who are not adhering to the hijab dress code.
Yeah. And just on that, London has just put those facial recognition camera systems into London, and also all throughout Wales, and they're being
rolled out at speed
and like all you would need is a change
of government that wanted to implement
something similar and all the base layer
technologies already in there. It gets a
little bit worse in Iran because they
have this new app called the Nazar app
where the government has introduced the
Nazar mobile application which allows
you as a citizen to report another
citizen who is not wearing their hijab
and it logs their location, the time when they weren't wearing it, and the vehicle license plate. With the crowdsourced data, it can then go after that individual. I would also just point
out that I think we're not being
imaginative enough. I agree with you. I
have the same concern about these
autonomous weapons, but I also think
this doesn't have to occur in the
context of war or even governmental
oppression, that it is perfectly conceivable that effectively this drops the price of an undetectable or an unprosecutable crime. And maybe economic moats return in the
form of people taking out their
competitors or anybody who attempts to
compete with them using an autonomous
drone that can't be traced back to them.
You know, that follows facial
recognition. And you know, you don't
have to kill very many people for others
to get the message that uh this is a a
zone that uh you shouldn't mess around
in. So, I could imagine, you know,
effectively a new high-tech organized crime that runs protection rackets and makes
tons of money and subjugates people who
haven't done anything wrong. I had Mustafa Suleyman on the podcast in 2023, when all of this stuff started kicking off, and he is the CEO of
Microsoft AI. You're familiar with
Mustafa? Of course. Yeah. Um and he one
of the things he said to me at the time
was one of my fears is a tiny group of
people who wish to cause harm are going
to have access to tools that can
instantly destabilize our world. That's
the challenge. How to stop something
that can cause harm or potentially kill.
That's where we need containment. And it
sounds a little bit like what you're
saying, Amjad, that we will now have these tools. You were talking in the
context of the military, but as Brett
said there, even smaller groups of
people that might have been, I don't
know, cartels or gangs can do similar
harm. And at the moment, in terms of
autonomous weapons, both the US and
China are investing heavily in AI
powered weapons, autonomous drones, and
cyber warfare because they're scared of
the other one getting it first. And we talked about how much of our lives run on the internet, but cyber weapons and cyber AI agents that could be deployed to take down China's X, Y, or Zed, or vice versa, are a real concern.
Yeah. Yeah. I I think all of that is is
um is a real concern. You know, unlike
Mustafa, I I don't think containment is
is possible. Part of the reason is this game-theoretic system of competition between the US, China, corporations, and individuals makes it so that this technology is already, you know, already out and really hard to put back in the bag. I did ask him
this question and I remember the answer
because it was such a stark moment for
me. I said to Mustafa, "Do you think
it's possible to contain it?" And he
replied, "We must." So I asked him
again. I said, "Do you think it's
possible to contain it?" and he replied
we must and I asked him again I said do
you think it's possible we must so the
problem with that uh uh chain of
thinking is that it might lead to an
oppressive system
uh there is uh one of the say doomers or
philosophers of of AI which I I respect
his work his name is Nick Bostonramm and
he he uh he's he was trying to think of
ways in which we can contain AI and the
thing that he came up with is perhaps
more oppressive than something that the AI would come up with: a total surveillance state. You need total
surveillance on compute on people's
computers on people's ideas to not
invent AI or AGI. It's like taking the
guns or something or Right. Exactly. I
mean, there's always this problem with containing any sort of technology: you do need oppression and draconian policies to do
that. Are you scared of anything else or
concerned about anything else as it
relates to AI outside of autonomous
weapons? You know, we talked at the
about
the birthight crisis and I think a more
generalized problem there is creating
virtualized environments
uh via VR where everyone is living in
their own created universe and uh it's
so enticing, and it even simulates work and simulates struggle, such that you don't really need to leave this world, and so every one of us will be solipsistic, you know, similar to the
Matrix. Ready Player One. Ready Player
One. We're all kind of plugged in, even worse than Ready Player One. At least that's a massively networked environment. I'm talking about AI
simulating everything uh for us and
therefore you're literally in the
matrix. You know, maybe this is... I was about to say, I had that same thought. I've enjoyed this great simulation. Yes. And so, I mean, are you familiar with the Fermi paradox? No, I'm not. So the Fermi paradox is the question, you know, the professor, his name is Fermi, he asked: if the universe is that vast, then where are the aliens? From the fact that humans exist,
you can deduce that other civilizations
exist. And if they do exist, then why
don't we see them? And then that spurred a bunch of Fermi solutions. So there's, I don't know, you can find hundreds of solutions on the internet. One of them is the sort of house-cat thought experiment, where actually aliens exist, but they kind of put us in an environment, like the Amish at a certain time, and do not expose us to what's going on out there. So to them, we're pets.
Maybe they're watching us and kind of
enjoying what we're doing, stopping us from hurting ourselves. There are so many things, but one of the things that I think is potentially a solution to the Fermi paradox, and one of the saddest outcomes, is that civilizations progress until they invent technology that will lock us into infinite pleasure and infinite simulation, such that we don't have the motivation to go into space to seek out exploration, potentially other alien
civilizations. And perhaps that is a
determined outcome of humanity or like a
highly likely outcome of any species
like humanity. We like pleasure.
Pleasure and pain are the main motivators. And so if you create an infinite pleasure machine, does that mean that we're just at home in our VR environment with everything taken care of for us, literally like the matrix? And the real world would suck in
such a scenario? Yes. Be terrible. I
mean, the other, simpler explanation of the Fermi paradox is that you generate sufficient technology that you can end your species, and it's only a matter of time from that point, which, you know, we can have that discussion about nuclear weapons. We can have it about AI. But if we stay on that escalator, does some technology that we generate ultimately, whatever
allows you to get off the planet allows
you to blow up the planet? There you go.
I want to get everyone's closing
thoughts and closing remarks. And
hopefully in your closing remarks, you
can capture something actionable for the
individual that's listening to this now
on their commute to work or the single
mother, the average person who maybe
isn't as technologically advanced as
many of us at this table, but is trying
to navigate through this to figure out
how to live a good life over the next
10, 20, 30 years. Yeah. Take as long as
you need. I think we live in the most uh
interesting time in human history. So
for the single mother that's listening,
for someone who wouldn't be the stereotype of a tech bro, don't assume that you can't do this stuff. It's never been more accessible. Today, within your work, you can be an entrepreneur. You don't have to take a massive risk where you quit your job and go create a business. There are
countless examples. We have a user who's a product manager at a large real estate business, and he built something that created a 10% lift in conversion rates, which generated millions and millions of dollars for that business, and that person became a celebrity at that company and became someone who is lifting everyone else up and teaching them how to use these tools, and obviously that is really great for anyone's career and you're going to get a promotion. And your example of building a piece of software for your family, for your kids to improve and to learn more and be better kids, is an example of being an entrepreneur in your family. So I really
want people to break away from this
concept of entrepreneurship being this... This is your podcast, The Diary of a CEO. You started this podcast by talking to CEOs, I assume, right? And over time it changed to: everyone can be a CEO, everyone is some kind of CEO in their life. And so I think that we
have unprecedented access to tools for
that vision to actually come to reality.
Well, it is obviously a moment of a kind
of human phase transition. Something
that I believe will be the equal of the
discovery of farming or writing or
electricity. And the darkness that I
think is valid in looking at all of the
possible outcomes of this scenario is
actually potentially part of a different
story as well. In evolutionary biology,
we talk about an adaptive landscape in
which a niche is represented as a peak
and a higher niche, a better niche is
represented as a higher peak. But to get
from the lower niche to the higher
niche, you have to cross through what we
call an adaptive valley. And there's no
guarantee that you make it through the
adaptive valley. And in fact, the
drawing that we put on the board, I
think, is overly hopeful because it
makes it in two dimensions. It looks
like you know exactly where to go to
climb that next peak. And in fact, it's
more like the peaks are islands in an
archipelago that is in fog where you
can't figure out what direction that
peak is and you have to reason out it's
probably that way and you hope not to
miss it by a few degrees. But in any
case, that darkness is exactly what you
would expect if we were about to
discover a better phase for humans. And
I think we should be very deliberate
about it this time. I think we should
think carefully about how it is that we
do not allow the combination of this
brand new extremely powerful technology
and market forces to turn this into some
new kind of enslavement. And I don't
think it has to be. I think the
potential here does allow us to refactor
just about everything. Maybe we have
finally arrived at the place where
mundane work doesn't need to exist
anymore and the pursuit of meaning can
replace it. But that's not going to
happen automatically if we don't figure
out how to make it happen. And I hope
that we can recognize that the peril of
this moment is best utilized if it
motivates us to confront that question
directly.
Each one of us has two parents, four
grandparents, eight great-grandparents,
16, 32, 64. You've got this incredibly long line of ancestors who all had
to meet each other. They all had to
survive wars. They all had to survive
illness and disease. Everything had to
happen for each one of us, each individual; all of this stuff had to happen for us to get here. And if we think about all of those thousands and thousands of people, every single one of
them would trade places in a heartbeat
if they had the opportunity to be alive
at this particular moment. They would
say that their life was struggle,
disease, that their life was a lot of
mundane and meaningless work. It was
dangerous. You know, every single one of
us has probably got ancestors that were
enslaved, probably got ancestors that
died too young, uh, probably got
ancestors that worked in horrific
conditions. We all have that. And they
would all just look at this moment and
say, "Wow." So, are you telling me that
you have the ability to solve meaningful
problems, to come up with adventures, to
travel the world, to pick the brains of
anyone on the planet that you want to
pick the brains of? You can just listen
to a podcast. You can just watch a
video. You can talk to an AI. Like, are
you telling me that you're alive at this
particular moment? Please make the most
of that. Like, do something with that.
You know, you can sit around
pontificating about society and how
society might work. But ultimately, it
all boils down to what you do with this
moment. and solving meaningful problems,
being brave, having fun, making your
little dent in the universe. You know,
that's that's what it's all about. And I
feel like there's an obligation to your
ancestors to make the most of the
moment.
Thank you so much to everybody for being
here. I I've learned a lot and I've
developed my thinking, which is much the
reason why I wanted to bring us all
together because I know you all have
different experiences, different
backgrounds in education. and you're
doing different things, but together it
helps me sort of pass through all of
these ideas to figure out where I land.
And I ask a lot of questions, but I am actually a believer in humans. I was thinking about this a second ago. I was thinking, am I optimistic about humans' ability to navigate this just because I have no other choice? Because, as you said, the alternative actually isn't worth thinking about. And so I do have an optimism towards how I
think we're going to navigate this in
part because we're having these kinds of
conversations and we in history haven't
always had them at the birth of a new
revolution when we think about social
media and the implications that had.
We're playing catch-up with the downstream
consequences. And I am hopeful. Maybe
that's the entrepreneur in me. I'm
excited. Maybe that's also the
entrepreneur in me. But at the same
time, to many of the points Brett's
raised and Amjad's raised and Dan's
raised, there are serious considerations
as we swim from one island to another.
And because of the speed and scale of
this transformation that Brett
highlights and you look at the stats of
the growth of this technology and how
it's spreading like wildfire and how
once I tried Replit, I walked straight
out and I told Cozy immediately I was
like, "Cussie, try this." And she was on
it and she was hooked. And then I called
my girlfriend in Bali who's the breath
work practitioner and I was like, "Type
this into your browser. R E P L I T."
And then she's making these breath work
schedules with all of her clients
information ahead of the retreat she's
about to do. It's spreading like
wildfire because we're internet native.
We were native to this technology. So
it's not a new technology. It's
something on top of something that's
intuitive to us. So that transition, as
Brett describes it, from one peak to the
other or one island to another, I think
is going to be incredibly destabilizing.
And having interviewed so many leaders in this space, from Reid Hoffman, who's the founder of LinkedIn, to the CEO of Google, to Mustafa, who I mentioned, they don't agree on much, but the thing that they all agree on, and that Sam Altman agrees on, is that the long-term future, the long-term way that our society functions, is radically different. People squabble over the short term. They sometimes even
squabble over the midterm or the
timeline but they all agree that the
future is going to look completely
different. Amjad, thank you for doing what you're doing. We didn't get to spend a lot of time on it today, and this is typically what I do here, but your story is incredibly inspiring, incredibly inspiring, from where you came from to what you've done to what you're building, and you are democratizing and creating a level playing field for entrepreneurs from Bangladesh to Cape Town to San Francisco to be able to turn
their ideas into reality and I do think
just on the surface that that's such a
wonderful thing that you know I was born
in Botswana in in Africa and that I
could have the same access to turn my
imagination into something to change my
life because of the work that you're
doing at Replit. And I highly recommend everybody go check it out. You probably won't sleep that night, because for someone like me it was so addictive to finally be able to do that, because it's been the barrier to creation my whole life. I've always had
to call someone to build something. Dan,
thank you again so much because you
represent the voice of entrepreneurs and
you've really become a titan as a
thought leader for entrepreneurs in the
UK and that perspective that balance is
incredibly important. So, I really
really appreciate you being here as
always and you're a huge fan favorite of
our show and Brett, thank you a
gazillion times over for being a human lens on complicated challenges, and you do it with a fearlessness that I think is imperative for us finding the truth in these kinds of situations, where some
of us can run off with optimism and we
can be hurtling towards the mouse trap
because we love cheese and I think
you're an important
counterbalance and voice in the world at
this time. So, thank all of you for
being here. I really really appreciate
it and um we shall
see. These things live forever.
So, this has always blown my mind a
little bit. 53% of you that listen to
this show regularly haven't yet
subscribed to the show. So, could I ask
you for a favor? If you like the show
and you like what we do here and you
want to support us, the free simple way
that you can do just that is by hitting
the subscribe button. And my commitment
to you is if you do that, then I'll do
everything in my power, me and my team,
to make sure that this show is better
for you every single week. We'll listen
to your feedback. We'll find the guests
that you want me to speak to, and we'll
continue to do what we do. Thank you so
much.
[Music]
[Music]