
AI AGENTS DEBATE: These Jobs Won't Exist In 24 Months!


Transcript

3903 segments

0:00

I think a lot of people don't realize

0:01

how massive the positive impact AI is

0:04

going to have on their life. Well, I

0:06

would argue that the idea that this AI

0:07

disruption doesn't lead us to human

0:09

catastrophe is optimistic. For example,

0:12

people are going to be unemployed in

0:14

huge numbers. You agree with that, don't

0:16

you? Yes. If your job is as routine as

0:19

it comes, it's gone in the next couple

0:21

years. But it's going to create new

0:23

opportunities for wealth creation. Let

0:26

me put it to you this way. We have

0:27

created a new species and nobody on

0:29

earth can predict what's going to

0:31

happen. We are joined by three leading

0:33

voices to debate the most disruptive

0:35

shift in human history, the rise of AI.

0:38

And they're answering the questions

0:39

you're most scared about. This

0:41

technology is going to get so much more

0:43

powerful. And yes, we're going to go

0:44

through a period of disruption. But at

0:46

the other end, we're going to create a

0:47

fair world. It's enabling people to run

0:49

their businesses, make a lot of money,

0:51

and you can solve meaningful problems

0:53

such as the breakthroughs in global

0:54

healthcare and education will be

0:56

phenomenal. And you can live an

0:57

incredibly fulfilling existence. Well, I

0:59

would just say on that front, this has

1:01

always been the fantasy of technologists

1:02

to do marvelous things with our spare

1:04

time, but we end up doom scrolling,

1:06

loneliness epidemic, right? Falling

1:07

birth rates. So, the potential for good

1:09

here is infinite and the potential for

1:11

bad is 10 times. For example, there's

1:14

war, undetectable deepfakes and scams.

1:17

So, people don't understand how many

1:18

different ways they are going to be

1:20

robbed. Look, I don't think blaming

1:22

technology for all of it is the right

1:23

thing. All these issues, they're already

1:25

here. You're all fathers here. So, what

1:27

are you saying to your children? Well,

1:29

first of all, this has always blown my

1:32

mind a little bit. 53% of you that

1:34

listen to this show regularly haven't

1:36

yet subscribed to the show. So, could I

1:38

ask you for a favor before we start? If

1:40

you like the show and you like what we

1:41

do here and you want to support us, the

1:42

free simple way that you can do just

1:44

that is by hitting the subscribe button.

1:46

And my commitment to you is if you do

1:47

that, then I'll do everything in my

1:49

power, me and my team, to make sure that

1:51

this show is better for you every single

1:52

week. We'll listen to your feedback.

1:54

We'll find the guest that you want me to

1:56

speak to and we'll continue to do what

1:58

we do. Thank you so much.

1:59

[Music]

2:05

The reason why I wanted to have

2:08

this conversation with all of you is

2:10

because the subject matter of AI, but

2:12

more specifically AI agents, has

2:15

occupied my free time for several weeks

2:18

in a row. And actually, Amjad, when I

2:20

started using Replit, for me it was a

2:24

paradigm shift. There were two paradigm

2:26

shifts in a row that happened about a

2:28

week apart. ChatGPT released their

2:30

image generation model where you could

2:31

create any image. It was incredibly

2:33

detailed with text and all those things.

2:35

That was a huge paradigm shift. And then

2:37

in the same week I finally gave in to

2:39

try and figure out what this term AI

2:41

agent was that I was hearing all over

2:42

the internet. I heard vibe coding. I

2:44

heard AI agent. I was like I will give

2:45

it a shot. Mhm. And when I used Replit,

2:48

20 minutes into using Replit, my mind

2:51

was blown. And I think that night I

2:53

stayed up till 3 or 4 a.m. in the

2:55

morning. For anyone that doesn't know,

2:58

Replit is a piece of software that

3:00

allows you to create software. Mhm. And

3:04

pretty much any software you want.

3:06

So someone like me with absolutely no

3:07

coding skills was able to build a

3:10

website, build in Stripe, take payment,

3:13

integrate AI into my website, add Google

3:16

login to the front of my website and do

3:18

it within minutes. I then got the piece

3:21

of software that I had built with no

3:22

coding skills, sent it to my friends,

3:23

and one of my friends put his credit

3:25

card in and paid. Amazing. So I just

3:27

launched a SaaS company with no coding

3:28

skills.

3:30

To demonstrate an AI agent in a very

3:33

simple way. I used an online AI agent

3:36

called Operator to order us all some

3:38

water from a CVS around the corner. The

3:40

AI agent did everything end to end and

3:42

people will be watching on the screen.

3:43

It put my credit card details in and it

3:45

picked the water for me. It gave the

3:46

person a tip. It put some delivery notes

3:48

in. At some point a guy is going to walk

3:50

in. He has not interacted with a human.

3:53

He's interacted with my AI agent. And I

3:55

just the reason I use this as an example

3:57

is again it was a paradigm shift moment

3:58

for me when I heard about agents. Mhm.

4:00

about a month ago and I went on and I

4:02

ordered a bottle of electrolytes and

4:04

when my doorbell rang, I freaked

4:07

out. I freaked out. But Amjad, who are you

4:11

and what are you doing? So uh I started

4:15

programming at a very young age. You

4:17

know I I built my first business when I

4:19

was a teenager. I used to go to uh

4:22

internet cafes and program there. And I

4:24

realized that they don't have software

4:26

to manage the business. I was like oh

4:27

why don't you create accounts? I don't

4:29

have a server. It took me two years to

4:31

build that piece of software. And that's

4:33

sort of embedded in my mind this idea

4:35

that, hey, you know, there's a

4:38

lot of people in the world with really

4:40

amazing ideas especially in the context

4:42

where they live in that allows them to

4:44

build uh businesses. However, the main

4:49

source of uh friction between an idea

4:52

and software or call it an idea and

4:55

wealth creation, is infrastructure. It is

4:59

physical infrastructure, meaning a

5:01

computer in front of you. It is um an

5:04

internet connection. It is the set of

5:07

tools and skills that you need to build

5:09

that. If we make it so that anyone who

5:12

has ideas, who wants to solve

5:15

problems will be able to do it. I mean

5:17

imagine the kind of world that we

5:19

could live in, where anyone

5:22

who has merit, anyone who can think

5:25

clearly, anyone who can generate a lot of

5:28

ideas can generate wealth. I mean that's

5:31

an amazing world to live in right

5:33

anywhere in the world. So with

5:34

Replit, the company that I started in

5:36

2016, the idea was like, okay, coding

5:40

is difficult, how do we solve coding? And

5:43

um we built every part of the process

5:45

the hosting, the code editor. The only

5:48

missing thing was, you know, the AI

5:50

agent and so over the past two years

5:53

we've been working on this AI agent that

5:57

you can just, you know, similar to Chat

5:58

GPT, this revolution with GenAI,

6:01

and you can just uh speak your ideas

6:03

into existence. I mean this starts you

6:05

know sounding religious like this is

6:07

like the you know the gods you know that

6:09

the myths that that um that humans have

6:12

created they used to imagine a world

6:14

where you can you can be everywhere and

6:17

anywhere at once that's sort of the

6:18

internet and you can also speak your

6:20

ideas into into existence and um you

6:23

know, it's still early. I think uh

6:25

Replit Agent is a fantastic tool and

6:27

I think this technology is going to get

6:29

so much more powerful

6:31

Specifically, what is an AI agent? I've

6:34

got this um graph actually here which I

6:36

don't need to pass to any of you for you

6:38

to be able to see the growth of AI

6:41

agents. But this graph is Google search

6:43

trend data. This also resembles our

6:45

revenue too.

6:47

Oh, okay. Right. The the water has

6:49

arrived. Hello. Thank you. You can come

6:51

on in. Can I have a go, please? Yes.

6:54

It's

6:55

3951. Great. Thank you so much. Thank

6:58

you. Thank you. I mean this is this is

7:00

like a supernatural kind of power. You

7:03

conjured water. I conjured water from my

7:05

mind. Yeah. And it's shown up here with

7:08

us and it clearly thinks we need a lot.

7:11

But but just to define the term AI agent

7:14

for someone that's never heard the term

7:15

before. Yeah. Yeah. So uh I assume most

7:18

of the audience now are familiar with

7:20

chat, right? You can go in and you can

7:22

talk to an AI. It can search the web for

7:25

you. It has a limited amount of tools.

7:27

Uh maybe it can call a calculator to do

7:30

some addition subtraction for you, but

7:32

that's about it. It's a request response

7:35

style. Agents are when you give it a

7:38

request and they can work indefinitely

7:42

until they achieve a goal or they run

7:45

into an error and they need your help.

7:47

It's an AI bot that has access to tools.

7:50

Those tools are access to a web

7:53

browser like operator, access to a

7:55

programming environment, say like Replit,

7:58

access to um you know credit cards. The

8:02

more tools you give the agent, the more

8:03

powerful it is. Of course there's all

8:05

these considerations around security and

8:07

safety and all of that stuff. But uh the

8:10

the most important thing is the AI agent

8:12

will determine when it finished

8:15

executing. Uh today AI agents can run

8:18

for anywhere between you know 30 seconds

8:22

to 30 minutes. Uh there's a recent paper

8:25

that came out that's showing that every

8:28

7 months the number of minutes that the

8:31

agent can run for is doubling. So we're

8:34

at like 30 minutes now. In seven months

8:36

we're going to be at an hour then you

8:38

know 2 hours. Pretty soon we're going to

8:40

be at days. And at that point, you know,

8:43

the AI agent is doing labor, doing kind of

8:45

humanlike labor and actually uh OpenAI's

8:49

new model o3 beat the expectation. So it

8:52

it sort of doubled coherence over long

8:55

horizon tasks in just three or four

8:58

months. So we're on this massive trend. I

9:01

mean, look at this exponential graph,

9:04

you know, that shows you the massive

9:06

trend we're on.
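[To make the agent idea concrete, here is a minimal sketch in Python of the loop Amjad describes: a model with tools that runs until it decides it is done. The llm() call, the stub tools, and the horizon numbers below are illustrative assumptions, not Replit's or OpenAI's actual implementation.]

```python
# Minimal sketch of the agent loop described above, assuming a hypothetical
# llm() call and stub tools; not Replit's or OpenAI's actual implementation.

def browse(url: str) -> str:
    # Stub web-browser tool (an agent like Operator drives a real browser).
    return f"<contents of {url}>"

def run_code(source: str) -> str:
    # Stub programming-environment tool (a la Replit).
    return f"<result of running {source!r}>"

TOOLS = {"browse": browse, "run_code": run_code}

def llm(prompt: str) -> dict:
    # Hypothetical model call: returns either a tool request,
    # e.g. {"tool": "browse", "args": "https://example.com"},
    # or a final answer, e.g. {"done": True, "answer": "..."}.
    return {"done": True, "answer": "goal achieved"}  # placeholder

def run_agent(goal: str, max_steps: int = 100) -> str:
    # Chat is one request/response; an agent loops until it reaches the
    # goal, hits an error it can't handle, or exhausts its budget.
    # The agent itself decides when it has finished executing.
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        step = llm("\n".join(history))
        if step.get("done"):
            return step["answer"]
        tool = TOOLS.get(step.get("tool", ""))
        if tool is None:
            return "error: unknown tool, needs human help"
        history.append(f"{step['tool']} -> {tool(step['args'])}")
    return "error: step budget exhausted"

# The paper mentioned above: the task horizon agents can handle doubles
# roughly every 7 months. Starting from ~30 minutes today (illustrative):
def horizon_minutes(months_from_now: float) -> float:
    return 30.0 * 2 ** (months_from_now / 7)

print(run_agent("order water from CVS"))  # -> "goal achieved"
print(horizon_minutes(7))    # ~60 minutes in 7 months
print(horizon_minutes(28))   # ~480 minutes (a working day) in ~2.5 years
```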

9:09

Brett, give us a little bit of of your

9:11

background, but also I saw you writing

9:13

some notes there. There was a couple of

9:15

words used there that I thought were

9:16

quite interesting, especially

9:18

considering what I know about you. The

9:19

word God was used a few times.

9:22

Well, uh, let me just say I'm an

9:24

evolutionary biologist, and probably for

9:27

the purposes of this conversation, it

9:30

would be best to think of me as a

9:32

complex systems theorist. One of the

9:34

things that I believe is true about AI

9:36

is that this is the first time that we

9:40

have built machines that have crossed

9:42

the threshold from the highly

9:44

complicated into the truly complex.

9:48

And I will say I'm listening to this

9:51

conversation with a mixture of

9:55

profound hope and dread

9:59

because it seems to me that it is obvious

10:03

that the potential good that can come

10:06

from this technology is effectively

10:09

infinite. But I would say that the harm

10:11

is probably 10 times. It's a bigger

10:13

infinity. And the question of how we are

10:16

going to get to a place where we can

10:19

leverage the obvious power that is here

10:22

to do good and dodge the worst harms. I

10:25

have no idea. I I know we're not

10:27

prepared. So I hear you talking about

10:31

agents and I think um that's marvelous.

10:34

We can all use such a thing right away

10:37

and the more powerful it is, the better.

10:38

The idea of something that can solve

10:40

problems on your behalf while you're

10:43

doing something else is marvelous. But

10:45

of course, that is the precondition for

10:50

absolute devastation to arise out of a

10:53

miscommunication, right? To have

10:55

something acting autonomously to

10:58

accomplish a goal, you damn well better

11:01

understand what the goal really is and

11:02

how to pull back the reins if

11:06

it starts accomplishing something that

11:08

wasn't the goal. The potential for abuse

11:11

is also utterly profound. You know, you

11:14

can imagine, just pick your dark

11:18

mirror

11:19

uh fantasy dystopia where something has

11:24

been told to hunt you down until you're

11:25

dead and it sees that as a, you know, a

11:29

technical challenge.

11:31

So, I don't know quite how

11:34

to balance a discussion about all of the

11:38

things that can clearly come from this

11:39

that are utterly transcendent. I mean, I

11:42

do think it is not inappropriate to be

11:45

invoking God or biblical metaphors here.

11:49

You know, you're uh producing water

11:52

seemingly from thin air. I believe that

11:54

does have an exact biblical parallel.

11:56

Uh so, uh any case, the the power is

12:00

here, but so so too is the need for

12:04

cautionary tales, which we don't have.

12:06

That's the problem is that there's no

12:07

body of myth that will warn us properly

12:10

of this tool because we've just crossed

12:11

a threshold that is similar in its

12:16

capacity to alter the world as the

12:19

invention of writing. I really think

12:21

that's that's where we are. We're

12:23

talking about something that is going to

12:24

fundamentally alter what humans are with

12:28

no plan. You know, writing alters the

12:31

world slowly because the number of

12:33

people who can do it is tiny at first

12:35

and remained so for thousands of years.

12:38

This is changing things weekly, and

12:41

that's an awful lot of power to just

12:43

simply have dumped on a system that

12:45

wasn't well regulated to begin with.

12:48

Dan? Yeah. So, I'm an

12:50

entrepreneur. Um, I've been building

12:51

businesses for the last 20 plus years.

12:53

I'm completely well positioned between

12:56

the two of you here: the excitement of

12:58

the opportunity and the terror uh of

13:01

what could go on. There's this image

13:02

that I saw of New York City in 1900 and

13:08

every single vehicle on the street is a

13:10

horse and cart and then 13 years later

13:12

the same photo from the same vantage

13:15

point and every single vehicle on the

13:17

street is a car. And in 13 years all the

13:20

horses had been removed and cars had

13:22

been put in place. And um if you had

13:25

interviewed the horses in 1900 and

13:28

said uh how do you feel about your level

13:30

of confidence in the world? The

13:33

horses would have said well we've been

13:34

part of humanity, you know, hand

13:36

and hoof, for many, many

13:39

years, for thousands of years.

13:41

There's one horse for every three

13:42

humans. Like how bad could it be? You

13:45

know we'll always have a special place.

13:46

We'll always be part of society. Um, and

13:51

little did the horses realize that that

13:53

was not the case. That the horses were

13:55

going to be put out of business

13:57

very very rapidly. And to reason through

14:01

analogy, you know, there's a lot of us

14:02

who are now sitting there going, "Hey,

14:04

wait a second. Does this make me a horse

14:05

in 1900?" I think a lot of people don't

14:08

realize how massive an impact these

14:10

kinds of technologies are going to

14:11

have.

14:12

You know, one minute we're ordering a

14:14

water and that's cute and the next

14:16

minute it can run for days and in your

14:19

words uh it doesn't stop until it

14:21

achieves its goal and it comes up with

14:23

as many different ways as it could

14:24

possibly come up with to achieve its

14:26

goal and in your words it better know

14:29

what that goal is. I'm thinking a lot as

14:32

Daniel's speaking about the vast

14:35

application of AI agents and where are

14:37

the bounds because if if this thing is

14:40

going to get incrementally smarter well

14:43

incrementally might be an understatement

14:44

it's going to get incredibly smart

14:46

incredibly quick and we're seeing this

14:48

AI race where all of these large

14:50

language models are competing for

14:51

intelligence with one another and if

14:53

it's able to traverse the internet and

14:56

click things and order things and write

14:58

things and create things and all of our

15:00

lives run off the internet today. What

15:02

can't it do? It's going to be smarter

15:04

than me.

15:06

No doubt it already is. And it's going

15:08

to be able to take actions across the

15:10

internet, which is pretty much where

15:11

most of my professional life operates.

15:13

It's like how I build my businesses.

15:15

Even this podcast is an internet product

15:17

at the end of the day because you can

15:19

create... We've done experiments now, and I

15:21

can show the graphs on my phone to make

15:22

AI podcasts, and we've just

15:25

managed to get it to have the same

15:26

retention as The Diary of a CEO. Now, with the

15:29

image generation model... Retention as in

15:30

viewer retention the percentage of

15:32

people that get to one hour, wow, is the

15:34

same now. So we can make the video, we

15:37

can publish it, we can script it, you

15:39

can synthesize my voice sounds like me.

15:42

So what what is it going to be able to

15:44

do? Mhm. And can you give me the variety

15:47

of use cases that the average person

15:49

might not have intuitively conceived?

15:51

Yeah. So I tend to be an optimist

15:54

and part of the reason is because I

15:56

try to understand the limits of

15:58

the technology. What can it do is

16:00

anything from any sort of set of

16:03

human data that we can train it on. What

16:06

can it not do is anything that uh humans

16:10

don't know what to do because we don't

16:12

have the training data. Of course, it's

16:14

super smart because it integrates

16:17

massive amount of knowledge that you

16:18

wouldn't be able to read, right? It's also

16:21

much faster. It can run through massive

16:24

amount of computation that you know your

16:26

brain you can't even comprehend because

16:28

all of that. They're smart. They can take

16:31

actions but we know the limits of what

16:34

they can do because we trained them.

16:37

They're able to simulate what a human

16:39

can do. So the reason you were able to

16:41

order the water there is because it was

16:44

trained on data that includes clicking

16:48

on DoorDash and ordering water. I

16:51

applaud your optimism and I like the way

16:52

you think about these puzzles, but I

16:54

think I see you making a mistake that we

16:57

are about to discover is very

17:00

commonplace. So we have several different

17:02

categories of systems. We have simple

17:05

systems, we have complicated systems, we

17:09

have complex systems and then we have

17:11

complex adaptive systems.

17:14

And to most of us, a highly complicated

17:18

system appears like a complex system. We

17:20

don't understand the distinction.

17:22

Technologists often master highly

17:25

complicated systems and they know, you

17:28

know, for example, a computer is a

17:31

perfectly predictable system inside.

17:32

It's deterministic. Mhm. But to

17:35

most of us, it functions, it's it is

17:38

mysterious enough that it feels like a

17:40

complex

17:41

system. And if you're in the position of

17:44

having mastered highly complicated

17:46

systems and you look at complex systems

17:48

and you think it's a natural extension,

17:50

you fail to

17:51

anticipate just how unpredictable they

17:54

are. So even if it is true that today

17:57

there are limits to what these machines

18:00

can do based on their training data, I

18:04

think the problem

18:05

is... To see what's going to happen, you

18:09

really want to start thinking of this as

18:10

the evolution of a new species that will

18:13

continue to evolve. It will partially be

18:16

shaped by what we ask it to do, the

18:18

direction we lead it, and it will

18:19

partially be shaped by things we don't

18:21

understand. So, how does this computer

18:25

that we have work? Well, one of the

18:27

things that it does is we plug them into

18:30

each other using language. It's almost

18:33

as if you've plugged an Ethernet cable

18:35

in between human minds. And that means

18:38

that the cognitive potential exceeds the

18:41

sum of the individual minds in question.

18:45

Your AIs are going to do that. And that

18:47

means that our ability to say what they

18:48

are capable of does not come down to

18:51

well we didn't train it on that data. As

18:54

they begin to interact that feedback is

18:56

going to take them to capabilities we

18:59

don't anticipate and may not even

19:00

recognize once they become present.

19:02

That's one of my fears. This is an

19:04

evolving creature and it's not even an

19:08

animal. If it were an animal, you could

19:09

say something about what the limits of

19:11

that capability are. But this is a new

19:14

type of biological creature and it will

19:18

become capable of things that we don't

19:20

even have names for. Even if it didn't

19:23

do that, even if it just stayed within

19:25

the boundaries that you're talking

19:26

about, you mentioned about it having

19:28

median level intelligence. Well, that by

19:30

definition means 50% of the people on

19:32

the planet are less intelligent than uh

19:35

AI. uh you know to a degree it's almost

19:38

as if we've just invented a new

19:40

continent of remote workers. Um there's

19:43

billions of them. They've all got a

19:45

masters or a PhD. They all speak all the

19:47

languages. Anything that you could call

19:49

someone or ask someone over the internet

19:51

to do, they're there 24/7 and they're 25

19:54

cents an hour. So, like, if

19:59

that really happened like if we really

20:01

did just discover that there were a

20:03

billion extra people on the planet who

20:04

all had PhDs and were happy to work

20:06

almost for free that would have a

20:08

massive disruptive impact on society.

20:10

Like society would have to rethink how

20:12

everyone lives and works and gets

20:15

meaning. Um, so, and that's if

20:18

it just stays at a median level of

20:19

intelligence. Like it's it's pretty

20:21

profound. I still think it's it's a

20:23

tool. This is power that is there to be

20:26

harnessed by entrepreneurs. You know, I

20:29

think that the world is gonna get

20:31

disrupted, right? Um, and the, you

20:34

know, post-war world

20:36

that we created where you go through

20:39

life, you go through 12 years of

20:40

education, you get to college and you

20:43

just check the boxes, you get a job. We

20:46

can already see the fractures of that

20:48

that it is, you know, this American

20:50

dream is perhaps no longer there. And so

20:52

I think the world has already changed.

20:55

So, but, like, what are the

20:56

opportunities? Obviously there are

20:58

downsides. The opportunities is for the

21:00

first time, access to opportunity is

21:03

equal. And I I do think there's going to

21:05

be more inequality. And the reason for

21:08

this inequality is because actually

21:11

Steve Jobs uh you know made this

21:13

analogy. It's like the the best taxi

21:15

driver in New York is like 20% better

21:18

than the, you know, average

21:21

taxi driver. The best programmer can

21:24

be 10x better. You know, we say

21:26

the 10x engineer. Now, the variance will

21:30

be in the thousandx, right? Like the

21:33

best the best entrepreneur that can

21:36

leverage those agents

21:39

could be a thousand times better than

21:41

someone who doesn't have the grit,

21:44

doesn't have the skill, doesn't have the

21:46

ambition. Right? So that will

21:48

create a world. Yes, there's massive

21:50

access to opportunity, but there are

21:52

people who will seize it, and then

21:55

there'll be

21:57

people who don't. I imagine it almost

21:59

like a marathon race, and AI has two

22:02

superpowers. One superpower is to

22:04

distract people, such as the TikTok

22:06

algorithm. That's right. And the other

22:08

superpower is to make you hyper

22:10

creative. So you become a hyper consumer

22:11

or a hyper creator. And in this marathon

22:14

race, the vast majority of people have

22:16

got their shoes tied together cuz AI is

22:18

distracting them. Some people are

22:20

running a traditional race. Some people

22:23

have got a bicycle and some people have

22:25

got a Formula 1 vehicle. And it's going

22:28

to be very confronting when the results

22:31

go on the scoreboard and you see, oh,

22:33

wait a second. There's a few people who

22:35

finished this marathon in about 30

22:37

minutes. And there's a lot of us who

22:38

finished in like 18 hours because we had

22:42

our shoes tied together. And I can't

22:44

understand if we've got equal

22:46

opportunity why there's so much

22:47

disparity between how fast people

22:50

finish. And, you know, I'm using an analogy,

22:51

but this idea that, you know, someone

22:54

like a lot of people are going to start

22:55

earning a million dollars a month and a

22:57

lot of people are going to say, "Hey, I

22:58

can't even get a job for $15 an hour."

23:00

there's going to be this kind of

23:02

interesting wedge. Well, but I I hear in

23:05

what both of you are saying

23:07

a kind of assumption that this will all

23:12

be done on the up and up. And I do want

23:17

to just say: I am not a doomer. I agree that

23:21

the doomers are likely incorrect that

23:24

their fears are misplaced. But I do

23:27

think we have a question of a related

23:29

rates problem.

23:30

You know I said the potential for good

23:32

here is infinite and the potential for

23:35

bad is 10 times.

23:37

Right? What I mean is there are lots of

23:41

ways in which this obviously empowers

23:43

people to do things that they were going

23:45

to be otherwise stuck in the mundane

23:48

process of learning to code and then

23:51

figuring out how to make the code work

23:53

and bring it to market and all of that.

23:55

And this solves a lot of those problems

23:56

and that's obviously a good thing.

23:58

Really, what we should want is the

24:00

wealth creation objective, as quickly as we

24:03

can get there. But the problem is you

24:06

know, as much as that hyper creative

24:10

individual is empowered to make wealth

24:13

the person who is interested in stealing

24:16

may be even more empowered. And I'm

24:18

concerned about that at at a pretty high

24:21

level. The abuse cases may outnumber

24:24

the use cases and we don't have a plan

24:27

for what to do about that. Um, can I

24:31

give you a quick uh like

24:32

introduction here like the optimistic

24:34

view? OpenAI invented GPT; the first

24:37

version of GPT came out in 2018, 2019 was

24:41

GPT-2. And so OpenAI, you know, now they get

24:44

a lot of criticism, and a lawsuit from Elon

24:47

Musk that they're no longer open source

24:50

right they used to be. The reason is in

24:52

GPT-2 they said we are no longer

24:56

gonna uh open source this technology

24:58

because it's going to create um

25:00

opportunities for abuse such as you know

25:03

influencing elections um you know

25:06

stealing you know grandma's credit card

25:09

and so on and so forth. Wouldn't you say

25:11

Brett that it is kind of surprising how

25:14

little abuse we've seen so far?

25:17

I don't know how much abuse we've seen

25:19

so far. I don't know how any of us do.

25:21

And I also even the example that you

25:25

suggest, where ChatGPT is no longer open

25:29

source to prevent abuse. I'm taking

25:31

their word for it that that's the

25:33

motivation. Where as a systems theorist,

25:36

I would say, well, if you had a

25:38

technology that was excellent at

25:42

enhancing your capacity to wield power,

25:46

then open sourcing it is a failure to

25:49

capitalize on that, and that the most

25:52

remunerative use is to keep it private

25:55

and then either sell the ability to

25:58

manipulate elections to people who want

26:00

to do so or sell the ability to have it

26:02

kept off the table for people who don't.

26:05

And I would expect that that's probably

26:07

what's going on. If you have

26:09

a technology as transformative as this,

26:12

giving it away for free is

26:13

counterintuitive, which leaves those of

26:16

us in the public more or less at the

26:18

mercy of the people who have it.

26:21

So I I don't see the reason for comfort

26:25

there. We are at the dawn of this

26:29

radical transformation of humans that by

26:33

its very nature as a truly complex and

26:38

emergent innovation. Nobody on earth can

26:40

predict what's going to happen.

26:43

We're on the event horizon of something.

26:45

And the problem is you know we can talk

26:48

about the obvious disruptions, the job

26:49

disruption and that's going to be

26:51

massive. And does that lead some group

26:54

of elites to decide, oh well, suddenly

26:56

we have a lot of useless eaters and what

26:58

are we going to do about that? Because

27:00

that conversation tends to lead

27:01

somewhere very dark very quickly. Um,

27:04

but I think that's just the beginning of

27:06

the various ways in which this could

27:08

go wrong without the doomer scenarios

27:10

coming into play. This is an

27:12

uncontrolled experiment in which all of

27:15

humanity is downstream. Yeah. So I was

27:18

trying to make the point that OpenAI has

27:21

been sort of wrong about

27:25

how big of a potential for harm it

27:27

is. Like, you know, I think we would have

27:30

heard about it in the news, the

27:33

sort of how much harm it's done. And

27:35

maybe you know some of it is working in

27:36

the shadows but like the few incidents

27:39

that we've heard about where you know

27:41

the cause was LLMs, large language models,

27:43

the technology that's powering chat, have

27:46

made huge headlines. Like, the New York

27:49

Times talked about this kid that was,

27:52

you know, perhaps goaded by some kind of

27:54

chat software that, you know, helps

27:56

teenagers to be less lonely, into

27:58

suicide, which is tragic. And

28:00

obviously, these are the kind of safety

28:02

and abuse issues that

28:04

we want to worry about. But these are

28:06

kind of isolated incidents, and

28:09

we do have open-source large language

28:12

models. Obviously, the thing that

28:13

everyone talks about is DeepSeek.

28:15

DeepSeek is coming from China. So

28:18

what is DeepSeek's incentive? You know

28:21

perhaps the incentive is to destroy the

28:24

AI industry in the US. Uh you know when

28:26

they released DeepSeek, you know, the

28:28

market tanked: the market for Nvidia, the

28:30

market for AI and all of that. But there

28:32

is an incentive to open source. Meta is

28:34

open sourcing Llama. Llama is another AI

28:37

similar to Chat GPT. The reason they're

28:39

open sourcing Llama and Zuckerberg just

28:41

says that out loud is basically they

28:44

don't want to be beholden to OpenAI.

28:47

They don't sell AI as a service. They

28:49

use it to build products. And there's

28:52

this concept in business called

28:54

commoditize your complement, because you

28:56

need AI as technology to run your

28:58

service. The best strategy is to

29:01

open source it. So these market forces

29:04

are going to create conditions that I

29:07

think are actually beneficial. So I'll

29:10

give you a few few examples. One is

29:13

first of all the AI companies are

29:15

motivated to create AI that is safe so

29:17

that they can sell it. Second there are

29:20

security companies investing in AIs that

29:23

allow them to protect from the sort of

29:25

malicious acting of AI. And so you

29:28

have the free market and we've always

29:30

had that you know but generally as

29:32

humanity we've been able to leverage uh

29:36

the same technology to protect against

29:38

the abuse. So I I don't really

29:41

understand this and maybe this is

29:43

actually this is the exact discussion

29:45

that you would expect between somebody

29:47

at the frontier of the highly

29:49

complicated staring at a complex system

29:51

and a biologist who comes from the land

29:53

of the complex and is looking back at

29:56

highly complicated

29:58

systems. In game theory we have

30:00

something called a collective

30:01

action problem. And in the market that

30:05

you're

30:06

describing, an individual company has no

30:10

capacity to hold back the abuses of AI.

30:15

The most you can do is not participate

30:17

in them. You can't stop other people

30:19

from programming LLMs in some dangerous

30:23

way. And you can limit your own ability

30:26

to earn based on your own limitations of

30:28

what you're willing to do. And then

30:30

effectively what happens is the

30:31

technology gets invented anyway. It just

30:34

that the dollars end up in somebody

30:35

else's pocket. So the incentive is not

30:38

to restrain yourself so that you can at

30:40

least compete and participate in the

30:42

market that's going to be opened. And

30:46

so the number of ways in which you can

30:49

abuse this technology. Let's take a

30:51

couple.

30:53

What is to stop somebody from training

30:56

LLMs

30:58

on an

31:00

individual's creative output and then

31:03

creating an LLM that can out compete

31:07

that individual can effectively not only

31:09

produce what they would naturally

31:11

produce over the course of a lifetime

31:12

but can extrapolate from it and can even

31:15

hybridize it with the insights of other

31:17

people so that effectively those who

31:20

have the LLM can train it on the

31:22

creativity of others, not cut them

31:24

in on the use of that insight. You can

31:27

effectively end up putting yourself out

31:29

of business by putting your creative

31:31

ideas in the world where they get sucked

31:32

up as training data for future LLMs.

31:36

That is unscrupulous, but it's

31:38

effectively guaranteed. In fact, it's

31:40

already happening. So, that's a problem.

31:43

And likewise, what what would stop

31:45

somebody from interacting with an

31:48

individual and training an LLM to become

31:51

like a personalized con artist?

31:53

Something that would play exactly to

31:55

your blind spot. That does happen. That

31:56

that is starting to happen. Um people

31:58

get phone calls and it sounds like their

32:00

daughter and I've I've lost my phone and

32:02

I'm borrowing a friend's phone and all

32:04

of that sort of stuff. What's

32:06

interesting is that I think you make a

32:08

a really good point.

32:10

I worry about the impact on society. And

32:12

yet when I look at every single

32:14

individual who uses AI regularly, it

32:18

almost has nothing but profoundly

32:20

positive impact on their life. I look at

32:23

people like um I was just spending some

32:25

time with my parents-in-law um who are

32:28

in their 70s and early 80s and they use

32:31

AI regularly for all sorts of things

32:33

that they find incredibly valuable and

32:36

that improves the quality of their life.

32:38

I personally did an M&A, a mergers and

32:40

acquisitions deal where I bought a

32:42

company last year and the AI was so

32:46

powerful at helping that process. The

32:48

conversations were transcribed and they

32:50

were turned into letters of intent and

32:52

then press releases and uh legal

32:55

documents and we probably shaved

32:58

$100,000 worth of costs, and we

33:01

sped up the whole process and it was

33:04

pretty magical to see how it could

33:06

happen. With that said, you know,

33:09

there's all of these, like, well,

33:10

$100,000 worth of lawyers didn't get

33:12

paid, right? So, well, what I want to

33:14

know is, yeah, there will be

33:17

people upset about that. But if we look back

33:20

at the invention of the cell phone or

33:24

the invention of the social media

33:26

platforms, there would be every reason

33:28

to have exactly the same

33:30

perspective, right? I remember the

33:31

beginning of Facebook and I remember the

33:33

idea that suddenly the process that used

33:36

to afflict people where you would just

33:38

lose touch with most of the people who

33:40

had been important to you, that was not

33:43

something that needed to happen anymore.

33:44

You could just retain them permanently

33:46

as a part of a diffuse

33:49

uh social grouping that just simply grew

33:52

and value was added. There's no end to

33:54

how much good that did, but what it did

33:58

to us was profound and not evident in

34:01

the first chapter. Say the same thing

34:04

about the cell phones and the dopamine

34:05

traps and the way this has disconnected

34:08

us from each other, the way it has

34:10

disconnected us from nature, the way it

34:13

has altered the very patterns with which

34:16

we think. It has altered every

34:19

classroom. Mhm. So, and those things I

34:22

think are going to turn out to have been

34:24

sort of minor foreshadowings of the

34:28

disruption that AI will produce. So, I

34:31

agree with you today. The amount you can

34:33

do with AI, there's a tremendous amount

34:34

of good. There's a little bit of harm.

34:36

Maybe that's something we need to worry

34:37

about. But as this develops, as we get

34:41

to, you know, to peer over the edge of

34:44

this cliff that we're headed to, I think

34:45

we're going to discover that we can't

34:48

yet detect the nature of the alteration

34:50

that's coming. So, I just wanted to add

34:52

some context to that cuz, Amjad, I saw

34:53

the interview you did in a newsletter in

34:55

2023 where you said, "I wouldn't prepare

34:57

for AGI in the same way that I wouldn't

34:59

prepare for the end of days." It's

35:00

effectively the end of days if the

35:02

vision of AGI that some of these

35:04

companies have comes to bear because

35:05

it's called the singularity moment

35:07

because you can't really predict what

35:08

happens after that. And so like how

35:11

would you even prepare for that and you

35:13

want to prepare for the more likely

35:15

world and that world that you can

35:16

actually predict is a world where yes

35:19

there's like a massive improvements of

35:21

technology and there's like insane

35:23

compounding effects of technology and

35:24

it's pretty hard to keep up. From that

35:26

it appeared that in 2023 you were saying

35:29

a similar thing to Brett in terms of we

35:31

can't see around the corner here because

35:32

it is a singularity.

35:34

Sorry, you also used AGI, artificial

35:37

general intelligence. It'd be

35:38

interesting to know what your definition

35:40

of AGI is when you say that. Yeah. So what

35:42

what I was saying there is even if I'm

35:44

wrong, that you can actually create an

35:49

unbounded seemingly conscious artificial

35:53

intelligence that can entirely replace

35:57

humans and can act autonomously in a way

36:00

that even humans can't act and can

36:03

coordinate across different AIs,

36:05

different data centers to take over the

36:07

world. Even if if that's so so the

36:09

definition of AGI is artificial general

36:12

intelligence, meaning that AI can acquire

36:15

new skills efficiently, in the same

36:18

way that humans can acquire skills.

36:20

Right now AIs don't acquire skills

36:22

efficiently; you know, they require

36:24

a massive amount of energy and compute,

36:26

entire data centers of compute, to acquire

36:27

these skills. And I think there's a

36:31

again a limit on how general

36:34

intelligence can get I think for most of

36:37

the time it's lagging in terms of what

36:41

humans are capable of doing. The

36:44

singularity is based on this concept of

36:47

an intelligence explosion. So once

36:49

Once you create an AGI, once you create

36:52

an artificial general

36:53

intelligence, that intelligence will be

36:56

able to modify its own source code and

36:59

create the next version that is much

37:01

more intelligent. And the next version

37:03

creates the next version and the next,

37:06

you know, for infinity, right? Within a

37:09

week, within a week, perhaps within

37:11

milliseconds at some point. Yeah. Right.

37:13

Uh because it might invent new computing

37:15

substrate and all of that.

37:18

perhaps they'll use quantum computing

37:19

and so then you have this

37:22

intelligence

37:24

explosion in a way that it is impossible

37:27

to predict how the world is going to be

37:29

and what I'm saying is this is sort of

37:31

like an end-of-times story. Like, how would

37:35

you even prepare for that? So if

37:38

that's coming, like, why would I spend my

37:41

time preparing for it? I think it's unlikely

37:43

to happen. You can't see around the corner,

37:45

yeah. But I'd rather prepare; that's

37:48

what I was saying there. I'd rather

37:49

prepare for the more likely world in

37:53

which we have access to tremendous

37:55

power, but the world's not ending and

37:58

humans are still uh important.
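[A toy illustration of the recursive self-improvement recurrence Amjad describes above: each generation designs the next, and if each step is faster than the last, total time converges. Every constant below is invented purely for illustration.]

```python
# Toy model of the "intelligence explosion" story above: generation k
# builds generation k+1 `speedup` times faster than it was itself built.
# All numbers are invented for illustration only.

def time_to_generation(n: int, first_step_years: float = 1.0,
                       speedup: float = 2.0) -> float:
    # Total elapsed time until generation n is a geometric series that
    # converges (to 2 years with these numbers), which is the intuition
    # behind "within a week, perhaps within milliseconds at some point".
    return sum(first_step_years / speedup ** k for k in range(n))

for n in (1, 5, 20, 100):
    print(n, round(time_to_generation(n), 4))
# 1 -> 1.0, 5 -> 1.9375, 20 -> ~2.0, 100 -> ~2.0: later generations
# arrive ever faster, so capability piles up in finite time.
```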

38:02

I don't I don't know why you say more

38:05

likely. I mean, I think the structure

38:07

of your argument is sound. You would

38:10

prepare for the world that might happen

38:12

for which you can prepare. There's

38:14

literally no point in trying to prepare

38:16

for a world you can't predict at all.

38:18

The only thing you can do is just sort

38:20

of upgrade your own skills and pay

38:23

attention. But if I have one message for

38:25

the technologists, it's that your

38:28

confidence about what this can and

38:32

cannot do is misplaced because you have

38:35

without noticing stepped into the realm

38:38

of the truly complex. In the truly

38:42

complex, your confidence should drop to

38:45

near zero. that you know what's going

38:47

on. Are these things conscious? I don't

38:50

know. But will they be? It's highly likely

38:52

they will become conscious, and that we

38:53

will not have a test to tell us whether

38:56

that has happened. Elon Musk predicts

38:58

that by

39:00

2029 we will have AGI that

39:04

surpasses the combined intelligence of

39:06

all

39:07

humans. And Sam Altman actually wrote a

39:10

blog three months ago that I read where

39:11

he said we are confident now (Sam Altman

39:14

being the founder of OpenAI, which

39:15

created ChatGPT): we are confident now that

39:17

we know how to build AGI as we have

39:20

traditionally understood it. When I put

39:22

these things together, I go back to the

39:24

central question of what role do humans

39:26

have in this, in the sort of

39:29

professional output in GDP creation. If

39:32

it's smarter than all humans combined,

39:36

if Elon Musk is correct there, and it's

39:38

able to take actions across the internet

39:41

and continue to learn. This is like a

39:43

central question that I'm hoping I can

39:44

answer today, which is like where do we

39:46

go? Yeah. I I mean in my vision of the

39:49

world we are in the creative seat. We're

39:52

sitting there where um we we are

39:56

controlling swarms of intelligent beings

39:59

to do our jobs. You know, the way you run

40:01

your business for example you're sitting

40:03

at a computer you have an hour to work.

40:05

Yeah. and you're going to launch like a

40:07

thousand SDRs, you know, sales

40:11

representatives, to go grab as

40:13

many leads as possible and you're

40:15

generating a new update on Replit for

40:18

your website here and then uh on this

40:20

side you actually

40:23

have an AI that's crunching data about

40:26

your existing business to

40:28

figure out how to improve it and these

40:30

AIs are kind of somehow all coordinating

40:33

together and I am trying to privilege

40:36

the human like this is my my mission is

40:39

to uh build tools for people. I'm not

40:42

building tools for agents and agents are

40:45

a tool and so ultimately not only do I

40:48

think that humans have a privileged

40:51

position in the in the world and in the

40:53

universe. We don't know where

40:56

consciousness is coming from. We don't

40:58

really have the science to explain it.

41:01

Um I think humans are special. That's

41:04

one side is is my belief that humans are

41:07

are special in the world and another

41:09

side which I understand that the

41:11

technology today and I think for the

41:14

foreseeable future is going to be a

41:17

function of its training data. So there

41:19

was this whole idea like what if chat

41:21

GPT generates uh pathogens. Well have

41:24

you trained it on pathogens? They were

41:26

doing that kind of stuff in Wuhan. know

41:28

I mean I mean a lot of the biotech

41:30

companies are essentially using

41:32

artificial intelligence like I can think

41:34

of AbCellera, I think it's AbCellera, in

41:37

Canada. Their whole business is using AI

41:39

to create new vaccines using artificial

41:42

intelligence and bigger data sets than

41:44

we've ever had before. And I know

41:46

cuz I was very close to one of the

41:47

founders, of people involved in AbCellera. So

41:50

that work is going on anyway. And if

41:51

we think about Wuhan, it's quite

41:53

probably well known now that it came out

41:55

of a lab and people working in a lab and

41:57

in that scenario that had a huge impact

41:59

and shut down the world. The

42:01

central question I'd love to answer

42:02

before I throw it back open to the the

42:04

room is what jobs because I know that

42:07

you have this perspective. What jobs are

42:10

going to be made redundant in a world

42:11

where I am sat here as a CEO with a

42:13

thousand AI agents, right? I was

42:15

thinking of all the names of my of the

42:17

people in my company who are currently

42:18

doing those jobs. I was thinking about

42:19

my CFO when you talked about processing

42:21

business data, my graphic designers, my

42:23

video editors, etc.

42:25

So what what jobs are going to be

42:26

impacted? Yeah, all of those. Uh so I I

42:29

think and what do they do? You maybe

42:31

this is useful for for the audience. I

42:34

think if your job is as routine as it

42:37

comes, your job is gone in the next

42:40

couple years. So, meaning, if you're in

42:43

those jobs for example uh quality

42:46

assurance jobs, data entry jobs, you're

42:49

sitting in front of a computer and

42:50

you're supposed to click and type

42:53

things in a certain order. Operator and

42:55

those technologies are coming on the

42:57

market really quickly and those are

42:58

going to displace a lot of

43:01

labor. Accountants? Accountants, yes.

43:05

I mean I've just pulled a ligament in my

43:07

in my foot and they did an MRI scan and

43:09

I had to wait a couple of days for

43:10

someone to look at the MRI scan and tell

43:12

me what it meant. Yeah, I'm guessing

43:14

that that's gone. Yeah, I think I think

43:16

the healthcare ecosystem is hard to

43:18

predict because of regulation and and

43:21

again there there's so many limiting

43:22

factors on how this technology can

43:24

permeates the economy because of

43:26

regulations and and people's willingness

43:28

to take it. But, you know,

43:30

unregulated jobs uh that are

43:34

purely text in, text out. If your job,

43:36

you know, you get a message

43:38

and you produce some kind of artifact

43:40

that's like probably text or images that

43:43

that job is is at risk. So, just to give

43:46

you some stats here as well, about 50%

43:48

of Americans who have a college degree

43:50

currently use AI. The stats are

43:52

significantly lower for Americans

43:54

without a college degree. So, you can

43:55

see how a splinter might emerge there

43:57

and that crack will widen because

44:00

people like us at this table are all

44:01

messing around with it. But my mom and

44:03

dad in Plymouth in the southwest, rural

44:05

England, haven't got like they just

44:07

figured out iPhones. So like I got them

44:08

an iPhone and now they're like texting

44:09

me back. AI is a million miles away. And

44:12

if I start running off with my AGI, my

44:13

agents, that gap is going to widen.

44:15

Women are disproportionately affected by

44:18

automation, which is what you were

44:19

talking about there. with about 80% of

44:21

working women in an at risk job compared

44:23

to just over 50% of men according to the

44:25

Harvard Business Review and jobs

44:27

requiring only a high school diploma

44:29

have an automation risk of 80% while

44:32

those requiring a bachelor's degree have

44:34

an automation risk of just 20%. So we

44:37

can see again how this will cause a

44:39

sort of splinter. It's also a huge risk with

44:42

business process outsourcing, which

44:44

is essentially western countries sending

44:47

jobs to India, to the Philippines. Like, at

44:50

the moment millions of people have been

44:51

lifted out of poverty through the

44:53

ability to do those kind of business

44:55

process outsourcing jobs, and those

44:58

are all going to go. But these, they're

45:00

going to have a thousand employees. But

45:01

also, these people are

45:04

actually already transitioning to

45:05

training AIs. Mhm. You know, so

45:08

there's going to be a massive industry

45:09

around training AIs. Until they're

45:11

trained. Well, no, you have to

45:14

continuously acquire new skills and this

45:15

is what I'm talking about. I mean, this

45:17

is again if AI is a function of its

45:18

data, then you need increasingly more

45:20

data. And by the way, we ran out of

45:22

internet data. I was actually thinking

45:24

interestingly that this might not be

45:25

great for the United States or the UK,

45:26

the Western world because it is going to

45:28

be a leveler where now a kid in India

45:30

doesn't need a Silicon Valley office and

45:33

$7 million in investment to throw up a

45:35

software company, basically. Yeah, my

45:38

belief is, so I have a more

45:40

broad definition of AGI and the

45:42

singularity and for me AGI is do we have

45:45

artificial general intelligence in terms

45:47

of generally speaking can AI just do

45:50

stuff that humans used to be able to do

45:52

and we've already crossed that point we

45:54

have this general intelligence that we

45:56

can now all access and 800 million

45:58

people a week are now using ChatGPT.

46:00

It's exploded in the last 3 months,

46:03

and then to me a

46:05

singularity: when the first tractor

46:08

went out onto a farm, for me that was a

46:10

singularity moment. Uh because everyone

46:12

who worked in farming, it used to take a

46:14

100 people to plow a field and now a

46:16

tractor comes along and two guys with

46:18

the tractor can now plow the field in

46:20

just as much time and now 98 people out

46:22

of 100 are completely out of a job. We

46:25

also always underestimate a technology

46:28

if it does go on to change history. When

46:30

you look back through cars, horses,

46:32

planes, the Wright brothers just thought

46:34

of a plane as being something that the

46:35

army could use, we had no idea of the

46:38

application. So someone said to me

46:40

recently, they said, "When it does

46:41

change the world, we underestimate the

46:43

impact that it will change the world."

46:45

And I see people now with their

46:47

estimations of AI and AI agents already

46:49

incredibly optimistic. And so if history

46:52

holds here, we're undershooting the

46:54

impact it's going to have. And I think

46:56

this is the first time in my life where

46:57

the industrial revolution analogies seem

47:00

to fall a little bit short. Yeah.

47:02

Because we've never seen intelligence.

47:04

It's like... I could think of

47:06

it this way. I'm not an intelligent person

47:07

on this, but I could see that as like

47:08

the disruption of muscles, whereas this

47:11

is the disruption of intelligence. That

47:14

that's that's exactly the thing is that

47:16

what makes human beings special is our

47:18

cognitive capacity and very specifically

47:21

our ability to plug our minds into each

47:23

other. So that the sum is, or the

47:27

whole is greater than the sum of the

47:29

parts. That's what makes human cognition

47:31

special. And what we are doing is we are

47:34

creating something that can

47:37

technologically surpass it without any

47:39

of the preconditions that make that a

47:44

safe process. So yes, we've

47:45

revolutionized the world how many

47:47

different times? It's innumerable. But

47:49

you know, we've we've made farming

47:51

vastly more efficient. That's different

47:53

than taking our core competency as a

47:56

species and surpassing ourselves with

47:58

the product of our of our labor. I think

48:01

your question is a good one. Then

48:03

what do we become? We only have one thing

48:05

left. Um we have our muscles which we

48:08

got rid of in the industrial revolution

48:10

and then we have our intellect which is

48:11

this digital revolution. Now we're left

48:13

with emotions and agency. So we

48:15

essentially the agency idea, I

48:19

think we used to judge people on IQ and

48:21

now IQ is the big leveler and now going

48:23

forward for the next 10 years we're

48:25

going to look at are you a high agency

48:27

person or a low agency person. Do you

48:29

have the ability to get things done and

48:32

coordinate agents? Do you have the

48:33

ability to start businesses or give

48:36

orders to digital armies? uh you know

48:39

and and essentially these high agency

48:41

people are going to thrive in this new

48:44

world because they have this other thing

48:46

that's been bubbling under the

48:47

surface. Which is really interesting, when

48:49

you said agency is going to remain as an

48:51

important thing we're sat here talking

48:54

about AI agents and the crazy thing in a

48:56

world of AI agents that have super

48:58

intelligence is I can just tell my agent

49:00

listen I'm going on holiday please build

49:02

me a SaaS company that spots a market

49:04

opportunity throw up the website post it

49:06

on my social media channel I'll be in

49:08

Hawaii. And this new agentic world is

49:12

stealing that too cuz now it can take

49:14

action in the same way that I can browse

49:16

the internet. I can call Domino's Pizza,

49:19

speak to their agentic agent, organize

49:21

my pizza to be there before I even wake

49:22

up. And in fact, on predictability, you

49:24

know, OpenAI now learns and Sam Altman

49:27

said that they've expanded the memory

49:29

feature. So, it's knowing

49:30

more and more and more and more about

49:32

me. It'll almost be able to predict what

49:34

I want when I want it. It'll know

49:35

Steve's calendar. He's arriving at the

49:37

studio. Make sure his cadence is on the

49:39

side. Make sure his iPad has the brief

49:42

on it. Do the brief. Do the research for

49:44

me. And everything else say remember

49:46

Brett's birthday so when I arrive

49:48

there'll be something. In fact, it's

49:50

removing my need for any agency. Yes.

49:52

And you know again I don't know how to

49:54

make this point so that it occurs to

49:57

people what I'm really suggesting

49:59

but today maybe it's not conscious but

50:04

well let me put it to you this way.

50:07

If you're

50:08

conscious, you started out as a child

50:11

that wasn't. And although this may not

50:14

fully encapsulate it, you are

50:15

effectively an LLM, right? You go from

50:18

an unconscious infant to a highly

50:21

conscious adult. And the process by

50:24

which you do that has a lot to do with

50:27

being trained effectively on words and

50:30

other things in an environment in

50:31

exactly the way that we now train these

50:33

AIs. So the idea that we can take

50:36

consciousness off the table, it won't be

50:38

there till we figure out how to program

50:39

it in and we're safe because we don't

50:41

know how consciousness works. I take the

50:42

opposite lesson. We've created the exact

50:45

thing that will produce that phenomenon

50:47

and then we can have philosophers debate

50:49

whether it's real consciousness or it

50:51

just behaves exactly as if it were. And

50:52

the answer is those aren't different.

50:54

Doesn't matter. And the same thing is

50:56

true for agency. you know, especially if

50:58

you've created an environment in which

51:00

these AIs are de facto competitors, what

51:04

you're effectively doing is creating an

51:06

evolutionary environment in which they

51:08

will evolve to fill whatever niches are

51:10

there. And we didn't spell out the

51:11

niches. So, I have the sense, um,

51:15

we have invited, we have created

51:17

something that truly is going to

51:19

function like a new kind of life and

51:21

it's especially troubling because it

51:23

speaks our language. So that leads us to

51:26

believe it's more like us than it is and

51:28

it's actually potentially quite

51:30

different. So but by the way, he's the

51:33

optimist here, right? Like he's so

51:36

optimistic about LLMs and how they're

51:39

going to they're going to evolve. Yes.

51:42

It's amazing. It's amazing technology.

51:44

Like I think it raised global IQ, right?

51:47

Like 800 million

51:49

people are that much more intelligent

51:51

and emotionally intelligent as well.

51:53

Like I know people who previously were

51:56

very coarse and they kind of rubbed

51:58

people the wrong way. They

52:00

would say things in a not-so-polite way

52:03

and then suddenly they started putting

52:05

you know, what they're saying

52:08

through ChatGPT in order to kind of make it

52:11

kinder and nicer and they're more liked

52:13

now. And so not only is it uh making us

52:16

more intelligent but also it allows us

52:18

to be the best version of ourselves. And

52:21

the scenario that you're talking

52:22

about, I don't know what's

52:24

wrong with that. Like, you know, you

52:26

know, I would want less agency in

52:29

certain places. Like, I would want

52:32

something to help me not, you know, open

52:34

up a peanut butter jar at night, right?

52:38

You know, there are places in my life

52:40

where I need more control and I would

52:44

rather cede it to some kind of entity

52:46

that could help me make better choices.

52:51

I mean unfortunately even if there is

52:54

some small group of elites that are able

52:58

to go to Hawaii while something else

53:00

does the mundane details of their

53:03

business

53:04

building. We are rather soon going to be

53:08

faced with a world that has billions of

53:11

people who do not have the skills to

53:14

leverage AI. Some of them will be

53:17

necessary for a time. you're going to

53:19

need plumbers. But this is

53:23

also not a long-term solution because

53:27

not only are there not enough of those

53:29

jobs,

53:30

um, but of course we have humanoid

53:33

robots that once imbued with AI capacity

53:37

will also be able to take, you know,

53:39

they'll be able to crawl under your

53:40

house into the crawl space and fix your

53:42

plumbing.

53:43

So what typically happens when you have

53:47

a massive economic contraction that

53:49

arises from the fact that a huge number

53:52

of people are out of work is that the

53:54

elites start looking at those people and

53:56

thinking well we don't really need them

53:57

anyway. And so the idea that this AI

54:01

disruption doesn't lead us to some very

54:03

human catastrophe I think is overly

54:06

optimistic and that we need to start

54:08

preparing right now. What are the rights

54:10

of a person who has had whatever it is

54:12

that they've invested in completely

54:14

erased from the list of needs? Is that

54:17

person responsible for not having

54:18

anticipated AI coming? And is it their

54:21

problem that that they are now starving

54:23

and they're being eyed by others as you

54:25

know a useless eater? I don't think so.

54:27

How is it different than uh when the uh

54:30

uh what's it called the loom

54:31

came and the textile workers you

54:33

know, resulting in the Luddite

54:35

sort of revolution? Uh, how is it how

54:38

is it different than any time in history

54:41

when uh technology uh automated a lot

54:45

of people out of jobs? I would

54:47

say scale and speed that's how it's

54:49

different and the scale and speed is

54:51

going to result in an unprecedented

54:54

catastrophe because the rate at which

54:56

people are going to be simultaneously

54:58

sidelined not just in one industry but

54:59

across every industry is just simply uh

55:03

and it also did actually happen. There

55:05

was, uh, for the first 50

55:07

years of industrialization from like

55:10

late 1700s to early 1800s. Actually,

55:13

the Charles Dickens novels are

55:15

essentially people coming from the farms

55:17

who are displaced arriving in cities,

55:20

kids living on the streets. Uh the

55:23

British decided to pick everyone up and

55:24

send them over to the over to Australia,

55:27

which is where I came from. um and uh

55:31

you know there was

55:33

this massive issue of displacement. I

55:36

think we're going to go into a high

55:37

velocity economy where rather than this

55:39

long arc of career that lasts 45 years,

55:42

we're going to have these very fast

55:45

careers that last 10 months to 36

55:48

months. and you invent something, you

55:51

take it to market, you put together a

55:53

team of five to 10 people who work

55:55

together, you then get disrupted, you

55:58

come Can I mention a story here? Uh

56:00

there's an entrepreneur that used Replit

56:03

in a similar way. Uh his name is Billy

56:05

Howell. You can find him on YouTube on

56:07

the internet. He would go to Upwork and

56:10

he would find what people are asking for

56:13

different requests for certain apps,

56:15

technologies. Then he would take

56:18

what they're asking for, put it into

56:19

Replit, make it an application, call

56:22

them, tell them, "I already have your

56:24

application. Would you pay $10,000 for

56:26

it?" And so, so that's sort of an

56:27

arbitrage opportunity that's that's

56:29

there right now. That's not arbitrage.

56:31

That's theft. How is no what is it? How

56:33

is that theft? You have somebody who has

56:35

an idea that can be brought to market

56:38

and somebody else is cryptically

56:40

detecting it and then selling back their

56:42

own idea to them. Well, they're paying

56:44

them to do that. They're saying, "I will

56:46

give you $500 if you if someone makes

56:48

this for me." Right? But this is what I

56:50

more or less think is going to happen

56:51

across the whole economy is that yes,

56:53

from this perspective, we can see that

56:56

everybody is suddenly empowered to build

56:58

a great business. Well, what do we think

57:00

about the folks who are going to be

57:02

displaced from the top? What are they

57:04

going to think about all these people

57:05

building all of these highly competitive

57:07

businesses? And are they going to find a

57:08

way to do, you know, what venture

57:11

capital has done or what record

57:13

producers have done? What they're going

57:15

to do is they're going to take their

57:16

superior position at the top and they

57:18

are going to take most of the wealth

57:20

that is produced by all of these people

57:22

who have these ideas that in a proper

57:24

market would actually create businesses

57:26

for them and they're going to parasitize

57:28

them. I think that we with this in

57:33

introduction of AI and AI

57:35

agents old value has moved and now it's

57:39

not going to be the case that the idea

57:40

itself is the moat and it's not going to

57:43

be the case that resources are the moat.

57:44

So in such a scenario you still have to

57:46

figure out distribution. You still have

57:48

to have for example like an audience. So

57:50

if you're a podcaster now you have a

57:51

million followers on Twitter. you're in

57:53

a prime position because you now have

57:55

something that the great guy with a

57:58

great idea with no audience has you have

58:00

inbuilt distribution. So I now think

58:01

actually much of the game might be

58:02

moving to like yeah still about taste

58:05

and idea but also the moat is

58:08

distribution. Yeah. And speaking of

58:10

adaptive systems um the one of the

58:13

adaptation that will happen is people

58:15

will seek uh humans and will seek proof

58:19

of humanity. Oh, I agree that uh

58:21

authenticity is going to become the coin

58:23

of the realm and anything that can be

58:27

faked or cheated is going to be devalued

58:30

and things you know spontaneous jazz or

58:34

you know comedy that is interactive

58:36

enough that it couldn't possibly have

58:38

been generated with the aid of AI those

58:40

things are going to become prioritized

58:42

you know spontaneous oratory rather than

58:44

speeches answers some of your questions

58:47

no it answers my question for

58:49

the tiny number of people who are in a

58:51

position to do those things. Stephen,

58:54

you used the word moat. Um, which I

58:56

think is a really important word for

58:57

entrepreneurs. We, like, have to

58:59

have a moat. We think a lot about moats

59:01

and it's an industrial age. A lot of

59:02

people don't even know what a moat what

59:04

you mean by moat. It's just I often

59:06

think about this idea of what are the

59:08

moats that are left. So to define how I

59:11

define a moat, you've got a castle and

59:12

it's got a like a small circle of water

59:15

around it. And once upon a time that

59:17

circle of water defended the castle from

59:19

attack and you can pull up the

59:20

drawbridge so nobody could attack you

59:21

very easily. It's a defense from

59:23

something. So it's your it's your

59:25

shield. It's your your defense. And once

59:27

upon a time as an entrepreneur, you

59:28

know, I've got a software company in San

59:29

Francisco called Thirdweb and we

59:31

raised almost $30 million. We have a

59:33

team of 50 great developers. And much of

59:36

our moat was you can't compete with us

59:39

if you don't have the 50 developers and

59:40

the $30 million in the bank. How much of

59:42

that 30 million went to coding? The vast

59:45

majority of it. I mean what else are we

59:46

going to do? What else we do? So this is

59:48

a good thing. I think moats are a bad

59:50

thing. Okay, let me make the argument

59:52

there. Uh so everyone is looking for

59:54

moats you know for example like one of

59:56

the more uh significant moats is network

59:59

effects. Yeah, you know, so you can't

60:02

compete with Facebook or Twitter

60:05

because to move people from Facebook or

60:08

Twitter, you need to it's the collective

60:09

action problem. You need to move them

60:11

all at once because if one of them

60:13

moves, then the network is not

60:17

valuable, they'll go back. So you have

60:19

this chicken and egg problem. Let's say

60:21

that we have a more decentralized way of

60:24

doing social networks that will remove

60:27

the power of Twitter to kind of censor

60:31

and I think you're at the other end of

60:33

of censorship, right? And so part of my

60:36

optimism about humanity is that um

60:39

generally there's self-correction.

60:41

Democracy is a self-correcting system.

60:43

Uh free markets are largely

60:45

self-correcting systems. There are

60:47

obvious problems with with free markets

60:49

that that we can discuss. But take um

60:52

health, you know, there is obesity uh

60:55

epidemic. This period of time when uh

60:59

companies, you know, ran loose kind of

61:02

making these sugary, salty, fatty kinds of

61:06

snacks and everyone gorged on them and

61:08

everyone got very uh you know,

61:10

unhealthy. And now you have Whole Foods

61:13

everywhere. Today, people in Silicon

61:15

Valley, they don't go to bars at all.

61:17

They go to running clubs. That's how you

61:19

meet. That's how you go find a date. You

61:21

go to running clubs. And so, there was a

61:24

shift that happened because there was a

61:27

reaction. Obviously, cigarettes are

61:29

another example. You know, you were

61:31

talking about phones and our addiction

61:33

to phones. And I see a shift right now

61:35

like in my uh friend circle like people

61:39

who are constantly kind of on their

61:40

phones is already kind of frowned upon

61:42

and they don't want to hang out with you

61:44

because you're constantly staring

61:45

at your phone. So there's always

61:47

these reactions. But the problem

61:49

is you reference self-correction and

61:52

I agree that there's actually an

61:54

automatic feature of the universe in

61:55

which the self-correction happens. You

61:57

can't have a positive feedback that

61:58

isn't reined in by some outer negative

62:01

feedback. But the corrections, the list

62:04

of corrections involves things like you

62:06

point to where people become enlightened

62:08

and they realize that they're doing

62:10

themselves harm with either the sugar

62:12

that they're consuming or the dopamine

62:14

traps on their phone and they get

62:16

better. But also on the list of

62:18

corrective patterns are genocide and war

62:22

and you know parasitism. And the problem

62:27

is these things are destructive of

62:30

wealth. And so you allude to

62:35

the superiority of an open market

62:38

without moats. Presumably the benefit of

62:40

that is that more wealth gets created

62:42

because people aren't kept from doing

62:44

things that are productive. I see that.

62:46

But then what is the product of all of

62:49

this new wealth that is going to be

62:50

generated by a world empowered by AI?

62:53

Does it end up so highly concentrated

62:55

that you have a tiny number of ultra

62:58

elites and a huge number of people who

62:59

are utterly dependent on them? What

63:02

becomes of those people? The learning

63:04

process, the self-correction process

63:07

goes through harm in order to get to

63:09

that more enlightened solution. There's

63:11

nothing that protects us from the harm

63:14

phase being so apocalyptically terrible

63:17

that, you know, we get to the other side

63:18

of it and we say, "Well, that was a hell

63:20

of a correction." Or maybe there's

63:21

nobody there to even say that. Those are

63:23

also on the table. It reminds me of a

63:24

mouse trap where you see the cheese and

63:26

we're going, "Oh my god, my

63:27

grandmother's going to be able to do

63:28

some research and oh my god, my life's

63:30

going to get easier." So you head closer

63:31

and closer to the cheese.

63:34

And historically, if we look at all of

63:37

the last 10,000 years, it's a very small

63:39

number of elites who own absolutely

63:41

everything and a very large number of

63:44

serfs and peasants who have a

63:46

subsistence living. You know, if the

63:48

elites are too greedy and they freeze

63:50

out the peasants at too high a level and

63:52

they try to use brutality or Yeah.

63:55

eventually it comes back to haunt them.

63:56

And so what you get is a recognition

63:59

that you you need a system that does

64:01

balance these things and you know the

64:03

West has the best system that we've ever

64:05

seen. It's one in which we agree on a

64:07

level playing field. We never achieve it

64:09

but we agree that it's a desirable thing

64:11

and the closer we get to it the more

64:13

wealth we create. But again,

64:17

if AI empowers those with ill

64:23

intent at a higher rate than it empowers

64:26

those who are wealth creating and

64:28

pro-social, we may be in for a massive

64:32

regression in how fair the market

64:34

of the West is. Is that your top

64:35

concern versus economic displacement?

64:38

And I think they're the same thing. How

64:41

are they the same thing?

64:43

because the economic displacement is

64:46

going to start. I don't know how

64:48

many million people are going to be

64:50

displaced from their jobs in the US.

64:52

Suddenly, we're going to have a question

64:54

about whether or not we have obligations

64:56

to them. And you agree with that, don't

64:58

you? Yes. But again, it's

65:01

the no pain, no gain. I mean, we're

65:02

going to go through a period of of

65:03

disruption. And I think at the other

65:05

end, the old, you know, sort of

65:08

oppressive systems will be broken and

65:10

we're going to create perhaps a fair

65:13

world, but it's going to have its own

65:15

its own problems. And what's the scale

65:16

of that disruption in your estimation?

65:19

It's hard to say because uh you know

65:22

there's this concept of limiting factors

65:24

like you know there is um regulation

65:28

there's the appetite of people to today

65:30

for example the health care system is

65:33

very resistant to innovation because of

65:35

regulation you know and that's

65:37

a bad thing. On the regulation point, it's

65:39

worth saying that when Trump came into

65:41

power he signed an executive order which is

65:44

called Removing Barriers to American

65:46

Leadership in AI which revokes previous

65:48

AI policies that were deemed to be

65:50

restrictive. And obviously when you

65:52

think about where the funding is going

65:52

in AI, it's going to two places. It's

65:54

going to America and it's basically

65:56

going to China. That's the the vast

65:58

majority of investment. So with those

66:00

two in competition, any regulation that

66:02

restricts AI in any way is actually

66:03

self-sabotage. Mh. And this is, you

66:06

know, I live in Europe some of the time

66:09

and it's already annoying to me that

66:11

when Sam Altman and OpenAI released the

66:13

o3 model, this new incredible model,

66:15

it's not in Europe because Europe has a

66:17

regulation which prevents it from coming

66:19

to Europe. So, we're now at a

66:20

competitive disadvantage

66:22

um which Sam Altman has spoken about. And

66:24

more broadly on this point of

66:26

disruption, it was I was quite unnerved

66:29

when I heard that Sam Altman's other

66:31

startup was called

66:33

Worldcoin. And Worldcoin was conceived

66:36

with the goal of facilitating universal

66:39

basic income,

66:41

i.e. helping to create a system where

66:43

people who don't have money are given

66:46

money by the government just for being

66:47

alive to help them cover their basic

66:49

food and housing needs. Which suggests

66:51

to me that the guy that has built the

66:52

biggest AI company in the world can see

66:54

something that a lot of us can't see

66:56

which is there. Yeah. There's gonna need

66:58

to be a system to just hand out money to

67:00

people because they're not going to be

67:01

able to survive otherwise. I

67:03

fundamentally disagree with that. Which

67:04

part do you disagree with? I disagree

67:06

that first of all that humans would be

67:08

happy with UBI. I think that you know

67:12

uh you know a core value of humans, and you can be

67:16

curious about the evolutionary reasons

67:17

is we want to be useful. It's really

67:19

important to know that a lot of the jobs

67:21

that are at risk are the most high

67:23

status, highly paid jobs in the world.

67:26

Let's take the highest paid job in

67:27

America um which is an

67:29

anesthesiologist. Uh this is the highest

67:32

paid job and highest paid salaried job.

67:35

Salaried job. Yeah. And the majority of

67:37

that job is observing a patient, knowing

67:40

which type of medication would work best

67:42

with their body. um giving them the

67:45

exact right amount, monitoring the

67:47

impact of that uh on the body and

67:50

then making slight adjustments. With the

67:52

right technology and any nurse will be

67:55

able to do that job. And you might have

67:57

one

67:58

anesthesiologist on

68:01

site supervising 10, 20, 30, 40 wards

68:05

and the technology is, you know, doing

68:07

the job, but that one person is there

68:09

just to kind of supervise if something

68:11

went wrong or if there was an ethical

68:12

dilemma. What's wrong with that? I mean,

68:13

if the precision is better where

68:15

they are. No, there's nothing wrong with

68:17

that except for the fact that a lot of

68:20

people, hundreds of thousands of people

68:21

have spent their entire life training to

68:23

be that, they get an enormous amount of

68:25

purpose and satisfaction about the fact

68:27

that that's their career, that's their

68:28

job. They have mortgages, they have

68:30

houses, they have status, and that's

68:33

about to go away. Well, if it's highest

68:35

paid jobs, maybe you should start

68:36

saving.

68:38

Yeah. Well, I mean, but yeah, I

68:43

hear you, but you're talking about

68:45

people who have done vital work. Mhm.

68:50

Highly specialized work and are

68:51

therefore not in a great position to

68:53

pivot pivot based on the invention of a

68:56

technology that they didn't see coming

68:58

because frankly, I mean, in the

69:00

abstract, maybe we all saw AI coming

69:02

somewhere down the road, but we did not

69:04

know that it was going to suddenly dawn.

69:06

And we do have to figure out what to do

69:08

with those people. It's not their fault

69:10

that they've suddenly become obsolete

69:12

and it's inconceivable that people will

69:16

accept this. It is not. It is

69:18

fundamentally incompatible with our

69:20

nature. We have to have things to strive

69:22

for and you know you can sustain life

69:26

that way but you cannot um sustain a

69:30

meaningful existence and so it's a

69:31

short-term plan at best. Let's talk

69:33

about meaning. Um on that point of job

69:36

displacement this is already happening.

69:38

Klarna's CEO, who has been on this podcast

69:40

before, a great guy, um said in a

69:43

blog post that they published on Klarna's

69:45

website saying that they now have AI

69:47

customer service agents handling 2.3

69:50

million chats per month, which is equal

69:52

to having to hire 700 full-time people

69:55

to do that. So, they've already been

69:57

able to save on 700 customer service

70:00

people by having AI agents to do that.
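[Quick arithmetic on that equivalence, assuming the figures quoted from the blog post:

$$\frac{2{,}300{,}000 \text{ chats/month}}{700 \text{ agents}} \approx 3{,}286 \text{ chats per agent per month} \approx 150 \text{ per working day,}$$

which is within the plausible range for a full-time chat support agent handling multiple conversations at once, so the "700 full-time people" figure is consistent with the stated chat volume rather than a separate claim.]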

70:03

And they actually got rid

70:04

of those 700 jobs, right? I don't have

70:07

that information in front of me, but

70:08

I'll have a look. Um, I'll throw it up

70:10

on screen for anyone that wants context

70:12

on that. But that's already happening.

70:13

This isn't some hypothetical or something.

70:15

And these aren't high paid people in

70:17

every case. We've done something

70:19

similar, by the way. We internally

70:20

we've replaced that function by 70%.

70:26

Yeah. I mean, our company, we're 65

70:29

people and, you know, we um, you know,

70:32

we make, you know, millions per

70:34

head, you know. So, it's a Are you going

70:36

to need to hire more people to get up to

70:38

I think so, but we're we're hiring

70:40

slowly, like, you know, we're we're

70:41

using uh customer support, AI, and that

70:44

meant that we we need less uh customer

70:47

support, and we're trying to leverage AI

70:50

as much as possible. The person in

70:53

HR at Replit writes software using

70:56

Replit. So, I'll give you an example.

70:57

She needed um org chart software and

71:01

she looked at a bunch of them, got a lot

71:03

of demos and they're all very expensive

71:06

and they're missing the kind of

71:08

features that she wanted. For example, she

71:09

wanted like version control. She wanted

71:11

to know when when something changed and

71:13

to go back in history. She went into

71:15

Replit and in 3 days she got exactly the

71:18

kind of software that she wanted and

71:20

what was the cost you know perhaps $20

71:22

you know something like that, $20, $30

71:24

once right and um how many employees in

71:27

HR do we need right now we have two uh

71:30

if they're highly levered like that

71:33

maybe we do not need a 20-person HR team on

71:37

this point of

71:38

meaning I've heard so many billionaires

71:41

in AI describe this as the age of

71:42

abundance and I'm not necessarily sure

71:44

if abundance is always a great thing

71:47

because you know when we look at mental

71:50

health and we look at how people

71:53

derive their meaning and their purpose

71:54

in life much of it is having something

71:56

to strive towards and some struggle in a

71:58

meaningful direction to you and this is

72:00

maybe adjacent but there was a

72:03

study done I think it was in Australia

72:04

where they looked at suicide letters and

72:06

in the suicide letters the sentiment of

72:08

men in those suicide letters was they

72:11

didn't

72:12

feel worthy they didn't feel like they

72:14

were worth it. They didn't feel like

72:16

they were needed by their families. And

72:19

this is much of what caused their

72:20

psychological state. And I wonder in a

72:23

world of abundance where we, you know, a

72:25

lot of these AI billionaires are telling

72:26

us that we're going to have so much free

72:28

time and we're not going to need to

72:30

work. If there is at all going to be a

72:31

crisis of meaning, a mental health

72:33

problem. I mean, there already is. And

72:36

it doesn't require AI and it's going to

72:37

get worse. I don't know what to do about

72:39

it because essentially as human beings

72:42

we are built like all organisms to find

72:47

opportunity and figure out how to

72:50

exploit it. That's what we do. And the

72:52

world you're describing is really the

72:53

opposite of that. It's one where you're

72:55

effectively having your biological needs

72:59

at the physiological level satisfied and

73:02

there isn't an obvious place for your

73:06

spare time if that's what you end up

73:08

with to be utilized in something that

73:12

you know there's no place to strive and

73:14

I do imagine almost at best what would

73:17

happen is you have people who are being

73:21

sustained by a universal basic income

73:24

and then parasitized

73:26

uh you know whatever currency they have

73:29

to spend somebody will be targeting it

73:30

and they will be targeting it with a an

73:32

AI augmented system that spots their

73:36

defects of character. I mean, again,

73:38

we're already living in this world, but

73:40

it will be that much worse when the AI

73:42

is figuring out, you know, what kind of

73:45

porn to target you with specifically.

73:47

That's uh it's a nightmare scenario. And

73:50

I do think it would be worth our time as

73:54

a species to start considering if we are

73:57

about to find ourselves in this

73:58

situation and we find some way of

74:00

dealing with the basic needs of the

74:02

large number of people who are going to

74:04

be

74:04

sidelined. What would a world have to

74:07

look like in order for them to have real

74:09

meaning? Not pseudo meaning, not

74:10

something that you know superficially,

74:12

you know, a video game is not meaning

74:14

even if it feels very meaningful in the

74:16

moment. I think that would be a

74:19

worthy investment for us to figure out

74:20

how to produce it. But frankly, I'm not

74:22

expecting us to either have that

74:24

conversation or get very far down that

74:27

road. I think it's much more likely that

74:29

we will squander the wealth dividend

74:32

that will be produced by by AI.

74:34

Interestingly, you also see in Western

74:36

countries that when we get more

74:37

abundance, we start having less kids.

74:40

And we're already seeing this sort of

74:41

population decline in the Western world,

74:43

which is kind of scary. I think it's

74:46

often associated with affluence like the

74:48

more money someone makes the less likely

74:49

they are to want to have

74:50

children the more they try and protect

74:52

their freedoms. But also on this point

74:53

of

74:54

AI relationships are hard you know my

74:58

girlfriend is happy sometimes and not

75:01

happy other times and I have to like you

75:04

know go through that struggle with her

75:05

of like working on the relationship.

75:07

Children are hard and if we are

75:10

optimizing ourselves and you know much

75:12

of the reason that I sustain the

75:13

struggle with my girlfriend is I'm sure

75:15

from some evolutionary reason because I

75:16

want to reproduce and I want to have kin

75:19

but if I didn't have to deal with the

75:23

struggle that comes with human

75:24

relationships romantic or platonic

75:26

there's going to be a proportion of

75:28

people that actually choose that outcome

75:29

and I wonder what's going to happen to

75:30

birth rates in such a scenario because

75:32

we're already struggling. We're already

75:34

in a situation where we used to be

75:35

having five children per woman in the

75:40

1950s to about two in

75:45

2021. And we're seeing a decline. If you

75:47

look at South Korea, their fertility

75:48

rate has fallen to 0.72, the lowest

75:51

recorded globally. And if this trend

75:53

continues, the country's population

75:55

could halve by 2100.
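[A rough check of that projection, assuming the quoted rate and ignoring migration: with a total fertility rate of 0.72 and about half of births being girls, each generation of potential mothers is roughly

$$\frac{0.72}{2} = 0.36$$

times the size of the one before it, so over the roughly two and a half generations between now and 2100 the birth cohort shrinks to about $0.36^{2.5} \approx 8\%$ of today's. Because older cohorts die off only gradually, a halving of total population by 2100 is consistent with, and arguably conservative next to, that arithmetic.]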

76:00

So yeah, relationships, connections, and

76:03

and also I guess I guess we've got to

76:04

overlay that with the loneliness

76:05

epidemic, which

76:07

is they promised us social connection

76:09

when social media came about, when we

76:12

got Wi-Fi connections, the promise was

76:13

that we would become more connected. But

76:15

it's so clear that because we spend so

76:17

long alone, isolated, having our needs

76:19

met by Uber Eats drivers and social

76:21

media and Tik Tok and the internet, that

76:23

we're investing less in the very

76:25

difficult thing of like going and making

76:27

a friend and like going and finding a

76:29

girlfriend. Young people are having sex

76:30

less than ever before. Everything that

76:33

is associated with the difficult job of

76:36

making in real life connection seems to

76:39

be um falling away.

76:41

I will make the case that everything

76:43

that we've discussed here, all the

76:46

negative things around loneliness, um

76:48

around meaning, they're already here.

76:52

And I don't think blaming technology for

76:55

all of it is is the right thing. Like I

76:57

think there are a lot of things that

76:59

happened because of existing human uh

77:04

you know, impulses and and motivations.

77:07

Um well I I wanted to go back to where

77:10

you started because I do think that this

77:12

maybe is the fundamental question. Why

77:15

is it that we are already living in a

77:17

world that is not making us happy? And

77:19

is that the responsibility of

77:21

technology? And I don't think it's

77:22

exactly technology. Human beings uh

77:25

among our gifts are fundamentally

77:27

technological whether we're talking

77:28

about quantum computing or

77:32

flint-knapping an arrow

77:35

head. What has happened to us that has

77:38

created the growing, spreading, morphing

77:42

dystopia is a process that Heather and I

77:45

in our book, A Hunter-Gatherer's Guide to

77:47

the 21st Century, call hyper

77:49

novelty. Hyper-novelty is the fact of

77:54

the rate of change outpacing our

77:57

capacity to adapt to change. And we are

78:00

already well past the threshold here

78:02

where the world that we are young in is

78:05

not the world that we are adults in. And

78:07

that mismatch is making us sick across

78:10

multiple different domains. So the

78:13

question that I ask is: is the change

78:16

that you're talking about going to

78:18

reduce the rate of change in which case

78:21

we could build a world that would start

78:23

meeting human needs better and open

78:25

opportunities for pursuing meaningful

78:27

work. or is it going to accelerate the

78:30

rate of change which is in my opinion

78:32

guaranteed to make us worse off. So if

78:35

it was a one-time shift, right, AI is

78:38

going to dawn. It's going to open all

78:39

sorts of new opportunities. There's

78:41

going to be a tremendous amount of

78:42

disruption, but from that we'll be able

78:44

to build a world. Is that world going to

78:46

be stable or is it going to be just, you

78:48

know, one event horizon after the next?

78:51

If it's the latter, then it effectively

78:54

tells us what it does to the humans, which

78:56

is it's going to dismantle us. When I

79:00

look out at society, I I go, okay, it's

79:02

having a negative impact. When I look at

79:04

um individual use cases, it's having a

79:07

profoundly positive impact. Including

79:09

for me, it's having a very positive

79:10

impact. So, it's one of these

79:13

things where I wonder what

79:15

is it that we need to teach people at

79:17

school so that they understand the world

79:20

that we're going into? Because one of

79:21

the biggest issues that we're having is

79:24

that we're sending kids to school with

79:26

this blueprint, this template that

79:28

they're going to have this long arc

79:30

career that no longer exists that

79:33

essentially we're treating them like

79:34

learning LLMs. And we're saying, "Okay,

79:37

we're going to prompt you. You're going

79:38

to give us the right answer. You're

79:40

going to hallucinate it if possible. And

79:42

you know, and and then we go, "Okay, now

79:44

go off into the world." And they go,

79:45

"Oh, but wait a second. I don't know how

79:47

money works. I don't know how society

79:48

works. I don't know how my brain works.

79:50

I don't know how I meant to handle this

79:52

novelty problem. I'm not sure how to

79:54

approach someone in a in a social

79:56

situation and ask if they want to go on

79:57

a date." Um so all the important things

80:00

that actually are the important

80:01

milestones that people want to be able

80:03

to hit and that technology can actually

80:06

have an impact on we get no user manual.

80:09

So I think one of the biggest things

80:10

that has to happen is we have to equip

80:14

uh young people all through school

80:17

to actually prepare them for the world

80:18

that's coming or the world that's here.

80:22

Well on the one hand I think you

80:25

outline the problem very well.

80:27

effectively we have a model of what

80:29

school is supposed to do that you know

80:30

at best was sort of a match for the 50s

80:33

or something like that and it woefully

80:36

misses the mark with respect to

80:38

preparing people for the world they

80:39

actually

80:41

face if we were going to prepare them I

80:44

would argue that the only toolkit worth

80:46

having at the moment is a highly general

80:50

toolkit the capacity to think on your

80:53

feet and pivot as things change is the

80:55

only game in town with respect to our

80:57

ability to prepare you in advance. Maybe

81:00

the the other auxiliary component to

81:02

that would be teaching you what we know

81:05

which is frankly not enough about how to

81:08

live a healthy life. Right? If we could

81:11

if we could induce people into the kinds

81:14

of habits of behavior and the

81:18

consumption of food and then train them

81:20

to think on their feet, they might have

81:22

a chance in the world that's coming. But

81:24

uh the fly in the ointment is we don't

81:28

have the teachers to do it. We don't

81:29

have people who know. And that is the

81:32

question is could the AI actually be

81:34

utilized in this manner to actually

81:38

induce the right habits of mind for

81:40

people to live in that world. I

81:42

spent a lot of time in education

81:44

technology. One thing that is as we say

81:47

on the internet a black pill about

81:49

education in general, education

81:51

intervention is there's a lot of data

81:54

that shows that there are very few

81:57

interventions you can make in education

81:59

to generate better outcomes. Um and so

82:03

you know uh there's been a lot of

82:05

experiment around pedagogy around you

82:07

know how to configure the

82:08

classroom that have resulted in very

82:11

marginal improvements. There's only one

82:14

intervention and this this has been uh

82:16

reproduced many times that creates two

82:19

sigma two standard deviation

82:22

uh positive outcomes in education

82:24

meaning you're better than 99%

82:27

of everyone else and that is one-on-one

82:31

tutoring I thought so I was going to say

82:33

smaller classrooms and personalization

82:34

one-on-one tutoring yeah but by

82:36

the way, if you look, someone also did a

82:38

survey of all the geniuses the

82:40

understanders of the world and found

82:42

that they all had one-on-one tutoring.

82:44

They all had someone in their lives that

82:46

took interest in them and tutored them.
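[A small statistical footnote on the "99%" figure, assuming normally distributed outcomes: two standard deviations above the mean is

$$\Phi(2) \approx 0.977,$$

roughly the 98th percentile, so "better than 99% of everyone else" slightly overstates the classic two-sigma tutoring result (Bloom's 1984 "2 Sigma Problem"), though the substance of the claim stands.]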

82:48

So, what can create one-on-one tutoring

82:51

opportunity for every child in the

82:53

world? AI. AI. My kids use it and it's

82:56

incredible. Yeah. As in like um they're

82:59

interacting and and it's adapting to

83:01

their speed. Yes. And um it's giving

83:04

them different analogies to work with.

83:06

So, like, you know, my son was learning

83:08

about division and it's asking him to

83:10

smash glass and how many pieces he

83:13

smashes it into with this hammer and,

83:15

you know, and it's saying things like,

83:16

"No, Xander, go for it. Really smash

83:18

it." And um and he's loving it, right?

83:21

Is that synthesis? Yeah. Yeah. I'm an

83:23

investor in this company. Oh, well, it

83:24

was it was it's great to watch that

83:27

simulated one-on-one tutoring because

83:29

it's talking to him. It's asking him

83:31

questions. Brett, you're an educator.

83:33

you uh spent much of your life teaching

83:35

people in universities. How do you

83:38

receive all of this? Well, on the one

83:40

hand, I agree that the uh the closer to

83:43

one to one you get, the better. But I

83:46

also personally believe that zero to one

83:50

is best. And what I mean by that

83:54

is part of what's gone wrong with our

83:56

educational system is that it is done

84:01

through abstraction.

84:03

And effectively the arbiter of whether

84:07

you have succeeded or failed in learning

84:09

the lesson is the person at the front of

84:11

the room. And that's okay if the person

84:14

at the front of the room is truly

84:15

insightful. And it's terrible if the

84:18

person at the front of the room is

84:19

lackluster, which happens a lot. So what

84:23

doesn't work that way is interaction

84:26

with the physical world in which nobody

84:28

has to tell you whether you've succeeded

84:29

or failed. If you're faced with an

84:31

engine that doesn't start, you can't

84:34

argue it into starting. You have to

84:36

figure out what the thing is that has

84:37

caused it to fail, and then there's a

84:40

great reward when you alter that thing

84:42

and suddenly it fires up. So, I'm a big

84:45

fan of being as light-handed as possible

84:49

and as concrete as possible in teaching.

84:51

In other words, uh, when I've done it,

84:53

and not just with students, but with my

84:55

own children, I like to say as little as

84:58

possible, and I like to let physical

85:00

systems tell the person when they've

85:03

succeeded or failed. And that creates an

85:06

understanding. You can extrapolate from

85:08

one system to the next. And you know

85:10

that you're not just extrapolating from

85:11

one person's misunderstanding. You're

85:13

extrapolating from the way things

85:15

actually work. So, I don't know if AI

85:18

can be leveraged in that context. My

85:21

sense is there's probably a way to do

85:22

it, but one would have to be deliberate

85:24

about it, especially with robotics and

85:26

humanoid robots. Actually, that is that

85:28

is the place uh where where you can do

85:31

this is with robotics that

85:34

um it seems to me. Yeah. Well, robotics

85:38

will teach you the physical computing

85:40

part of it. And then the question is how

85:42

do you infuse this with AI so that, um,

85:45

you know, it provokes you out

85:49

of some eddy where you're caught and

85:51

moves you into the ability to solve some

85:53

next level problem uh that you you

85:55

wouldn't have found on your own. What do

85:57

you think should be taught in

85:58

the classroom with everything that you

86:00

now know? Well, you're all fathers here.

86:03

You all have your own children. So, it's

86:05

a good question for you. How old are

86:08

your kids? How old are your kids? Uh

86:09

three and five. 19 and 21 and six,

86:13

seven, and 10. My children are very

86:15

young, but uh we already do use AI and I

86:18

sit down with them in front of Replit

86:19

and we generate ideas and make

86:21

games. And um I would say, you know,

86:23

what Brett said about generality is very

86:25

important. The ability to pivot and kind

86:27

of learn skills quickly. Being

86:30

generative is very very important.

86:33

Having a you know a fast pace of

86:35

generating ideas and iterating on those

86:37

ideas. We sit down in front of ChatGPT

86:41

and my kid imagines a scenario. Oh, what

86:43

if you know there's a there's a cat on

86:45

the moon and then you know what if the

86:47

moon is made of cheese and what if

86:49

there's a mouse inside it or and so we

86:51

keep generating these um variations of

86:55

these different ideas and I and I find

86:57

that you know makes them more

86:58

imaginative and and creative. Uh rule

87:01

number one that I tell my kids is stay

87:04

away from porn at all costs. I'd rather

87:07

you have a drug problem than a porn

87:08

problem. And I actually mean that. I

87:10

think it's I think porn is more

87:11

dangerous to the to the human being as

87:13

as bad as a drug problem is. But when we

87:16

get to the question of how to confront

87:18

the world and uh the things that you're

87:20

going to be um expected to to do in the

87:23

workplace and all of that, my point to

87:26

them is you are facing the uh the

87:30

dawning of the age of complex systems

87:36

that you are going to have to interact

87:37

with. And in the age of complex systems,

87:39

you have to understand that you cannot

87:42

blueprint a solution. And you have to

87:44

approach these systems with an upgraded

87:49

toolkit of humility because the ability

87:52

of the system to do something you don't

87:54

predict is much greater than that of a highly

87:55

complicated system. So you have to

87:58

anticipate that and be very sensitive to

87:59

the fact that what you intended to

88:02

happen is not what's going to happen. So

88:04

you have to monitor the unintended

88:06

consequences of whatever your action is

88:08

and that there are really two tools

88:10

which work. One of which you just

88:12

mentioned which is the prototyping. You

88:14

prototype things. You don't imagine that

88:16

I know the solution to this and I'm

88:18

going to build it. You imagine I think

88:19

there's a solution down there. I'm going

88:21

to make a proof of concept and then I'm

88:23

going to discover what I don't know and

88:25

I'm going to make the next version.

88:26

Discover what I don't know and

88:27

eventually you may get to something that

88:29

actually truly accomplishes the goal. So

88:32

prototyping is one thing.

88:34

And also instead of using the blueprint

88:36

as the metaphor in your mind uh navigate

88:40

you can navigate somewhere. And you know

88:42

that the way I think of it is a surfer

88:47

is in some ways mastering a complex

88:50

system but they're not doing it by

88:52

planning their way down the

88:54

waves. You can't do that. What you can

88:56

do is you can be expert at absorbing

88:59

feedback and navigating your way down

89:01

the wave. and that that's the right

89:03

approach for a complex system. Nothing

89:04

else is going to work. And so I guess

89:07

the final piece is uh general tools

89:10

always no specialization. This is this

89:13

is the age of generalists and um invest

89:16

in those tools and they will pay.

89:19

So the guiding philosophy for me is uh

89:21

to produce high agency generalists. So

89:24

um ultimately I want them to be

89:26

motivated self-starters and have a wide

89:29

general toolkit. I imagine them very

89:31

much what you imagine which is

89:33

instructing robots, instructing agents,

89:36

coming up with ideas. Um, and I imagine

89:39

them having a very high velocity life

89:41

where they may be writing a book,

89:43

organizing a festival, having a podcast,

89:46

starting a business, and being part of

89:47

somebody else's business all at once as

89:49

they are of the ADHD. Yeah. Right.

89:51

Exactly. Um, so the high agency

89:54

generalist is the kind of guiding

89:56

philosophy. Some of the things that we

89:58

do is like we do chess, we do Brazilian

90:00

jiu-jitsu, we do dancing, we do acting

90:03

classes, playing in nature, uh

90:05

entrepreneurship, understanding that you

90:07

can start a lemonade stand. We just did

90:08

lemonade stands which was amazing. We

90:11

sold lots of lemonade on the street. So

90:13

those kind of things and jumping from

90:15

one thing to the next thing, but also

90:17

trying to avoid too many screens and

90:19

forcing them into making stuff from

90:22

what's going on around the house. Um,

90:25

some distinctions that we try and give

90:26

them is the difference between creating

90:28

and consuming because I think AI has

90:30

this superpower of making you a hyper

90:32

consumer or a hyper creator. Um, and if

90:34

you don't understand the distinction

90:36

between creation and consumption, you

90:38

end up falling into the consumption

90:39

trap, whether it be porn or just news or

90:44

um, you know, things that feel

90:45

like you're productive, but you're

90:47

actually just consuming stuff. Won't

90:48

that be the most successful AI? The one

90:51

that plays with my dopamine the most.

90:54

Yeah. And makes you

90:56

think that you're achieving something

90:58

when you're actually just consuming

91:00

something. So trying to give them the

91:02

understanding that there is this

91:04

difference in their life between

91:05

creation and consumption and to be on

91:07

the creation side. I started my first

91:09

business at 12 years old and I started

91:11

more businesses at 14, 15, 16, 17 and

91:14

18. And at that time, what I didn't

91:17

realize is that being a founder with no

91:19

money meant that I also had to be the

91:21

marketer, the sales rep, the finance

91:23

team, customer service, and the

91:25

recruiter. But if you're starting a

91:27

business today, thankfully, there's a

91:29

tool that wears all of those hats for

91:31

you. Our sponsor today, which is

91:33

Shopify. Because of all of its AI

91:35

integrations, using Shopify feels a bit

91:37

like you've hired an entire growth team

91:40

from day one, taking care of writing

91:42

product descriptions, your website

91:44

design, and enhancing your products

91:46

images, not to mention the bits you'd

91:48

expect Shopify to handle, like the

91:50

shipping, like the taxes, like the

91:51

inventory. And if you're looking to get

91:53

your business started, go to

91:56

shopify.com/bartlet and sign up for a $1

91:59

per month trial. That's

92:01

shopify.com/bartlet.

92:04

The thing that I think we all agree on

92:07

is that this is inevitable. Do you agree

92:09

with that, Brett? I think it's sad that

92:12

it is inevitable, but at this point it

92:14

is. What part of it do you find sad?

92:18

We have squandered a long period of

92:24

productivity and peace in which we could

92:27

have prepared for this moment. And our

92:31

narrow focus on competition

92:36

has created

92:38

a fragile world that I'm afraid is not

92:42

going to survive the disruption that's

92:43

coming. And it didn't have to be that

92:45

way. This was foreseeable. I mean,

92:47

frankly, the movie 2001, which came out

92:51

the year before I was born, anticipates

92:54

some of these problems. And you know we

92:58

treated it too much like, I

93:00

mean like entertainment and not enough

93:03

like education. So we are now you know

93:07

we've had the AI era opened without a

93:11

discussion about its implications for

93:12

humanity. There is now for game

93:14

theoretic reasons no way to slow that

93:18

pace because as you point out if we

93:20

restrain ourselves we simply put the AI

93:23

in the hands of our competitors. That's

93:24

not a solution. So, I don't advocate it,

93:27

but there's a lot more preparation we

93:29

could have done. We could have

93:30

recognized that there were a lot of

93:31

people in jobs that were uh about to be

93:34

obliterated and we could have thought

93:36

deeply about what the moral implications

93:39

were and what the solutions at our

93:42

disposal might have been. And having not

93:45

prepared, it's going to be a lot more

93:46

carnage than it needed to be. Amjad, I

93:49

heard you say a second ago that what we

93:50

should be talking about is how we deal

93:51

with job displacement. Do you have any

93:54

theories if you were prime

93:56

minister or president of the world and

94:00

your job was to deal with job

94:01

displacement let's just say in the

94:03

United States how would you go about

94:05

that the first thing I would do is uh

94:09

teach people about these systems whether

94:11

it's um programs on the TV or

94:16

outreach or what have you just trying

94:18

to get people to understand how ChatGPT

94:21

works how these algorithms work

94:24

and as the new jobs arrive um I think

94:28

you know there's going to be an

94:29

opportunity for people to be able to

94:31

detect that you know this job

94:34

requires this set of skills and I

94:37

have this kind of experience and

94:39

although my experience is potentially

94:41

outdated I can repurpose that experience

94:43

to do that job I'll give you an example

94:46

a teacher his name is Adil Khan you know

94:49

he started using, at the time, GPT-3 and uh

94:53

felt like it does amazing work as a

94:55

tool for teachers or even potentially a

94:58

teacher itself. So he learned a little

95:00

bit of coding and he went to

95:01

Replit and he built uh this company

95:04

and um just two years later they're

95:07

worth hundreds of millions of dollars.

95:08

Obviously, not everyone will be able to

95:10

create businesses of that scale, but

95:13

because you have an experience in a

95:15

certain domain, you'll be able to build

95:18

the next iteration of that using

95:22

technology. So, even if your job was

95:23

displaced, you'll be able to figure out,

95:27

you know, what

95:30

potentially comes after that. So I

95:32

I think people's expertise that they

95:35

built, I don't think it's all for

95:37

waste. Even if your job went away,

95:41

you can never really predict what jobs

95:42

are coming. I mean, I think of this

95:44

crazy situation where I tell my

95:47

grandfather, what is a personal fitness

95:50

trainer? And he would his mind would be

95:53

blown by this idea that well, okay, I

95:56

don't really want to go to the gym, so I

95:58

have to make an appointment and pay

95:59

someone to go to the gym and meet with

96:01

me there. And then he stands there and

96:02

tells me to lift heavy things that I

96:04

don't really want to lift. And then he

96:06

counts them and tells me that I've done

96:08

a good job and then I put the heavy

96:10

things down and then at the end of that

96:12

I feel really good and I pay him a bunch

96:14

of money. My grandfather would be like

96:16

what on earth have you been scammed? Is

96:18

this? So we can never predict what

96:21

this uh future of jobs would look like.

96:23

Even just 20, 30, 40 years apart, the jobs

96:26

rapidly and convincingly just morph into

96:29

something else. I think it's very

96:31

dangerous the idea that we need to focus

96:34

on skills. I think the future is not in

96:36

skills. Skills are being replaced. It's

96:38

this idea that the education system has

96:40

to stop being compartmentalized and has

96:42

to be a lifelong learning approach. The

96:44

Department of Education needs to be

96:47

seeing people as lifelong learners who

96:49

are constantly disrupted and need

96:51

re-education. Interesting. That that's

96:53

going to be a thing. The Department of

96:54

Education needs to start as a kid and go

96:58

right through to maybe 70. Does the

97:00

Department of Education have a role

97:01

anymore at all? Depends on your

97:03

definition of education. I think if

97:04

you're trying to teach kids or if you're

97:06

trying to teach kids to, you know,

97:09

remember facts and figures from a

97:11

history book, then no. But if it's about

97:14

coaching, mentoring, being displaced,

97:16

finding the next thing, and maybe if

97:18

it's AI-driven and all of those kinds of

97:20

things, then it's a different paradigm

97:22

shift around what education is and what

97:23

its purpose is. And if we see it as a

97:26

fluid thing where we weave into an

97:28

opportunity and then weave back into

97:30

education, spotting a new opportunity

97:32

and then back here. If we're learning

97:35

not skills but

97:36

tools. So it's a tools-based education

97:39

as opposed to a skills-based education.

97:41

The purpose of education for most of

97:43

human history was about virtue, about

97:45

becoming a great person who had good

97:47

judgment and who had good values. And we

97:49

don't really do much of that anymore.

97:51

But I think if we

97:53

get back to it and ask the

97:56

question what is the purpose of

97:57

education and where does it fit in our

97:58

lives and at what time frame does it go

98:01

for and then we just trust that people

98:04

are going to come up with weird and

98:06

wonderful jobs. You know this sounds

98:09

crazy but also and this is a weird

98:13

analogy. My cat is incredibly happy. How

98:17

do you know? Well, it demonstrates

98:20

all the characteristics of being a

98:22

happy cat and it lives in a world of

98:25

super intelligence as far as it's

98:27

concerned. So, there's this house and

98:29

food just magically happens. It has no

98:31

idea that there's this Google calendar

98:33

that runs a lot of things that happen

98:35

around it. The food gets delivered. The

98:38

money is magically made by something

98:40

that is inconceivably more intelligent

98:42

than the cat. And yet the cat has

98:44

evolved to be living this life of

98:46

purpose and meaning inside the house.

98:48

And as far as it's aware, it's got a

98:50

great life. But you have the power at

98:53

any moment if you're having a bad day to

98:55

do something not so pleasant to that

98:57

cat. And it can't really reciprocate

98:59

that. Exactly. But but what's in it for

99:02

me to hurt the cat?

99:04

Because, in this analogy, you might

99:08

want to move house and the landlord

99:09

doesn't allow cats. So you've got a

99:11

decision to make. Yeah, that there are

99:13

things that the cat is highly disrupted

99:15

by due to no fault of the cat. I get it.

99:17

But as far as cat existence goes and the

99:20

history of cats, if you were to

99:23

ask that cat, do you want to trade

99:25

places with any of the other cats that

99:27

came before you? It would probably say,

99:28

I don't want to take the risk because

99:30

all the other cats had to fend for

99:32

themselves in a way that I don't have

99:33

to. It's very possible that we end

99:35

up living a life a lot like the house

99:38

cat in the sense that from our

99:41

perspective we're, like,

99:44

having very interesting lives and

99:46

purpose and meaning, and there's this

99:48

massive higher intelligence that's just

99:50

running stuff and we don't know how it

99:53

works but it doesn't really matter

99:54

how it works. We are the

99:57

beneficiaries of it and it's doing

100:00

important things and we're enjoying

100:01

being house cats in its life.

100:03

I have a few things to say about this.

100:04

One, I'm pretty sure your cat's not as

100:06

impressed with your capacity as you are

100:10

or as you think he is. Um I just know

100:12

cats well enough to be pretty sure of

100:14

that. But oh, it looks down on me. Yeah,

100:15

you're right. I think it's a

100:17

fair point that there is an

100:19

existence and actually, you know, pets

100:21

really do have it. If they have loving

100:22

owners, they really do have it pretty

100:24

great. And I would also point out that

100:25

there's a way in which we already are

100:27

this way. Most of us do not understand

100:30

the process that results in electricity

100:33

coming out of the walls of our house or

100:35

the water that comes out of the tap. And

100:37

we're pretty much okay with the fact

100:39

that somebody takes care of that and we

100:41

can busy ourselves with whatever it

100:42

might be. But the place that I find

100:45

something troubling in your description

100:48

is that you say that the nature of what

100:51

we do is to deal with the fact that jobs

100:54

are always being upended. That's a very

100:57

new process. That is the hyper-novelty

101:00

process. It used to be that it was only

101:03

very rarely that a population had a

101:06

circumstance where you didn't

101:07

effectively do exactly what your

101:09

immediate ancestors did. Right? Um, in

101:12

general, you took what the jobs were,

101:15

you picked something that was suited to

101:18

you, and you did that thing

101:20

intergenerationally.

101:21

Intergenerationally. And the point is,

101:23

we've now gotten to the point where even

101:26

within your lifetime, what is possible

101:29

to get paid for is going to shift

101:31

radically in ways that nobody can

101:33

predict. And that is a dangerous

101:35

situation. Like probably every two

101:37

years, like two or three years, right?

101:39

And so maybe there's some model by which

101:41

we can surf that wave and you can learn

101:43

a generalist toolkit and you know that

101:46

your survival doesn't depend on your

101:49

being able to you know switch up every

101:51

two years and never miss a beat or maybe

101:54

we can't but I do think it is worth

101:56

asking the question if the rate of

102:00

technological change has taken us out of

102:03

the normal human circumstance of being

102:07

able to deduce what you might do for a

102:09

living based on what your ancestors did

102:11

and put us in a situation where what

102:13

your ancestors did is going to be

102:15

perfectly irrelevant no matter what. But

102:17

that is effectively a choice that has

102:19

been made for us. And we could choose to

102:22

slow the rate of change so that we would

102:26

live in some kind of harmony where our

102:29

developmental environment and our adult

102:32

environment were a match. Now, as a

102:33

biologist, I would argue if we don't do

102:35

something like that, this is a matter of

102:38

time. Yeah. How would we change? How do

102:40

we slow the rate of change? Well, I I

102:42

mean, you can be the

102:43

Amish, right? You can be the Amish and

102:45

live in your own communities, and I

102:47

would assume some people would

102:49

want that. Well, I'm you know, when

102:52

Heather and I wrote our book, I wanted

102:54

the first chapter to be, are the Amish

102:56

right? And the answer is they can't be

102:59

exactly right because they picked an

103:01

arbitrary moment to step off the

103:03

escalator. But are they right that

103:04

there's something dangerous about this

103:06

continuing pattern of technological

103:07

change? Clearly they are. What do the

103:09

Amish do for anyone that doesn't know?

103:11

The Amish live as if it was what 1850 or

103:18

something. So they don't

103:21

use cars. I think they do have

103:23

phones but they do not have electricity.

103:27

Basically, they voluntarily accept a

103:30

technological limit; they're basically a tech-lite community

103:33

and they uh have turned out to fare

103:37

surprisingly well against many of the

103:39

things that have upended modernity. One of

103:42

them, right? Yeah, COVID. They did beautifully.

103:44

quite happy people very low autism rates

103:46

they have all sorts of advantages

103:48

so anyway I'm not arguing that we should

103:50

live like the Amish I don't see that but

103:51

I do think the idea that they had an

103:53

insight which was you need to step off

103:56

that escalator because you're just going

103:57

to keep making yourselves sicker is

103:59

probably right. Now, maybe this is a

104:02

one-time shift. We've stepped over the

104:05

event horizon. We are going to be living

104:07

in the AI world. And maybe if we're

104:09

careful about it, we can figure out how

104:11

to turn that landscape of infinite

104:14

possibility that you're describing into

104:17

a place that doesn't change. That you

104:20

always have the opportunity to decide

104:23

what needs to be done. But that

104:26

living over that event

104:28

horizon is not an ever-changing process.

104:31

It's just the next frontier. I do want

104:33

to also propose or ask the question when

104:36

we talk about our hyperchanging world.

104:38

Isn't it harder for older people to

104:41

learn because of the way that the

104:43

brain works in terms of processing speed

104:44

and memory flexibility?

104:47

So I was wondering if you're going to

104:48

get a situation where like my father,

104:50

because of his brain and the reduced

104:52

memory flexibility and processing speed

104:54

that happens when you're older is going

104:55

to struggle significantly more than my

104:57

niece who can seem to learn I mean my

105:00

niece knows five languages and she's

105:01

seven or something crazy like that five

105:04

languages but I mean the brain is much

105:05

more plastic isn't it? And

105:07

that goes back to our evolutionary

105:09

psychology, our

105:10

evolutionary history, which you know much

105:11

more about than I do: we're meant to

105:14

learn our lessons when we're young and use

105:16

that information for a lifetime. But if

105:18

that information is changing quickly,

105:20

well, that's I mean this is exactly what

105:21

I'm pointing to. It is not normal for

105:24

your developmental environment to fail

105:27

to prepare you for your adult

105:29

environment. The normal thing is as a

105:31

young person, you take on ever more of

105:34

the responsibilities of the adult

105:37

environment. And then at some point, you

105:40

know, in a properly functioning culture,

105:42

there's a rite of passage. You go into

105:44

the bush for 10 days, you come back

105:46

with, you know, a large uh, you know,

105:49

game animal and now you're an adult and

105:52

you take that program that you've been

105:53

building and you activate it. And that

105:55

is normal. And, you know, you're a lot

105:59

happier person. You're a lot more

106:00

fulfilled if your life has that kind of

106:02

continuity to it. And you know, I'm not

106:05

against the idea that we have enabled

106:07

ourselves to do things that can't be

106:09

done if that's the limit, but we

106:13

have also harmed ourselves gravely. And

106:16

I would like to somehow pry apart our

106:21

ability to improve our well-being from

106:26

our self-inflicted wounds that come from

106:29

this

106:30

never-ending pace of change. And I don't

106:32

know if it's possible, but I think it's

106:34

a worthy goal. Something amusing. I

106:36

don't know if it's exactly a

106:37

counterpoint, but um during COVID

106:41

especially and you know through the

106:44

recent technological change uh some

106:47

people have started living closer to the

106:50

more ancestral environment. Um so uh

106:54

people whose jobs are online, some of my

106:58

friends like went and built communities

107:00

like collectives where they you know

107:02

live and they create farms and they

107:05

eat, and they have

107:07

like an email job. They do their email

107:09

jobs for five hours and go out and

107:11

they all have children and it's a

107:14

fascinating life. And there was so much

107:16

rethinking in Silicon Valley about

107:18

how we live. And there's a bunch of

107:19

startups that are trying to create um

107:21

cities where they're like, okay, we know

107:25

that we're suffering because our

107:28

cities are not really walkable. And

107:31

there's so many reasons why we're

107:32

suffering. First, we're not getting the

107:34

movement. Second, there's a social

107:36

aspect of a walkable city where you're

107:38

able to interact with people. You'll

107:40

make uh friends by just happening to be

107:42

in the same place as others. Let's

107:44

actually build uh walkable cities and if

107:47

we want to you know uh transport faster

107:50

we'll have these self-driving cars on

107:52

the perimeter of the city that are going

107:53

around and I think there are ways in

107:56

which technology can afford us to uh to

107:59

live uh in a way that

108:02

reverses that, I guess, in a more local way.

108:06

I I like that vision, but I also am

108:08

aware that there's a different vision,

108:10

right? You see people in Palo Alto, for

108:12

example, actually exerting, you know,

108:15

very strong controls on how much their

108:17

children are exposed to, uh, you know,

108:20

to phones. And I live in Palo Alto.

108:22

Yeah. So, so you see that. On the other

108:24

hand, what I am worried about is that

108:28

the elites of Palo Alto don't realize

108:32

that what they're doing is they're

108:34

figuring out how to reduce the harm to

108:36

their own families as they're exporting

108:38

the harm to the world of these

108:40

technologies that for everybody else are

108:42

unregulated. And so the question is, can

108:45

we bring everybody along? If the AI

108:47

revolution is going to alter our

108:49

relationship to work and everything

108:52

else, can we bring everybody along so

108:54

that at the end of this process instead

108:56

of saying well you know it's a shame

108:58

that uh you know three billion people

109:01

were sacrificed to this transition but

109:03

progress is progress we can really say

109:05

well we figured it out and everybody now

109:07

is living in a style that is closer to

109:10

their programming and closer to the

109:12

expectations of their physical bodies.

109:15

You know, if that were true, then I

109:16

would I would be I would love to be

109:18

wrong in my fears about what's coming.

109:21

Um, but unfortunately, the market is not

109:24

going to solve this problem without our

109:25

being deliberate about forcing it to.

109:30

What's your biggest fear? Like when you

109:32

say my fears about what's coming, what

109:33

do you like what's what's the picture

109:35

that comes in your mind? Oh, it's a

109:36

whole different topic actually. Um my my

109:39

fear stemming from technology

109:43

uh and AI is that this is a runaway

109:48

process and that that runaway process is

109:50

going to interface very badly with some

109:53

latent human programs. that in effect

109:56

the need for

109:58

workers largely disappears and the

110:01

people who are at the head of the

110:03

processes that result in that

110:05

elimination of the need for workers

110:07

start talking about useless eaters.

110:08

Maybe they come up with a new term this

110:10

time. Thin the herd. Yep. Or they allow

110:12

it to be thinned or something. Right.

110:14

I've heard you talk about the five key

110:17

concerns you have or the five key

110:19

threats you have before. Could you name

110:21

those five? So the first one is the one

110:23

I worry least about. I don't worry zero

110:25

about it, but I worry least about it,

110:27

which is the malevolent AI uh that the

110:29

doomers are so focused on. The second

110:33

one is the idea that you know an AI can

110:36

be misaligned not because it has

110:38

divergent interest but because it just

110:40

misunderstands what you've asked it.

110:41

these autonomous agents. You know, the

110:43

famous example is you ask them to

110:45

produce as many paper clips as possible

110:47

and they start liquidating the universe

110:48

to make paper clips and you know, it's a

110:50

it's a sorcerer's apprentice kind of

110:52

issue. The third one I would say

110:56

is actually all of the remainder of them

110:58

I would say are guaranteed and

111:02

um the third of them is the derangement

111:07

of human intellect that we are already

111:11

living in a world where it's very

111:12

difficult to know what the facts even

111:16

mean. Right? The facts are so

111:18

filtered and we are so persuaded by

111:21

algorithms that it's you know our

111:24

ability to be confident even in the

111:25

basic facts even within our own

111:27

discipline sometimes is uh at an

111:29

all-time low and it's getting worse and

111:32

that problem takes a giant leap forward

111:35

at the point that you have the ability

111:39

to generate undetectable deep

111:42

fakes. Right? That's going to alter the

111:45

world very radically when the fact that

111:47

you're looking at videotape of somebody

111:49

robbing a bank doesn't mean that they

111:51

robbed a bank or that a bank was even

111:53

robbed. Um, so anyway, I call this we

111:56

deal with this a lot by the way. I I

111:57

think every single week, every single

112:00

week, I have a chat with people that

112:02

just are now basically spending I'd say

112:04

30% of their time dealing with deep

112:06

fakes of me doing crypto scams, inviting

112:09

people to Telegram groups, and then

112:11

asking them for credit card details. We

112:13

had one on X. I think you probably saw

112:15

it, Dan, didn't you of me? But that

112:17

someone was running deep fake ads on

112:18

X of me. And it wasn't just one ad. It

112:21

was like there were it was like swatting

112:22

flies. There was 10 of them. And I

112:24

messaged them to X and there was 10

112:26

more. Then the day after there was 10

112:27

more. Then the day after there was 10

112:29

more. Then it started happening on on

112:30

Meta. So it's a video of me basically

112:33

asking you to come to a Telegram group

112:34

where people are being scammed and

112:36

audience members of mine are being

112:37

scammed. And when I send them to Meta,

112:39

they thankfully remove them. But then

112:41

there's five more. And I went on

112:42

LinkedIn yesterday and my DMs are,

112:44

"Steve, by the way, there's this new

112:45

scam." And I actually at this point I c

112:47

I'd need someone fulltime just sending

112:50

this over to Meta. I'm the I'm the same

112:52

but on a smaller scale. Every week it's

112:55

did you really message me on Facebook

112:57

asking me for my crypto wallet and blah

112:59

blah blah. My my least favorite ones are

113:01

when the single mother messages me

113:03

saying that she just paid £500 of her

113:05

money and how devastated she is and I

113:07

feel this moral obligation to give her

113:09

her money back um because she's fallen

113:11

for some kind of scam. That was me. It

113:13

was my voice. It was a video of me

113:14

telling her something. Yeah. And I don't

113:16

know I don't know how you deal with that

113:17

but sorry do continue. Well, I mean

113:19

that's actually on the list here. The

113:21

massive disruption to the way things

113:22

function both because people are going

113:24

to be unemployed in huge numbers and

113:27

because those who are not abiding by our

113:30

social contract are going to find

113:32

themselves empowered more than the

113:34

people who do. So in this case, not only

113:38

is this poor woman, you know, now out

113:40

500 bucks for whatever the scam was, but

113:44

you've also been robbed whether or not

113:46

you pay her back for the thing that she

113:48

thought she purchased. Your credibility

113:51

is being stolen by somebody and you have

113:53

no capacity to prevent it. This has

113:55

happened to me also and it is profoundly

113:58

disturbing and it is only one of a dozen

114:00

different ways that AI enables those who

114:06

are absolutely willing to shrink the pie

114:11

from which we all derive in order to

114:14

enlarge their slice. you know, there

114:16

there are innumerable ways that this can

114:18

happen and um I think people do not see

114:21

it coming. They don't understand how

114:22

many different ways they are going to be

114:26

robbed every bit as surely as if

114:27

somebody was printing money. Um and then

114:30

the last one is that this just simply

114:33

accelerates demographic

114:36

uh processes that do potentially result

114:39

in the unleashing of technologies that

114:43

pre-existed AI. you know, this this can

114:45

easily result in an escalation

114:48

uh into wars that turn nuclear. Um, so

114:52

anyway, I think that list could probably

114:55

be augmented at this point now that

114:57

we've, you know, spent a little time in

114:59

the AI era. We can begin to put a little

115:01

more flesh on the bones both of what is

115:04

possible in this era and what we should

115:06

fear. One of those you you mentioned uh

115:08

truth, you know, the problem of truth.

115:11

Would you say just a thought experiment,

115:14

someone today like an average person,

115:17

say, a college-educated

115:18

person, are they more

115:21

propagandized or led astray

115:26

than someone in Soviet Russia?

115:30

Well, I don't know because I didn't live

115:32

in Soviet Russia, but my understanding

115:34

from people who did was that there was a

115:39

wide awareness that the propaganda

115:42

wasn't true. Doesn't mean they knew what

115:43

to believe, but there was a cynicism,

115:46

which is one of my fears here, is that

115:48

the, you know, you're really stuck

115:50

choosing between two bad options in a

115:54

world where you can't tell what is true.

115:55

You can either be overly credulous and

115:57

be a sucker all the time, or you can

115:59

become a cynic and you can be paralyzed

116:01

by the fact that you just don't believe

116:02

anything. But neither of those is a

116:05

recipe for do you think Google search

116:06

first and maybe now chat GPT has helped

116:09

people more or

116:11

less to find truth? I

116:15

think it's not chat GPT exactly but all

116:18

of the various AI engines,

116:20

starting with Google, have briefly

116:22

enhanced our capacity to know what's

116:24

true because in fact they allow us to

116:26

see through the algorithmic manipulation

116:29

because the AI is not well policed. You

116:33

can get it to recognize patterns that

116:35

people will swear are not true. Um, and

116:38

so anyway, a lot of us have found it

116:40

useful in just simply unhooking the

116:42

gaslighting. Um, so that's been very

116:44

positive. But I also remember the early

116:47

days of search and search used to be a

116:50

matter of there are some pages out

116:53

there. I don't know where they are.

116:54

Here's a mechanized something that's

116:56

looked through this stuff and just point

116:58

me at the direction of things that

116:59

contain these words. Right before the

117:02

algorithmic manipulation started

117:03

steering us into believing pure nonsense

117:06

because somebody who controlled these

117:08

things decided it was useful for us to

117:10

believe those things. So my guess is at

117:13

the moment AI is enhancing our ability

117:16

to see more clearly but that really

117:18

depends on some kind of agreement to

117:21

protect that capacity that I'm not aware

117:24

of us having. Are you implying there that

117:27

AI will protect us from AI i.e. the

117:30

woman that got scammed in my audience,

117:33

the platforms would have a tool built in

117:34

which would be able to identify shortly

117:37

that that is not me and the ad is been

117:40

launched by someone in another country

117:42

potentially and then also when she

117:43

starts being asked for her credit card

117:45

details in such a way on Telegram 10

117:48

minutes later the system will be able to

117:49

understand that this is probably a

117:51

scam at that touch point too, and

117:53

it will also be the defense not just the

117:55

offense.
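
To make the layered check being described concrete, here is a deliberately naive Python sketch of a rule-based filter of the kind a platform might run. It is illustrative only, not how Meta, X, or Telegram actually work; the watchlist and phrase list are hypothetical.

```python
# Naive sketch: flag a message that both impersonates a known public figure
# and asks for payment details. Real systems would use learned classifiers,
# media forensics, and account signals rather than keyword lists.
KNOWN_FIGURES = {"steven bartlett"}  # hypothetical watchlist
PAYMENT_PHRASES = ("credit card", "crypto wallet", "bank details")

def flag_message(claimed_sender: str, text: str) -> bool:
    impersonation = claimed_sender.lower() in KNOWN_FIGURES
    payment_ask = any(p in text.lower() for p in PAYMENT_PHRASES)
    return impersonation and payment_ask

print(flag_message("Steven Bartlett", "Join my Telegram and send your credit card"))  # True
print(flag_message("Steven Bartlett", "New episode out now"))                         # False
```

The point of the toy is the layering: the impersonation signal alone and the payment signal alone are weak, but their conjunction is a much stronger indicator of the scam pattern described here.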

117:56

First thing, uh, the question: is Meta

117:59

incentivized to solve this problem? Yes.

118:03

Yes. And so Meta is probably actively

118:06

working on AIs and again it's going to

118:08

be a cat and mouse game like every abuse

118:10

that happens out there. So I I think

118:13

that the market will naturally respond

118:15

to things like that in the same way that

118:18

you know we installed antiviruses, as

118:21

annoying as they are. I think we'll

118:23

install uh AIs on our computers that

118:27

will allow us to at least help us kind

118:29

of sort the fake from the

118:31

truth. Well, but let's let's take the

118:33

example you say. Is Meta incentivized to

118:36

solve this problem? Superficially, it

118:38

seems that it should be, but how many

118:40

times in recent history have we watched

118:42

a corporation cannibalize its own

118:45

business over what are at best the

118:47

bizarre desires of its shareholders,

118:50

right? Why was X throwing off people

118:53

with large accounts or Facebook or

118:58

Google? It would seem that you would

119:00

expect based on the market choosing

119:03

search engines or social media sites,

119:05

you would expect these companies to be

119:08

absolutely mercenary and say, you know,

119:10

if Alex Jones has a big audience, who

119:12

are we to say? That's what I would have

119:14

expected. Instead, you had these

119:18

companies policing the morality of

119:21

thought even though it reduced the size

119:25

of the population using the platforms. I

119:27

have a hard time explaining why that

119:29

happened, but I have every reason to

119:30

expect the same thing will happen with

119:32

AI. What are you excited about with AI?

119:34

What's your your optimistic take?

119:37

Because at the start of this

119:37

conversation, you said that there's

119:39

infinite ways that it could improve our

119:41

lives and there's 10 times more ways

119:43

that it could hurt our lives. But let's

119:45

investigate some of those ways that it

119:46

could drastically improve our lives.

119:48

There's a couple of different ways. One,

119:50

we have, as we mentioned before, a dearth

119:53

of competent teachers and professors.

119:57

And that is a problem that will take

119:59

three generations at least to solve if

120:02

what we're going to do is start tomorrow

120:04

and start educating people in the right

120:06

way that would make them competent to

120:07

stand at the front of a room and

120:08

educate. But if we can augment that

120:10

process, if we can leverage a tool like

120:13

AI so that you know a small number of

120:16

competent teachers can maybe reach a

120:18

larger number of pupils, that's

120:20

plausible I think. Second thing is we

120:24

have a tremendous number of problems

120:26

that are obstacles to us living well on

120:30

this planet that AI might be able to

120:33

manage that human intellect alone

120:35

cannot. Right? Just in the same way that

120:39

you know compute power can calculate

120:41

things at a rate that human beings can't

120:43

keep up and there are certain things you

120:44

want calculated very well. There are

120:46

also some reasoning problems. You could

120:49

imagine that instead of having um static

120:54

laws that govern behavior poorly because

120:59

they get gamed that you could have a

121:01

dynamic interaction. You could specify a

121:05

an objective of something like a law and

121:08

then you could monitor whether or not a

121:10

particular intervention successfully

121:13

moved you in the direction that you were

121:14

hoping to go or did something

121:16

paradoxical which happens all the time

121:18

and you could basically

121:19

have governance that is targeted to

121:22

navigation and prototyping rather than

121:24

to specifying a blueprint for how we are

121:27

to live. So we wouldn't need

121:29

politicians. Um, at the moment we're

121:32

stuck with, you know, constitutional

121:35

protections that are as good as anything yet

121:38

constructed and still inadequate to

121:41

modern realities.
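
The "governance as navigation" idea here can be pictured as a feedback loop: state an objective metric, trial an intervention, measure whether it moved the metric toward the objective, and roll it back if the effect was paradoxical. A toy Python sketch, with every metric and effect invented purely for illustration:

```python
# Toy sketch of monitor-and-adjust governance: keep an intervention only if
# the measured metric moves toward the stated objective. All numbers are
# made up; a real system would need careful measurement and noise handling.
metric = 10.0      # e.g. some harm index we want driven toward zero
objective = 0.0

candidate_interventions = {"policy A": -3.0, "policy B": +2.0, "policy C": -1.5}

for name, effect in candidate_interventions.items():
    trial = metric + effect                      # trial the intervention
    if abs(trial - objective) < abs(metric - objective):
        metric = trial                           # it helped: keep it
        print(f"keep {name}: metric now {metric:.1f}")
    else:
        print(f"roll back {name}: paradoxical effect ({trial:.1f})")
```

The contrast with a static law is that nothing here is specified as a fixed blueprint; each intervention is kept or discarded based on its measured effect.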

121:43

Dan, what are you excited about with AI

121:45

from an individual level, but also from

121:47

a societal level? Yeah. Well, the big

121:49

ones are healthcare and education. I

121:51

mean, it's ridiculous that you uh are

121:54

sitting there in pain, having had an

121:56

MRI, and there just hasn't been someone

121:58

to look at that MRI yet. and and tell

122:00

you what to do. Um and that could easily

122:03

be solved there's all sorts of

122:04

healthcare issues where um and also not

122:07

only that throughout the entire world

122:09

there are places that just don't have

122:10

general practitioners and they don't

122:12

have you know medical advisers and and

122:15

you know the breakthroughs in global

122:16

healthcare will be phenomenal and the

122:18

breakthroughs in global education could

122:20

be transformational um on the planet. I

122:23

I'm excited at an individual level that

122:26

I think the industrial age created a

122:28

bunch of jobs that are very dehumanizing

122:30

and we've just kind of gotten used to

122:32

them and put up with them. The idea that

122:33

work should be repetitive and you know

122:36

you just repeat the same loop over and

122:38

over and over again and over a

122:40

10-year period of time you might get you

122:42

know graduated up one gear and all that

122:44

kind of stuff. I don't think that's very

122:46

human. Um the idea that you could be

122:49

simultaneously writing a book, launching

122:51

a business, running a team, launching a

122:54

festival, having an event. Um that that

122:57

that you could actually be doing this

122:59

kind of like mini kingdom work where

123:01

you've got this little, you know, uh

123:04

ecosystem around you of fun things that

123:06

you're involved in that is actually made

123:08

possible for a vast majority of people

123:10

if they embrace these kind of tools. um

123:13

you can live an incredibly fulfilling

123:15

and amazing and impactful existence or I

123:19

know that I do as a result of having

123:21

these tools in my life. Like I'm I'm

123:23

doing things that I could have only

123:24

dreamed about uh as a kid. And what

123:26

would you say to entrepreneurs? I know

123:28

you you work with thousands of

123:29

entrepreneurs. What are you telling them

123:31

in terms of their current businesses or

123:33

business opportunities that you're

123:34

foreseeing? So I think that small teams

123:36

have infinite leverage now and that when

123:39

you have a team of say five to 10 people

123:43

who share an incredible passion for a

123:45

meaningful problem in the world and they

123:47

want to see that meaningful problem

123:48

solved and they come together in the

123:50

spirit of entrepreneurship to solve that

123:52

problem. That little 5 to 10 person team

123:56

armed with the technology that we now

123:57

have available, you can have a a big

124:01

impact. You can make a lot of money. You

124:03

can have a lot of fun. you can solve

124:04

meaningful problems in the world. You

124:06

can scale solutions. You can probably do

124:08

more in a three-year window than most

124:10

people did in a 30-year career. Uh and

124:13

then that little band of 5 to 10 people

124:15

could either go together onto a new

124:18

meaningful problem or they could disband

124:20

and you know work on other meaningful

124:22

problems with different teams. In such a

124:25

world where you have this sort of

124:26

infinite leverage but everyone else has

124:29

access to the same infinite leverage.

124:31

What becomes the USP? Going back to this

124:33

idea of the moat, like what is the thing

124:34

of value when we've all got access to

124:36

$20 infinite leverage? Well, first of

124:39

all, the first thing you

124:41

need to understand is that this moment

124:43

of time is the least competitive uh

124:47

moment. Like if you understand how to

124:49

use these tools, you can start making

124:51

money tomorrow. Like, you know, I see

124:53

countless examples of people making

124:56

thousands of dollars with these hustles

124:57

that I that I talked about or building

125:00

businesses that generate millions of

125:01

dollars in the first couple of months of

125:03

existence. So, I would say start moving

125:05

now. Start building things. So, it's an

125:08

unprecedented time of wealth

125:11

creation. Clearly at some point as the

125:15

market gets more efficient as people

125:17

more and more people understand how to

125:18

use these tools um there's less

125:22

potential for uh you know creating these

125:25

massive businesses quickly and we've

125:27

seen this like the dawn of the internet

125:29

or dawn of the web you know it was a lot

125:31

easier to create Facebook than it is now

125:34

then we had mobile and for three four

125:37

five years it was very easy to create

125:40

massive businesses and then it became

125:42

harder. Being just at the edge of what's

125:44

possible is going to be very very

125:46

important over the next couple years.

125:48

And that's that gets me really excited

125:49

because the entrepreneurs who are paying

125:51

attention are going to be

125:53

having the most amount of fun, but

125:55

they're also going to be able to make a

125:56

lot of money. How many applications have

125:59

been built on Replit to date? So, you

126:02

know, I can talk about the millions of

126:03

things that have been built

126:05

since we started the company, but just

126:07

since uh September when we launched

126:10

Replit Agent, there's been about 3 million

126:12

applications built purely in natural

126:16

language with no coding at

126:20

all, purely natural language. Of

126:22

those, I think 300,000 to 400,000 of them

126:25

were deployed for real use: the site

126:30

was deployed and people are

126:33

using it for some kind of business, some kind

126:35

of internal tool. I built one last night

126:37

by the way an internal tool or I uh

126:39

built an application to track um how my

126:43

kids earn pocket money. Amazing. So, I

126:46

just told it that I wanted to track the

126:48

tasks that are happening around the

126:49

house and assign a value to them

126:51

and I want to be able to at the end of

126:52

the week push a button and get a summary

126:54

of how much to pay each child um for

126:57

their pocket money. We are so screwed.

127:02

And within 15 minutes, it had created

127:04

this application and it was amazing.

127:06

Like you could toggle between like

127:08

here's the place where you have the kids

127:10

and here's the weekly reports and here's

127:12

the um how much per task and you can

127:15

tick off the tasks or remove tasks or

127:17

add tasks. So then I now have this

127:19

application, which took 15 minutes of

127:23

just talking about what I wanted and now

127:25

I have an application to run the pocket

127:27

money situation in the house.
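
For a sense of how little logic such an app actually needs, here is a minimal Python sketch of the chore tracker described above: tasks with assigned values, ticked off per child, and a push-button weekly summary. The class and names are invented for illustration; the generated app would of course differ.

```python
# Minimal sketch of the pocket-money tracker (illustrative, not the actual
# generated app): tasks carry a value, children tick them off, and a weekly
# summary totals what each child is owed, then resets the week.
from collections import defaultdict

class PocketMoneyTracker:
    def __init__(self):
        self.task_values = {}               # task name -> payout
        self.completed = defaultdict(list)  # child -> completed task names

    def add_task(self, name, value):
        self.task_values[name] = value

    def remove_task(self, name):
        self.task_values.pop(name, None)

    def tick_off(self, child, task):
        if task not in self.task_values:
            raise ValueError(f"unknown task: {task}")
        self.completed[child].append(task)

    def weekly_summary(self):
        # The "push a button" step: total owed per child for the week.
        owed = {child: sum(self.task_values[t] for t in tasks)
                for child, tasks in self.completed.items()}
        self.completed.clear()  # start the next week fresh
        return owed

tracker = PocketMoneyTracker()
tracker.add_task("dishes", 2.0)
tracker.add_task("mow the lawn", 5.0)
tracker.tick_off("Alice", "dishes")
tracker.tick_off("Alice", "mow the lawn")
print(tracker.weekly_summary())  # {'Alice': 7.0}
```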

127:29

And this, by the way,

127:31

having run an IT agency years ago.

127:34

That's something that we would have

127:35

charged five to 10,000 pounds to create,

127:38

or 5 to 10,000 US dollars, to create

127:42

and how much time probably talking

127:44

something that would have been a three

127:46

four week project and we're at the start

127:49

of the S-curve now that you're

127:50

describing, and it's already cheap: if

127:53

it's $20 (Replit's roughly $20 a month,

127:55

25 for the base case) and you did one

127:58

day of usage let's say it's a dollar it

128:01

cost you and it cost you minutes and a

128:03

dollar now and we're at the start of the

128:04

S-curve. And you talk to it like

128:06

you're chatting to a developer. So one

128:08

of the things that slows down the

128:10

development process is you have to send

128:12

the information to a developer and they

128:14

need to understand it and then they need

128:15

to create something and then come back

128:16

to you. This just happens in front of

128:18

your eyes uh while you're watching it

128:20

and it's actually showing you what's

128:21

being built and it's really wild.

128:25

This one change has transformed how my

128:27

team and I move, train and think about

128:29

our bodies. When Dr. Daniel Lieberman

128:31

came on The Diary of a CEO, he explained how modern

128:34

shoes with their cushioning and support

128:36

are making our feet weaker and less

128:38

capable of doing what nature intended

128:40

them to do. We've lost the natural

128:42

strength and mobility in our feet and

128:44

this is leading to issues like back pain

128:46

and knee pain. I'd already purchased a

128:49

pair of Vivobarefoot shoes. So, I

128:50

showed them to Daniel Lieberman and he

128:52

told me that they were exactly the type

128:54

of shoe that would help me restore

128:55

natural foot movement and rebuild my

128:57

strength. But I think it was

128:58

plantar fasciitis that I had where suddenly

128:59

my feet started hurting all the time.

129:01

And after that I decided to start

129:02

strengthening my own foot by using the

129:04

Vivobarefoot. And research from

129:06

Liverpool University has backed this up.

129:07

They've shown that wearing Vivobarefoot

129:09

shoes for 6 months can increase foot

129:11

strength by up to

129:13

60%. Visit

129:15

vivobarefoot.com/doac and use code diary 20

129:19

from my sponsor for 20% off. A strong

129:21

body starts with strong feet. This has

129:25

never been done before. A newsletter

129:28

that is run by 100 of the world's top

129:31

CEOs. All the time people say to me,

129:34

they say, "Can you mentor me? Can you

129:35

get this person to mentor me? How do I

129:37

find a mentor?" So, here is what we're

129:39

going to do. You're going to send me a

129:41

question. And the most popular question

129:42

you send me, I'm going to text it to 100

129:46

CEOs, some of which are the top CEOs in

129:49

the world running a hundred billion

129:50

dollar companies. And then I'm going to

129:52

reply to you via email with how they

129:54

answered that question. You might say,

129:57

"How do you hold on to a relationship

129:59

when you're building a startup? What is

130:00

the most important thing if I've got an

130:02

idea and don't know where to start?" We

130:04

email it to the CEOs. They email back.

130:05

We take the five, six top best answers.

130:07

We email it to you. I was nervous

130:09

because I thought the marketing might

130:10

not match the reality. But then I I saw

130:12

what the founders were replying with and

130:14

their willingness to reply and I thought

130:15

actually this is really good and all

130:17

you've got to do is sign up completely

130:20

free. I don't think we've spent a lot of

130:23

time talking about autonomous weapons.

130:25

This is the thing that really worries

130:27

me. And the thing that worries people

130:30

about AI is this idea that it is uh

130:34

this you know emergent system and

130:36

there's no one thing behind it and it

130:38

can act in a way that's uh

130:41

unpredictable and not really guided by

130:43

humans. I also think it's true of

130:44

corporations and governments, and so I

130:47

think individual people uh can often

130:50

have the best intentions but the

130:52

collective can land on doing things in a

130:56

way that's harmful or morally repugnant

131:00

and I think um we talked about China

131:03

versus the US and that creates a certain

131:06

race dynamics where um they're both

131:09

incentivized to cut corners and

131:11

potentially do harmful things and in

131:14

the world of geopolitics

131:17

um and wars, you know, what really

131:20

scares me is autonomous weapons. And

131:23

why does it scare you? Because

131:28

uh you know you can imagine

131:32

uh autonomous drones being trained on

131:36

someone's face and you can send a

131:39

swarm of drones and they can be this

131:43

um sort of autonomous killing

131:45

assassination machine and it can sort of

131:48

uh function as a you know country-versus-

131:51

country technology in the world of

131:54

war, which is still crazy, but it can also

131:58

become a tool for governments to

132:01

subjugate the citizens and and people

132:04

think we're safe in the west but I

132:08

think the experience with COVID showed that

132:11

even the systems in the west can very

132:16

quickly become draconian. Yeah.

132:18

Apparently, I've heard in um Iran that

132:22

uh they have facial recognition cameras

132:25

that detect whether women are wearing

132:27

hijabs in their own cars and it

132:30

automatically detains the car. If you're

132:34

driving and you're not wearing a hijab

132:36

and if you're certainly if you're

132:38

walking down the street, it just picks

132:40

that up and immediately you're in

132:42

trouble. Uh, it acts as a

132:45

police officer and a judge and you know

132:49

a lawmaker. It's the judge, jury, and

132:52

executioner essentially and it's just

132:54

happens instantaneously what happened in

132:56

Canada with the truckers uh uh sort of

133:00

protest where they froze their bank

133:02

account by virtue of just being there

133:03

just by being in that location and just

133:06

to confirm that Iran has implemented a

133:08

comprehensive surveillance system to

133:09

enforce its mandatory hijab laws

133:12

utilizing various technologies, one

133:13

of which is cameras and facial

133:15

recognition. So they've put cameras in

133:17

public spaces to identify women who are

133:19

not adhering to the hijab dress code.

133:22

Yeah. And just on that, London has just

133:25

put those facial recognition

133:27

systems into London and also all

133:29

throughout Wales. um and they're being

133:31

rolled out at speed

133:34

and like all you would need is a change

133:37

of government that wanted to implement

133:39

something similar and all the base layer

133:42

technology is already in there. It gets a

133:43

little bit worse in Iran because they

133:45

have this new app called the Nazar app

133:46

where the government has introduced the

133:48

Nazar mobile application which allows

133:49

you as a citizen to report another

133:51

citizen who is not wearing their hijab

133:53

and it logs their location, their

133:55

time um when they weren't wearing it and

133:57

the vehicle license plate with the

133:59

crowdsourced data. It can then go after

134:01

that individual. I would also just point

134:04

out that I think we're not being

134:05

imaginative enough. I agree with you. I

134:07

have the same concern about these

134:09

autonomous weapons, but I also think

134:11

this doesn't have to occur in the

134:13

context of war or even governmental

134:16

oppression that it is perfectly

134:17

conceivable that effectively

134:20

this drops the price of an

134:24

undetectable or an unprosecutable crime.

134:27

And maybe economic moats return in the

134:30

form of people taking out their

134:32

competitors or anybody who attempts to

134:33

compete with them using an autonomous

134:35

drone that can't be traced back to them.

134:38

You know, that follows facial

134:39

recognition. And you know, you don't

134:41

have to kill very many people for others

134:42

to get the message that uh this is a a

134:45

zone that uh you shouldn't mess around

134:47

in. So, I could imagine, you know,

134:50

effectively a new high-tech organized

134:53

crime that uh runs protection rackets and makes

134:58

tons of money and subjugates people who

135:00

haven't done anything wrong. I had

135:02

Mustafa Suleyman on the podcast in 2023

135:04

when all of this stuff started

135:06

kicking off and he is the CEO of

135:08

Microsoft AI. You're familiar with

135:10

Mustafa? Of course. Yeah. Um and he one

135:12

of the things he said to me at the time

135:13

was one of my fears is a tiny group of

135:15

people who wish to cause harm are going

135:17

to have access to tools that can

135:19

instantly destabilize our world. That's

135:21

the challenge. How to stop something

135:23

that can cause harm or potentially kill.

135:25

That's where we need containment. And it

135:27

sounds a little bit like what you're

135:28

saying, Amjad, that we will now have these

135:31

tools. You were talking in the

135:32

context of the military, but as Brett

135:35

said there, even smaller groups of

135:36

people that might have been, I don't

135:38

know, cartels or gangs can do similar

135:41

harm. And at the moment, in terms of

135:43

autonomous weapons, both the US and

135:45

China are investing heavily in AI

135:47

powered weapons, autonomous drones, and

135:48

cyber warfare because they're scared of

135:50

the other one getting it first. And we

135:53

talked about how much of our lives

135:54

run on the internet, but cyber weapons

135:56

and cyber AI agents that could be

135:59

deployed to take down China's X, Y, or

136:02

Zed or vice versa are a real concern.

136:05

Yeah. Yeah. I I think all of that is is

136:10

um is a real concern. You know, unlike

136:12

Mustafa, I I don't think containment is

136:14

is possible. Part of the reason why this

136:18

game theoretic system uh of competition

136:21

between the US, China, corporations,

136:26

individuals makes it so that this

136:28

technology is already, you know, is

136:31

already out and really hard to put it

136:33

back in the bag. I did ask him

136:36

this question and I remember the answer

136:37

because it was such a stark moment for

136:38

me. I said to Mustafa, "Do you think

136:40

it's possible to contain it?" And he

136:41

replied, "We must." So I asked him

136:43

again. I said, "Do you think it's

136:44

possible to contain it?" and he replied

136:45

we must and I asked him again I said do

136:47

you think it's possible we must so the

136:49

problem with that uh uh chain of

136:51

thinking is that it might lead to an

136:53

oppressive system

136:55

uh there is uh one of the say doomers or

136:58

philosophers of AI whose work I respect.

137:01

His name is Nick Bostrom, and

137:03

he was trying to think of

137:08

ways in which we can contain AI and the

137:13

thing that he came up with is perhaps

137:16

more oppressive than something that the

137:18

AI would come up with: a total

137:20

surveillance state. You need total

137:22

surveillance on compute, on people's

137:25

computers, on people's ideas, to not

137:27

invent AI or AGI. It's like taking the

137:30

guns or something or Right. Exactly. I

137:32

mean there's always this

137:34

problem of containing any sort of

137:36

technology: you do need um

137:39

oppression and draconian policies to do

137:42

that. Are you scared of anything else or

137:44

concerned about anything else as it

137:45

relates to AI outside of autonomous

137:47

weapons? You know, we talked earlier

137:49

about

137:50

the birthrate crisis and I think a more

137:54

generalized problem there is creating

137:58

virtualized environments

138:01

uh via VR where everyone is living in

138:04

their own created universe and uh it's

138:09

so enticing, and it even simulates

138:12

work and simulates struggle uh such that

138:15

you don't really need to leave this

138:17

world and so every one of us will be

138:20

solipsistic, you know, similar to the

138:21

Matrix. Ready Player One. Ready Player

138:24

One. We're all kind of uh plugged in, even

138:26

worse than Ready Player One. At least

138:28

that's a massively networked

138:30

environment. I'm talking about AI

138:32

simulating everything uh for us and

138:36

therefore you're literally in the

138:37

matrix. You know, maybe this is it. I

138:40

was about to say, I had that same thought. I've

138:42

enjoyed this great simulation. Yes. And

138:45

so I mean are you familiar with

138:48

the Fermi paradox? No, I'm not. So

138:50

Fermi's paradox is um the question uh

138:53

the you know professor, his name is uh

138:56

Fermi, he asked the question uh if the

139:00

universe is that vast then where are

139:03

the aliens? The fact that humans exist,

139:07

you can deduce that other civilizations

139:10

exist. And if they do exist, then why

139:14

don't we see them? And then that spurred

139:17

a bunch of Fermi solutions. So there's

139:20

uh I don't know, you can find hundreds

139:21

of solutions on the internet. One of

139:24

them is the uh sort of house cat

139:26

thought experiment where actually aliens

139:29

exist, but they kind of put us in an

139:31

environment like the Amish in a certain

139:34

time and do not expose us to what's

139:37

going on out there. So we're pets.

139:39

Maybe they're watching us and kind of

139:40

enjoying uh what we're doing, stopping

139:42

us from hurting ourselves, stopping us

139:44

from hurting ourselves. There are so

139:45

many things, but one of the things that

139:48

I think is potentially a solution to the

139:51

Fermi paradox and one of the saddest

139:54

outcomes is that civilizations progress

139:58

until they invent technology that will

140:01

lock us into infinite pleasure and

140:03

infinite simulation such that we

140:07

don't have the motivation to go into

140:11

space to seek out

140:14

exploration, potentially other alien

140:17

civilizations. And perhaps that is a

140:20

determined outcome of humanity or like a

140:24

highly likely outcome of any species

140:27

like humanity. We like pleasure.

140:29

Pleasure and pain are the main

140:30

motivators. And so if you create an

140:33

infinite pleasure machine, does that

140:35

mean that we're just at home in our VR

140:38

environment with everything taken care

140:40

of for us, literally like the matrix, and

140:42

the real world would suck in

140:44

such a scenario? Yes. It'd be terrible. I

140:46

mean the other simpler explanation of

140:48

the Fermi paradox is that you generate

140:51

sufficient technology that you can end

140:53

your species and it's only a matter of

140:55

time from that point which you know we

140:57

can have that discussion about nuclear

140:59

weapons. We can have it about AI, but

141:02

does some technology, if we stay on

141:04

that escalator, does some technology

141:06

that we generate ultimately whatever

141:08

allows you to get off the planet allows

141:10

you to blow up the planet? There you go.

141:12

I want to get everyone's closing

141:13

thoughts and closing remarks. And

141:16

hopefully in your closing remarks, you

141:17

can capture something actionable for the

141:20

individual that's listening to this now

141:22

on their commute to work or the single

141:24

mother, the average person who maybe

141:25

isn't as technologically advanced as

141:27

many of us at this table, but is trying

141:29

to navigate through this to figure out

141:31

how to live a good life over the next

141:33

10, 20, 30 years. Yeah. Take as long as

141:37

you need. I think we live in the most uh

141:40

interesting time in human history. So

141:43

for the single mother that's listening,

141:45

for someone who wouldn't be the

141:46

stereotype of a tech bro, don't assume

141:49

that you can't do this stuff. It's never

141:52

been more accessible than today. Within your

141:54

work. You can be an entrepreneur. You

141:56

don't have to take massive risk to go

141:58

create a business, um, by quitting your

142:02

job to go create a business. There are

142:04

countless examples. We uh we have a user

142:06

who's a product manager at a larger real

142:09

estate business and he built something

142:12

that created a 10% lift in conversion

142:15

rates which generated millions and

142:17

millions of dollars of that business and

142:19

that person became a celebrity at that

142:21

company and became someone who is

142:23

lifting everyone else up and teaching

142:25

them how to use these tools and

142:27

obviously that that is like a really

142:29

great for for anyone's career and you're

142:31

going to get a promotion and your

142:33

example of building a piece of software

142:36

for your family for your kids to to

142:39

improve and to learn more to be

142:42

better kids, uh, as an example of being an

142:44

entrepreneur in your family. So I really

142:48

want people to break away from this

142:51

concept of entrepreneurship. This

142:54

is your podcast, The Diary of a CEO. You

142:56

started this podcast by talking to CEOs

142:59

I assume right and over time uh it

143:02

changed to everyone can be a CEO

143:05

everyone is some kind of CEO in their

143:07

life and so uh I think that we

143:12

have unprecedented access to tools for

143:15

that vision to actually come to reality.

143:19

Well, it is obviously a moment of a kind

143:22

of human phase transition. Something

143:25

that I believe will be the equal of a

143:28

discovery of farming or writing or

143:34

electricity. And the darkness that I

143:38

think is valid in looking at all of the

143:40

possible outcomes of this scenario is

143:43

actually potentially part of a different

143:45

story as well. In evolutionary biology,

143:47

we talk about an adaptive landscape in

143:50

which a niche is represented as a peak

143:54

and a higher niche, a better niche is

143:57

represented as a higher peak. But to get

143:59

from the lower niche to the higher

144:00

niche, you have to cross through what we

144:02

call an adaptive valley. And there's no

144:04

guarantee that you make it through the

144:05

adaptive valley. And in fact, the

144:08

drawing that we put on the board, I

144:09

think, is overly hopeful because it

144:11

makes it in two dimensions. It looks

144:13

like you know exactly where to go to

144:15

climb that next peak. And in fact, it's

144:17

more like the peaks are islands in an

144:20

archipelago that is in fog where you

144:23

can't figure out what direction that

144:25

peak is and you have to reason out it's

144:27

probably that way and you hope not to

144:29

miss it by a few degrees. But in any

144:32

case, that darkness is exactly what you

144:36

would expect if we were about to

144:38

discover a better phase for humans. And

144:42

I think we should be very deliberate

144:43

about it this time. I think we should

144:45

think carefully about how it is that we

144:47

do not allow the combination of this

144:50

brand new extremely powerful technology

144:53

and market forces to turn this into some

144:57

new kind of enslavement. And I don't

145:00

think it has to be. I think the

145:02

potential here does allow us to refactor

145:05

just about everything. Maybe we have

145:07

finally arrived at the place where

145:10

mundane work doesn't need to exist

145:12

anymore and the pursuit of meaning can

145:14

replace it. But that's not going to

145:16

happen automatically if we don't figure

145:18

out how to make it happen. And I hope

145:21

that we can recognize that the peril of

145:24

this moment is best utilized if it

145:27

motivates us to confront that question

145:30

directly.

145:32

Each one of us has two parents, four

145:34

grandparents, eight great-grandparents,

145:37

16, 32, 64. You've got, like,

145:41

this long line of ancestors who all had

145:44

to meet each other. They all had to

145:45

survive wars. They all had to survive

145:48

illness and disease. Everything had to

145:51

happen for each individual one of us. All

145:53

of this stuff had to happen for us

145:55

to get here. And if we think about all

145:57

of those thousands and

145:59

thousands of people, every single one of

146:01

them would trade places in a heartbeat

146:03

if they had the opportunity to be alive

146:05

at this particular moment. They would

146:07

say that their life was struggle and

146:11

disease, that their life was a lot of

146:14

mundane and meaningless work. It was

146:16

dangerous. You know, every single one of

146:18

us has probably got ancestors that were

146:20

enslaved, probably got ancestors that

146:24

died too young, probably got

146:27

ancestors that worked in horrific

146:29

conditions. We all have that. And they

146:32

would all just look at this moment and

146:34

say, "Wow." So, are you telling me that

146:36

you have the ability to solve meaningful

146:38

problems, to come up with adventures, to

146:41

travel the world, to pick the brains of

146:44

anyone on the planet that you want to

146:45

pick the brains of? You can just listen

146:47

to a podcast. You can just watch a

146:49

video. You can talk to an AI. Like, are

146:52

you telling me that you're alive at this

146:53

particular moment? Please make the most

146:56

of that. Like, do something with that.

146:59

You know, you can sit around

147:00

pontificating about society and how

147:03

society might work. But ultimately, it

147:05

all boils down to what you do with this

147:07

moment: solving meaningful problems,

147:10

being brave, having fun, making your

147:14

little dent in the universe. You know,

147:16

that's what it's all about. And I

147:18

feel like there's an obligation to your

147:20

ancestors to make the most of the

147:22

moment.

147:24

Thank you so much to everybody for being

147:26

here. I've learned a lot and I've

147:28

developed my thinking, which is a big part of the

147:30

reason why I wanted to bring us all

147:31

together because I know you all have

147:32

different experiences, different

147:34

backgrounds and education, and you're

147:35

doing different things, but together it

147:37

helps me sort of parse through all of

147:38

these ideas to figure out where I land.

147:40

And I ask a lot of questions, but I

147:43

am actually a believer in humans.

147:46

I was thinking about this a second ago.

147:48

I was thinking: am I optimistic

147:50

about humans' ability to navigate this

147:52

just because I have no other choice?

147:54

Because as you said, the alternative

147:56

actually isn't worth thinking about. And

147:57

so I do have an optimism about how I

148:00

think we're going to navigate this in

148:02

part because we're having these kinds of

148:03

conversations, and historically we haven't

148:06

always had them at the birth of a new

148:08

revolution. When we think about social

148:09

media and the implications it had,

148:10

we're playing catch-up with the

148:12

downstream

148:14

consequences. And I am hopeful. Maybe

148:17

that's the entrepreneur in me. I'm

148:18

excited. Maybe that's also the

148:20

entrepreneur in me. But at the same

148:22

time, to many of the points Bret's

148:23

raised and Amjad's raised and Dan's

148:25

raised, there are serious considerations

148:27

as we swim from one island to another.

148:29

And because of the speed and scale of

148:32

this transformation that Bret

148:33

highlights, you look at the stats of

148:35

the growth of this technology and how

148:37

it's spreading like wildfire, and how,

148:38

once I tried Replit, I walked straight

148:41

out and I told Cozy immediately. I was

148:42

like, "Cussie, try this." And she was on

148:44

it and she was hooked. And then I called

148:45

my girlfriend in Bali, who's a breath

148:47

work practitioner and I was like, "Type

148:48

this into your browser. R E P L I T."

148:51

And then she's making these breath work

148:53

schedules with all of her clients'

148:55

information ahead of the retreat she's

148:56

about to do. It's spreading like

148:58

wildfire because we're internet native.

149:00

We were native to this technology. So

149:02

it doesn't feel like a new technology. It's

149:03

something on top of something that's

149:04

intuitive to us. So that transition, as

149:07

Bret describes it, from one peak to the

149:08

other or one island to another, I think

149:10

is going to be incredibly destabilizing.

149:12

And having interviewed so many

149:14

leaders in this space, from Reid Hoffman,

149:15

who's the founder of LinkedIn, to the CEO

149:17

of Google, to Mustafa, who I mentioned,

149:20

they don't agree on much, but the thing

149:22

that they all agree on, and that Sam

149:23

Altman agrees on, is that the long-term

149:26

future, the long-term way that our

149:28

society functions is radically

149:29

different. People squabble over

149:31

the short term. They sometimes even

149:33

squabble over the medium term or the

149:34

timeline, but they all agree that the

149:37

future is going to look completely

149:38

different. Amjad, thank you for doing

149:40

what you're doing. We didn't

149:42

get to spend a lot of time on it today,

149:43

and this is typically what I do here, but

149:45

your story is incredibly inspiring,

149:48

incredibly inspiring: where you came

149:50

from, what you've done, what you're

149:51

building. You are democratizing access and

149:53

creating a level playing field for

149:55

entrepreneurs from Bangladesh to Cape Town

149:58

to San Francisco to be able to turn

150:00

their ideas into reality. And I do think

150:02

just on the surface that that's such a

150:04

wonderful thing that, you know, I was born

150:06

in Botswana, in Africa, and that I

150:08

could have the same access to turn my

150:11

imagination into something to change my

150:13

life because of the work that you're

150:14

doing at Replit. And I highly recommend

150:16

everybody go check it out. You

150:17

probably won't sleep that night because

150:19

for someone like me it

150:21

was so addictive to be able to do

150:23

that, because the barrier to

150:24

creation has been there my whole life. I've always had

150:25

to call someone to build something. Dan,

150:28

thank you again so much because you

150:29

represent the voice of entrepreneurs and

150:30

you've really become a titan as a

150:31

thought leader for entrepreneurs in the

150:32

UK, and that perspective, that balance, is

150:35

incredibly important. So, I really

150:36

really appreciate you being here as

150:38

always, and you're a huge fan favorite of

150:40

our show. And Bret, thank you a

150:41

gazillion times over for being a human

150:45

lens on complicated challenges and you

150:48

do it with a fearlessness that I think

150:50

is imperative for us finding the truth

150:52

in these kinds of situations where some

150:54

of us can run off with optimism and we

150:56

can be hurtling towards the mouse trap

150:58

because we love cheese. And I think

151:00

you're an important

151:01

counterbalance and voice in the world at

151:04

this time. So, thank all of you for

151:05

being here. I really really appreciate

151:06

it, and we shall

151:11

see. These things live forever.

151:17

So, this has always blown my mind a

151:19

little bit. 53% of you that listen to

151:22

this show regularly haven't yet

151:23

subscribed to the show. So, could I ask

151:25

you for a favor? If you like the show

151:26

and you like what we do here and you

151:28

want to support us, the free simple way

151:29

that you can do just that is by hitting

151:31

the subscribe button. And my commitment

151:33

to you is if you do that, then I'll do

151:35

everything in my power, me and my team,

151:36

to make sure that this show is better

151:38

for you every single week. We'll listen

151:40

to your feedback. We'll find the guests

151:41

that you want me to speak to, and we'll

151:43

continue to do what we do. Thank you so

151:45

much.

151:48

[Music]

152:05

[Music]

Summary

The video discusses the profound impact of Artificial Intelligence (AI) on society, covering its potential for both immense good and significant harm. It explores the concept of AI agents and their ability to perform tasks autonomously, the implications for job displacement, the ethical considerations of AI development, and the potential for AI to reshape industries like healthcare and education. The speakers debate whether AI will lead to a utopian future of abundance or a dystopian one with widespread unemployment and societal disruption. Key themes include the unprecedented speed and scale of AI's advancement, the challenges of controlling complex AI systems, and the need for humanity to adapt and prepare for a future profoundly altered by this technology.
